Test Report: Hyper-V_Windows 20385

693540c0733dd51efa818bcfa77a0c31e0bd95f4:2025-02-10:38290

Test failures (10/214)

TestErrorSpam/setup (186.06s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -p nospam-637900 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-637900 --driver=hyperv
error_spam_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -p nospam-637900 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-637900 --driver=hyperv: (3m6.0558917s)
error_spam_test.go:96: unexpected stderr: "! Failing to connect to https://registry.k8s.io/ from inside the minikube VM"
error_spam_test.go:96: unexpected stderr: "* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/"
error_spam_test.go:110: minikube stdout:
* [nospam-637900] minikube v1.35.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5440 Build 19045.5440
- KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
- MINIKUBE_LOCATION=20385
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
* Using the hyperv driver based on user configuration
* Starting "nospam-637900" primary control-plane node in "nospam-637900" cluster
* Creating hyperv VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
* Preparing Kubernetes v1.32.1 on Docker 27.4.0 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Configuring RBAC rules ...
* Configuring bridge CNI (Container Networking Interface) ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "nospam-637900" cluster and "default" namespace by default
error_spam_test.go:111: minikube stderr:
! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
--- FAIL: TestErrorSpam/setup (186.06s)
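
What fails here is the error-spam check, not the start itself: the cluster came up in 3m06s, but "minikube start" wrote two stderr lines that are not on the test's allow-list. A minimal Go sketch of that style of check, with a hypothetical allowedStderr list standing in for the real patterns in error_spam_test.go:

    package main

    import (
        "fmt"
        "strings"
    )

    // Hypothetical allow-list; the real tolerated patterns live in error_spam_test.go.
    var allowedStderr = []string{"Enabling 'default-storageclass' returned an error"}

    // unexpectedStderr returns every non-empty stderr line matching no allowed pattern.
    func unexpectedStderr(stderr string) []string {
        var bad []string
        for _, line := range strings.Split(strings.TrimSpace(stderr), "\n") {
            allowed := false
            for _, pattern := range allowedStderr {
                if strings.Contains(line, pattern) {
                    allowed = true
                    break
                }
            }
            if line != "" && !allowed {
                bad = append(bad, line)
            }
        }
        return bad
    }

    func main() {
        stderr := "! Failing to connect to https://registry.k8s.io/ from inside the minikube VM"
        for _, line := range unexpectedStderr(stderr) {
            fmt.Printf("unexpected stderr: %q\n", line)
        }
    }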

TestFunctional/parallel/ServiceCmd/HTTPS (15.07s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1526: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-970000 service --namespace=default --https --url hello-node
functional_test.go:1526: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-970000 service --namespace=default --https --url hello-node: exit status 1 (15.0668634s)
functional_test.go:1528: failed to get service url. args "out/minikube-windows-amd64.exe -p functional-970000 service --namespace=default --https --url hello-node" : exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (15.07s)
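
The three ServiceCmd failures share one shape: the service subcommand exits 1 after roughly 15 seconds and prints no URL. A reproduction sketch of how a harness can drive the CLI with a deadline; the binary path matches the logs, the 30-second timeout is an assumption:

    package main

    import (
        "context"
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        // A hard deadline keeps a hung service tunnel from stalling the whole suite.
        ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
        defer cancel()

        cmd := exec.CommandContext(ctx, "out/minikube-windows-amd64.exe",
            "-p", "functional-970000", "service",
            "--namespace=default", "--https", "--url", "hello-node")
        out, err := cmd.CombinedOutput()
        if err != nil {
            // Exit status 1 with empty output is exactly what this run produced.
            fmt.Printf("service --url failed: %v\n%s", err, out)
            return
        }
        fmt.Printf("service URL: %s", out)
    }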

TestFunctional/parallel/ServiceCmd/Format (15.07s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1557: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-970000 service hello-node --url --format={{.IP}}
functional_test.go:1557: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-970000 service hello-node --url --format={{.IP}}: exit status 1 (15.0652283s)
functional_test.go:1559: failed to get service url with custom format. args "out/minikube-windows-amd64.exe -p functional-970000 service hello-node --url --format={{.IP}}": exit status 1
functional_test.go:1565: "" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (15.07s)
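
The Format variant feeds the output of --format={{.IP}} into an IP check, and the empty string it got back fails that check. A minimal sketch of such a validation, assuming net.ParseIP as the acceptance criterion:

    package main

    import (
        "fmt"
        "net"
        "strings"
    )

    // validIP reports whether the CLI output parses as an IP address.
    // The empty string from this failure parses to nil and is rejected.
    func validIP(out string) bool {
        return net.ParseIP(strings.TrimSpace(out)) != nil
    }

    func main() {
        fmt.Println(validIP("172.29.136.99")) // true
        fmt.Println(validIP(""))              // false: the failing case above
    }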

TestFunctional/parallel/ServiceCmd/URL (15.01s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1576: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-970000 service hello-node --url
functional_test.go:1576: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-970000 service hello-node --url: exit status 1 (15.011296s)
functional_test.go:1578: failed to get service url. args: "out/minikube-windows-amd64.exe -p functional-970000 service hello-node --url": exit status 1
functional_test.go:1582: found endpoint for hello-node: 
functional_test.go:1590: expected scheme to be -"http"- got scheme: *""*
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (15.01s)
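
Same root symptom: with no endpoint returned, the parsed URL carries an empty scheme instead of "http". A sketch of the scheme assertion using net/url; both endpoint values are illustrative:

    package main

    import (
        "fmt"
        "net/url"
    )

    func main() {
        // An empty endpoint parses without error but has no scheme,
        // which is the *""* the assertion reports.
        for _, endpoint := range []string{"http://172.29.136.99:30080", ""} {
            u, err := url.Parse(endpoint)
            if err != nil {
                fmt.Println("parse error:", err)
                continue
            }
            fmt.Printf("endpoint %q -> scheme %q\n", endpoint, u.Scheme)
        }
    }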

TestMultiControlPlane/serial/PingHostFromPods (65.54s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-335100 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-335100 -- exec busybox-58667487b6-5px7z -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-335100 -- exec busybox-58667487b6-5px7z -- sh -c "ping -c 1 172.29.128.1"
ha_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-335100 -- exec busybox-58667487b6-5px7z -- sh -c "ping -c 1 172.29.128.1": exit status 1 (10.4510925s)

-- stdout --
	PING 172.29.128.1 (172.29.128.1): 56 data bytes
	
	--- 172.29.128.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
ha_test.go:219: Failed to ping host (172.29.128.1) from pod (busybox-58667487b6-5px7z): exit status 1
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-335100 -- exec busybox-58667487b6-r8blr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-335100 -- exec busybox-58667487b6-r8blr -- sh -c "ping -c 1 172.29.128.1"
ha_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-335100 -- exec busybox-58667487b6-r8blr -- sh -c "ping -c 1 172.29.128.1": exit status 1 (10.4592706s)

-- stdout --
	PING 172.29.128.1 (172.29.128.1): 56 data bytes
	
	--- 172.29.128.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
ha_test.go:219: Failed to ping host (172.29.128.1) from pod (busybox-58667487b6-r8blr): exit status 1
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-335100 -- exec busybox-58667487b6-vq9s4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-335100 -- exec busybox-58667487b6-vq9s4 -- sh -c "ping -c 1 172.29.128.1"
ha_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-335100 -- exec busybox-58667487b6-vq9s4 -- sh -c "ping -c 1 172.29.128.1": exit status 1 (10.4613095s)

-- stdout --
	PING 172.29.128.1 (172.29.128.1): 56 data bytes
	
	--- 172.29.128.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
ha_test.go:219: Failed to ping host (172.29.128.1) from pod (busybox-58667487b6-vq9s4): exit status 1
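
All three busybox pods resolved host.minikube.internal to 172.29.128.1, yet every one-packet ping to that address was lost, so the break is pod-to-host ICMP rather than DNS. A sketch of the probe the test loops over, mirroring the kubectl exec commands in the log:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // pingHostFromPod mirrors the probe in ha_test.go: run a one-packet ping
    // inside the pod and report the combined output.
    func pingHostFromPod(pod, hostIP string) error {
        cmd := exec.Command("out/minikube-windows-amd64.exe",
            "kubectl", "-p", "ha-335100", "--",
            "exec", pod, "--", "sh", "-c", "ping -c 1 "+hostIP)
        out, err := cmd.CombinedOutput()
        fmt.Printf("%s", out)
        return err // exit status 1 here means total packet loss
    }

    func main() {
        for _, pod := range []string{
            "busybox-58667487b6-5px7z",
            "busybox-58667487b6-r8blr",
            "busybox-58667487b6-vq9s4",
        } {
            if err := pingHostFromPod(pod, "172.29.128.1"); err != nil {
                fmt.Printf("ping from %s failed: %v\n", pod, err)
            }
        }
    }
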
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-335100 -n ha-335100
E0210 11:08:55.637754   11764 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-550800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-335100 -n ha-335100: (11.4361111s)
helpers_test.go:244: <<< TestMultiControlPlane/serial/PingHostFromPods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/PingHostFromPods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-335100 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p ha-335100 logs -n 25: (8.2376999s)
helpers_test.go:252: TestMultiControlPlane/serial/PingHostFromPods logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                 Args                 |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| image   | functional-970000                    | functional-970000 | minikube5\jenkins | v1.35.0 | 10 Feb 25 10:53 UTC | 10 Feb 25 10:53 UTC |
	|         | image ls --format table              |                   |                   |         |                     |                     |
	|         | --alsologtostderr                    |                   |                   |         |                     |                     |
	| image   | functional-970000 image build -t     | functional-970000 | minikube5\jenkins | v1.35.0 | 10 Feb 25 10:53 UTC | 10 Feb 25 10:53 UTC |
	|         | localhost/my-image:functional-970000 |                   |                   |         |                     |                     |
	|         | testdata\build --alsologtostderr     |                   |                   |         |                     |                     |
	| image   | functional-970000 image ls           | functional-970000 | minikube5\jenkins | v1.35.0 | 10 Feb 25 10:53 UTC | 10 Feb 25 10:54 UTC |
	| delete  | -p functional-970000                 | functional-970000 | minikube5\jenkins | v1.35.0 | 10 Feb 25 10:55 UTC | 10 Feb 25 10:56 UTC |
	| start   | -p ha-335100 --wait=true             | ha-335100         | minikube5\jenkins | v1.35.0 | 10 Feb 25 10:56 UTC | 10 Feb 25 11:07 UTC |
	|         | --memory=2200 --ha                   |                   |                   |         |                     |                     |
	|         | -v=7 --alsologtostderr               |                   |                   |         |                     |                     |
	|         | --driver=hyperv                      |                   |                   |         |                     |                     |
	| kubectl | -p ha-335100 -- apply -f             | ha-335100         | minikube5\jenkins | v1.35.0 | 10 Feb 25 11:08 UTC | 10 Feb 25 11:08 UTC |
	|         | ./testdata/ha/ha-pod-dns-test.yaml   |                   |                   |         |                     |                     |
	| kubectl | -p ha-335100 -- rollout status       | ha-335100         | minikube5\jenkins | v1.35.0 | 10 Feb 25 11:08 UTC | 10 Feb 25 11:08 UTC |
	|         | deployment/busybox                   |                   |                   |         |                     |                     |
	| kubectl | -p ha-335100 -- get pods -o          | ha-335100         | minikube5\jenkins | v1.35.0 | 10 Feb 25 11:08 UTC | 10 Feb 25 11:08 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |                   |         |                     |                     |
	| kubectl | -p ha-335100 -- get pods -o          | ha-335100         | minikube5\jenkins | v1.35.0 | 10 Feb 25 11:08 UTC | 10 Feb 25 11:08 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |                   |                   |         |                     |                     |
	| kubectl | -p ha-335100 -- exec                 | ha-335100         | minikube5\jenkins | v1.35.0 | 10 Feb 25 11:08 UTC | 10 Feb 25 11:08 UTC |
	|         | busybox-58667487b6-5px7z --          |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |                   |         |                     |                     |
	| kubectl | -p ha-335100 -- exec                 | ha-335100         | minikube5\jenkins | v1.35.0 | 10 Feb 25 11:08 UTC | 10 Feb 25 11:08 UTC |
	|         | busybox-58667487b6-r8blr --          |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |                   |         |                     |                     |
	| kubectl | -p ha-335100 -- exec                 | ha-335100         | minikube5\jenkins | v1.35.0 | 10 Feb 25 11:08 UTC | 10 Feb 25 11:08 UTC |
	|         | busybox-58667487b6-vq9s4 --          |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |                   |         |                     |                     |
	| kubectl | -p ha-335100 -- exec                 | ha-335100         | minikube5\jenkins | v1.35.0 | 10 Feb 25 11:08 UTC | 10 Feb 25 11:08 UTC |
	|         | busybox-58667487b6-5px7z --          |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |                   |         |                     |                     |
	| kubectl | -p ha-335100 -- exec                 | ha-335100         | minikube5\jenkins | v1.35.0 | 10 Feb 25 11:08 UTC | 10 Feb 25 11:08 UTC |
	|         | busybox-58667487b6-r8blr --          |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |                   |         |                     |                     |
	| kubectl | -p ha-335100 -- exec                 | ha-335100         | minikube5\jenkins | v1.35.0 | 10 Feb 25 11:08 UTC | 10 Feb 25 11:08 UTC |
	|         | busybox-58667487b6-vq9s4 --          |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |                   |         |                     |                     |
	| kubectl | -p ha-335100 -- exec                 | ha-335100         | minikube5\jenkins | v1.35.0 | 10 Feb 25 11:08 UTC | 10 Feb 25 11:08 UTC |
	|         | busybox-58667487b6-5px7z -- nslookup |                   |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |                   |         |                     |                     |
	| kubectl | -p ha-335100 -- exec                 | ha-335100         | minikube5\jenkins | v1.35.0 | 10 Feb 25 11:08 UTC | 10 Feb 25 11:08 UTC |
	|         | busybox-58667487b6-r8blr -- nslookup |                   |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |                   |         |                     |                     |
	| kubectl | -p ha-335100 -- exec                 | ha-335100         | minikube5\jenkins | v1.35.0 | 10 Feb 25 11:08 UTC | 10 Feb 25 11:08 UTC |
	|         | busybox-58667487b6-vq9s4 -- nslookup |                   |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |                   |         |                     |                     |
	| kubectl | -p ha-335100 -- get pods -o          | ha-335100         | minikube5\jenkins | v1.35.0 | 10 Feb 25 11:08 UTC | 10 Feb 25 11:08 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |                   |                   |         |                     |                     |
	| kubectl | -p ha-335100 -- exec                 | ha-335100         | minikube5\jenkins | v1.35.0 | 10 Feb 25 11:08 UTC | 10 Feb 25 11:08 UTC |
	|         | busybox-58667487b6-5px7z             |                   |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |                   |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |                   |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                   |                   |         |                     |                     |
	| kubectl | -p ha-335100 -- exec                 | ha-335100         | minikube5\jenkins | v1.35.0 | 10 Feb 25 11:08 UTC |                     |
	|         | busybox-58667487b6-5px7z -- sh       |                   |                   |         |                     |                     |
	|         | -c ping -c 1 172.29.128.1            |                   |                   |         |                     |                     |
	| kubectl | -p ha-335100 -- exec                 | ha-335100         | minikube5\jenkins | v1.35.0 | 10 Feb 25 11:08 UTC | 10 Feb 25 11:08 UTC |
	|         | busybox-58667487b6-r8blr             |                   |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |                   |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |                   |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                   |                   |         |                     |                     |
	| kubectl | -p ha-335100 -- exec                 | ha-335100         | minikube5\jenkins | v1.35.0 | 10 Feb 25 11:08 UTC |                     |
	|         | busybox-58667487b6-r8blr -- sh       |                   |                   |         |                     |                     |
	|         | -c ping -c 1 172.29.128.1            |                   |                   |         |                     |                     |
	| kubectl | -p ha-335100 -- exec                 | ha-335100         | minikube5\jenkins | v1.35.0 | 10 Feb 25 11:08 UTC | 10 Feb 25 11:08 UTC |
	|         | busybox-58667487b6-vq9s4             |                   |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |                   |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |                   |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                   |                   |         |                     |                     |
	| kubectl | -p ha-335100 -- exec                 | ha-335100         | minikube5\jenkins | v1.35.0 | 10 Feb 25 11:08 UTC |                     |
	|         | busybox-58667487b6-vq9s4 -- sh       |                   |                   |         |                     |                     |
	|         | -c ping -c 1 172.29.128.1            |                   |                   |         |                     |                     |
	|---------|--------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/10 10:56:50
	Running on machine: minikube5
	Binary: Built with gc go1.23.4 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0210 10:56:49.955540    8716 out.go:345] Setting OutFile to fd 1996 ...
	I0210 10:56:50.006508    8716 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 10:56:50.006508    8716 out.go:358] Setting ErrFile to fd 1984...
	I0210 10:56:50.006508    8716 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 10:56:50.024054    8716 out.go:352] Setting JSON to false
	I0210 10:56:50.027021    8716 start.go:129] hostinfo: {"hostname":"minikube5","uptime":186349,"bootTime":1738998660,"procs":187,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.5440 Build 19045.5440","kernelVersion":"10.0.19045.5440 Build 19045.5440","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0210 10:56:50.027478    8716 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0210 10:56:50.035049    8716 out.go:177] * [ha-335100] minikube v1.35.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5440 Build 19045.5440
	I0210 10:56:50.039906    8716 notify.go:220] Checking for updates...
	I0210 10:56:50.039906    8716 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0210 10:56:50.041984    8716 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0210 10:56:50.044508    8716 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0210 10:56:50.046572    8716 out.go:177]   - MINIKUBE_LOCATION=20385
	I0210 10:56:50.047895    8716 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0210 10:56:50.050873    8716 driver.go:394] Setting default libvirt URI to qemu:///system
	I0210 10:56:55.110640    8716 out.go:177] * Using the hyperv driver based on user configuration
	I0210 10:56:55.115381    8716 start.go:297] selected driver: hyperv
	I0210 10:56:55.115381    8716 start.go:901] validating driver "hyperv" against <nil>
	I0210 10:56:55.115381    8716 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0210 10:56:55.158791    8716 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0210 10:56:55.160028    8716 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0210 10:56:55.160028    8716 cni.go:84] Creating CNI manager for ""
	I0210 10:56:55.160028    8716 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0210 10:56:55.160028    8716 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0210 10:56:55.160674    8716 start.go:340] cluster config:
	{Name:ha-335100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:ha-335100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 10:56:55.160674    8716 iso.go:125] acquiring lock: {Name:mkae681ee414e9275e9685c6bbf5080b17ead976 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0210 10:56:55.167125    8716 out.go:177] * Starting "ha-335100" primary control-plane node in "ha-335100" cluster
	I0210 10:56:55.169736    8716 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime docker
	I0210 10:56:55.169736    8716 preload.go:146] Found local preload: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4
	I0210 10:56:55.169736    8716 cache.go:56] Caching tarball of preloaded images
	I0210 10:56:55.169736    8716 preload.go:172] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0210 10:56:55.170719    8716 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on docker
	I0210 10:56:55.170888    8716 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\config.json ...
	I0210 10:56:55.171297    8716 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\config.json: {Name:mk7fd8b1cba562e1df25fb8b2e8a3cb78306b0bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 10:56:55.171861    8716 start.go:360] acquireMachinesLock for ha-335100: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0210 10:56:55.171861    8716 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-335100"
	I0210 10:56:55.172521    8716 start.go:93] Provisioning new machine with config: &{Name:ha-335100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:ha-335100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0210 10:56:55.172521    8716 start.go:125] createHost starting for "" (driver="hyperv")
	I0210 10:56:55.175257    8716 out.go:235] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0210 10:56:55.175590    8716 start.go:159] libmachine.API.Create for "ha-335100" (driver="hyperv")
	I0210 10:56:55.175662    8716 client.go:168] LocalClient.Create starting
	I0210 10:56:55.176122    8716 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem
	I0210 10:56:55.176349    8716 main.go:141] libmachine: Decoding PEM data...
	I0210 10:56:55.176373    8716 main.go:141] libmachine: Parsing certificate...
	I0210 10:56:55.176373    8716 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem
	I0210 10:56:55.176373    8716 main.go:141] libmachine: Decoding PEM data...
	I0210 10:56:55.176373    8716 main.go:141] libmachine: Parsing certificate...
	I0210 10:56:55.176373    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0210 10:56:57.116565    8716 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0210 10:56:57.117582    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:56:57.117942    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0210 10:56:58.697687    8716 main.go:141] libmachine: [stdout =====>] : False
	
	I0210 10:56:58.697687    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:56:58.698651    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0210 10:57:00.114245    8716 main.go:141] libmachine: [stdout =====>] : True
	
	I0210 10:57:00.114854    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:57:00.114854    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0210 10:57:03.469545    8716 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0210 10:57:03.469545    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:57:03.471404    8716 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube5/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0210 10:57:03.881780    8716 main.go:141] libmachine: Creating SSH key...
	I0210 10:57:04.097693    8716 main.go:141] libmachine: Creating VM...
	I0210 10:57:04.097693    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0210 10:57:06.679314    8716 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0210 10:57:06.679314    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:57:06.679892    8716 main.go:141] libmachine: Using switch "Default Switch"
	I0210 10:57:06.680149    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0210 10:57:08.327377    8716 main.go:141] libmachine: [stdout =====>] : True
	
	I0210 10:57:08.327377    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:57:08.328509    8716 main.go:141] libmachine: Creating VHD
	I0210 10:57:08.328606    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-335100\fixed.vhd' -SizeBytes 10MB -Fixed
	I0210 10:57:11.873284    8716 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube5
	Path                    : C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-335100\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : A514F38A-CFB8-4A84-B862-9E0C60ED9E44
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0210 10:57:11.873521    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:57:11.873521    8716 main.go:141] libmachine: Writing magic tar header
	I0210 10:57:11.873521    8716 main.go:141] libmachine: Writing SSH key tar header
	I0210 10:57:11.886984    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-335100\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-335100\disk.vhd' -VHDType Dynamic -DeleteSource
	I0210 10:57:14.914284    8716 main.go:141] libmachine: [stdout =====>] : 
	I0210 10:57:14.914284    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:57:14.914689    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-335100\disk.vhd' -SizeBytes 20000MB
	I0210 10:57:17.312226    8716 main.go:141] libmachine: [stdout =====>] : 
	I0210 10:57:17.312226    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:57:17.312918    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-335100 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-335100' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0210 10:57:20.689908    8716 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-335100 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0210 10:57:20.689908    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:57:20.690640    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-335100 -DynamicMemoryEnabled $false
	I0210 10:57:22.733029    8716 main.go:141] libmachine: [stdout =====>] : 
	I0210 10:57:22.733346    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:57:22.733462    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-335100 -Count 2
	I0210 10:57:24.715450    8716 main.go:141] libmachine: [stdout =====>] : 
	I0210 10:57:24.715450    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:57:24.716089    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-335100 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-335100\boot2docker.iso'
	I0210 10:57:26.981675    8716 main.go:141] libmachine: [stdout =====>] : 
	I0210 10:57:26.981747    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:57:26.981747    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-335100 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-335100\disk.vhd'
	I0210 10:57:29.347156    8716 main.go:141] libmachine: [stdout =====>] : 
	I0210 10:57:29.347389    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:57:29.347389    8716 main.go:141] libmachine: Starting VM...
	I0210 10:57:29.347517    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-335100
	I0210 10:57:32.230804    8716 main.go:141] libmachine: [stdout =====>] : 
	I0210 10:57:32.231308    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:57:32.231361    8716 main.go:141] libmachine: Waiting for host to start...
	I0210 10:57:32.231361    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100 ).state
	I0210 10:57:34.318051    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 10:57:34.318096    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:57:34.318149    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100 ).networkadapters[0]).ipaddresses[0]
	I0210 10:57:36.595079    8716 main.go:141] libmachine: [stdout =====>] : 
	I0210 10:57:36.595079    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:57:37.595923    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100 ).state
	I0210 10:57:39.570719    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 10:57:39.570803    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:57:39.570803    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100 ).networkadapters[0]).ipaddresses[0]
	I0210 10:57:41.849171    8716 main.go:141] libmachine: [stdout =====>] : 
	I0210 10:57:41.849388    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:57:42.849933    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100 ).state
	I0210 10:57:44.889336    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 10:57:44.889336    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:57:44.889336    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100 ).networkadapters[0]).ipaddresses[0]
	I0210 10:57:47.116622    8716 main.go:141] libmachine: [stdout =====>] : 
	I0210 10:57:47.116622    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:57:48.117607    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100 ).state
	I0210 10:57:50.090626    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 10:57:50.090626    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:57:50.091211    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100 ).networkadapters[0]).ipaddresses[0]
	I0210 10:57:52.361772    8716 main.go:141] libmachine: [stdout =====>] : 
	I0210 10:57:52.361772    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:57:53.362207    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100 ).state
	I0210 10:57:55.360465    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 10:57:55.360465    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:57:55.360465    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100 ).networkadapters[0]).ipaddresses[0]
	I0210 10:57:57.763439    8716 main.go:141] libmachine: [stdout =====>] : 172.29.136.99
	
	I0210 10:57:57.763439    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:57:57.763439    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100 ).state
	I0210 10:57:59.763320    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 10:57:59.763320    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:57:59.763593    8716 machine.go:93] provisionDockerMachine start ...
	I0210 10:57:59.763696    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100 ).state
	I0210 10:58:01.754336    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 10:58:01.755200    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:58:01.755282    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100 ).networkadapters[0]).ipaddresses[0]
	I0210 10:58:04.135002    8716 main.go:141] libmachine: [stdout =====>] : 172.29.136.99
	
	I0210 10:58:04.135987    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:58:04.142090    8716 main.go:141] libmachine: Using SSH client type: native
	I0210 10:58:04.159249    8716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xde6ba0] 0xde96e0 <nil>  [] 0s} 172.29.136.99 22 <nil> <nil>}
	I0210 10:58:04.159249    8716 main.go:141] libmachine: About to run SSH command:
	hostname
	I0210 10:58:04.305556    8716 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0210 10:58:04.305556    8716 buildroot.go:166] provisioning hostname "ha-335100"
	I0210 10:58:04.305556    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100 ).state
	I0210 10:58:06.268174    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 10:58:06.268174    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:58:06.268921    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100 ).networkadapters[0]).ipaddresses[0]
	I0210 10:58:08.612126    8716 main.go:141] libmachine: [stdout =====>] : 172.29.136.99
	
	I0210 10:58:08.612370    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:58:08.619254    8716 main.go:141] libmachine: Using SSH client type: native
	I0210 10:58:08.619980    8716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xde6ba0] 0xde96e0 <nil>  [] 0s} 172.29.136.99 22 <nil> <nil>}
	I0210 10:58:08.619980    8716 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-335100 && echo "ha-335100" | sudo tee /etc/hostname
	I0210 10:58:08.782655    8716 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-335100
	
	I0210 10:58:08.782750    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100 ).state
	I0210 10:58:10.722065    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 10:58:10.722065    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:58:10.722065    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100 ).networkadapters[0]).ipaddresses[0]
	I0210 10:58:13.065985    8716 main.go:141] libmachine: [stdout =====>] : 172.29.136.99
	
	I0210 10:58:13.066493    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:58:13.069979    8716 main.go:141] libmachine: Using SSH client type: native
	I0210 10:58:13.069979    8716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xde6ba0] 0xde96e0 <nil>  [] 0s} 172.29.136.99 22 <nil> <nil>}
	I0210 10:58:13.069979    8716 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-335100' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-335100/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-335100' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0210 10:58:13.229041    8716 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0210 10:58:13.229103    8716 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0210 10:58:13.229143    8716 buildroot.go:174] setting up certificates
	I0210 10:58:13.229143    8716 provision.go:84] configureAuth start
	I0210 10:58:13.229219    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100 ).state
	I0210 10:58:15.201499    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 10:58:15.201499    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:58:15.202522    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100 ).networkadapters[0]).ipaddresses[0]
	I0210 10:58:17.525224    8716 main.go:141] libmachine: [stdout =====>] : 172.29.136.99
	
	I0210 10:58:17.525224    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:58:17.525224    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100 ).state
	I0210 10:58:19.535277    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 10:58:19.535677    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:58:19.535731    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100 ).networkadapters[0]).ipaddresses[0]
	I0210 10:58:21.955772    8716 main.go:141] libmachine: [stdout =====>] : 172.29.136.99
	
	I0210 10:58:21.956091    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:58:21.956091    8716 provision.go:143] copyHostCerts
	I0210 10:58:21.956091    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0210 10:58:21.956091    8716 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0210 10:58:21.956091    8716 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0210 10:58:21.956744    8716 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0210 10:58:21.957356    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0210 10:58:21.957912    8716 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0210 10:58:21.957912    8716 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0210 10:58:21.957912    8716 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0210 10:58:21.959192    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0210 10:58:21.959192    8716 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0210 10:58:21.959192    8716 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0210 10:58:21.959192    8716 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1675 bytes)
	I0210 10:58:21.960468    8716 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-335100 san=[127.0.0.1 172.29.136.99 ha-335100 localhost minikube]
	I0210 10:58:22.168319    8716 provision.go:177] copyRemoteCerts
	I0210 10:58:22.176962    8716 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0210 10:58:22.177039    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100 ).state
	I0210 10:58:24.217624    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 10:58:24.218257    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:58:24.218291    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100 ).networkadapters[0]).ipaddresses[0]
	I0210 10:58:26.545431    8716 main.go:141] libmachine: [stdout =====>] : 172.29.136.99
	
	I0210 10:58:26.545431    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:58:26.546605    8716 sshutil.go:53] new ssh client: &{IP:172.29.136.99 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-335100\id_rsa Username:docker}
	I0210 10:58:26.655565    8716 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.4784748s)
	I0210 10:58:26.655565    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0210 10:58:26.655565    8716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0210 10:58:26.708753    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0210 10:58:26.709156    8716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1200 bytes)
	I0210 10:58:26.762611    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0210 10:58:26.762783    8716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0210 10:58:26.806257    8716 provision.go:87] duration metric: took 13.576959s to configureAuth
	I0210 10:58:26.806257    8716 buildroot.go:189] setting minikube options for container-runtime
	I0210 10:58:26.806257    8716 config.go:182] Loaded profile config "ha-335100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0210 10:58:26.806257    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100 ).state
	I0210 10:58:28.790480    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 10:58:28.791202    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:58:28.791202    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100 ).networkadapters[0]).ipaddresses[0]
	I0210 10:58:31.141626    8716 main.go:141] libmachine: [stdout =====>] : 172.29.136.99
	
	I0210 10:58:31.141626    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:58:31.146135    8716 main.go:141] libmachine: Using SSH client type: native
	I0210 10:58:31.146593    8716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xde6ba0] 0xde96e0 <nil>  [] 0s} 172.29.136.99 22 <nil> <nil>}
	I0210 10:58:31.146593    8716 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0210 10:58:31.289137    8716 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0210 10:58:31.289137    8716 buildroot.go:70] root file system type: tmpfs
	I0210 10:58:31.289675    8716 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0210 10:58:31.289765    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100 ).state
	I0210 10:58:33.272853    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 10:58:33.272853    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:58:33.273190    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100 ).networkadapters[0]).ipaddresses[0]
	I0210 10:58:35.641959    8716 main.go:141] libmachine: [stdout =====>] : 172.29.136.99
	
	I0210 10:58:35.641959    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:58:35.646663    8716 main.go:141] libmachine: Using SSH client type: native
	I0210 10:58:35.647281    8716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xde6ba0] 0xde96e0 <nil>  [] 0s} 172.29.136.99 22 <nil> <nil>}
	I0210 10:58:35.647281    8716 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0210 10:58:35.808375    8716 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
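The comments echoed above describe systemd's ExecStart-reset rule: for a non-oneshot unit, a second ExecStart= is only legal if a bare ExecStart= first clears the inherited command. A minimal standalone illustration of the same pattern, using a hypothetical drop-in path rather than minikube's full unit:

  sudo mkdir -p /etc/systemd/system/docker.service.d
  printf '%s\n' '[Service]' 'ExecStart=' \
    'ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock' |
    sudo tee /etc/systemd/system/docker.service.d/override.conf
  # without the bare ExecStart= line, restart fails with the
  # "more than one ExecStart= setting" error quoted above
  sudo systemctl daemon-reload && sudo systemctl restart docker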
	I0210 10:58:35.808375    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100 ).state
	I0210 10:58:37.757178    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 10:58:37.757178    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:58:37.757178    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100 ).networkadapters[0]).ipaddresses[0]
	I0210 10:58:40.099084    8716 main.go:141] libmachine: [stdout =====>] : 172.29.136.99
	
	I0210 10:58:40.099084    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:58:40.105034    8716 main.go:141] libmachine: Using SSH client type: native
	I0210 10:58:40.105645    8716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xde6ba0] 0xde96e0 <nil>  [] 0s} 172.29.136.99 22 <nil> <nil>}
	I0210 10:58:40.105645    8716 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0210 10:58:42.309359    8716 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0210 10:58:42.309359    8716 machine.go:96] duration metric: took 42.5452806s to provisionDockerMachine
	I0210 10:58:42.309359    8716 client.go:171] duration metric: took 1m47.1324811s to LocalClient.Create
	I0210 10:58:42.309359    8716 start.go:167] duration metric: took 1m47.1325532s to libmachine.API.Create "ha-335100"
	I0210 10:58:42.309879    8716 start.go:293] postStartSetup for "ha-335100" (driver="hyperv")
	I0210 10:58:42.309879    8716 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0210 10:58:42.318476    8716 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0210 10:58:42.318476    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100 ).state
	I0210 10:58:44.285506    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 10:58:44.285506    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:58:44.286286    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100 ).networkadapters[0]).ipaddresses[0]
	I0210 10:58:46.621176    8716 main.go:141] libmachine: [stdout =====>] : 172.29.136.99
	
	I0210 10:58:46.621176    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:58:46.622299    8716 sshutil.go:53] new ssh client: &{IP:172.29.136.99 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-335100\id_rsa Username:docker}
	I0210 10:58:46.736316    8716 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.4177896s)
	I0210 10:58:46.745414    8716 ssh_runner.go:195] Run: cat /etc/os-release
	I0210 10:58:46.752589    8716 info.go:137] Remote host: Buildroot 2023.02.9
	I0210 10:58:46.752589    8716 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0210 10:58:46.753118    8716 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0210 10:58:46.753271    8716 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\117642.pem -> 117642.pem in /etc/ssl/certs
	I0210 10:58:46.753271    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\117642.pem -> /etc/ssl/certs/117642.pem
	I0210 10:58:46.761722    8716 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0210 10:58:46.779300    8716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\117642.pem --> /etc/ssl/certs/117642.pem (1708 bytes)
	I0210 10:58:46.823526    8716 start.go:296] duration metric: took 4.5135954s for postStartSetup
	I0210 10:58:46.825929    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100 ).state
	I0210 10:58:48.795567    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 10:58:48.795567    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:58:48.796625    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100 ).networkadapters[0]).ipaddresses[0]
	I0210 10:58:51.153437    8716 main.go:141] libmachine: [stdout =====>] : 172.29.136.99
	
	I0210 10:58:51.153437    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:58:51.153437    8716 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\config.json ...
	I0210 10:58:51.155836    8716 start.go:128] duration metric: took 1m55.9819985s to createHost
	I0210 10:58:51.156415    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100 ).state
	I0210 10:58:53.144379    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 10:58:53.144379    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:58:53.144461    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100 ).networkadapters[0]).ipaddresses[0]
	I0210 10:58:55.503343    8716 main.go:141] libmachine: [stdout =====>] : 172.29.136.99
	
	I0210 10:58:55.504211    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:58:55.509406    8716 main.go:141] libmachine: Using SSH client type: native
	I0210 10:58:55.510018    8716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xde6ba0] 0xde96e0 <nil>  [] 0s} 172.29.136.99 22 <nil> <nil>}
	I0210 10:58:55.510018    8716 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0210 10:58:55.644975    8716 main.go:141] libmachine: SSH cmd err, output: <nil>: 1739185135.659116284
	
	I0210 10:58:55.645087    8716 fix.go:216] guest clock: 1739185135.659116284
	I0210 10:58:55.645087    8716 fix.go:229] Guest: 2025-02-10 10:58:55.659116284 +0000 UTC Remote: 2025-02-10 10:58:51.1563566 +0000 UTC m=+121.275254101 (delta=4.502759684s)
	I0210 10:58:55.645195    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100 ).state
	I0210 10:58:57.621790    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 10:58:57.621790    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:58:57.621790    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100 ).networkadapters[0]).ipaddresses[0]
	I0210 10:59:00.038209    8716 main.go:141] libmachine: [stdout =====>] : 172.29.136.99
	
	I0210 10:59:00.038209    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:59:00.043112    8716 main.go:141] libmachine: Using SSH client type: native
	I0210 10:59:00.043526    8716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xde6ba0] 0xde96e0 <nil>  [] 0s} 172.29.136.99 22 <nil> <nil>}
	I0210 10:59:00.043526    8716 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1739185135
	I0210 10:59:00.194828    8716 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Feb 10 10:58:55 UTC 2025
	
	I0210 10:59:00.194943    8716 fix.go:236] clock set: Mon Feb 10 10:58:55 UTC 2025
	 (err=<nil>)
	I0210 10:59:00.194943    8716 start.go:83] releasing machines lock for "ha-335100", held for 2m5.0211267s
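The fix.go lines above compare the guest clock (date +%s.%N over SSH) against the host clock and, on a 4.5s drift, write an epoch value back with date -s @<seconds>. A hedged manual equivalent from any POSIX shell with SSH access (key path as in the log; the delta arithmetic is an illustration, not minikube's exact code):

  guest=$(ssh -i <id_rsa-path> docker@172.29.136.99 'date +%s')
  host=$(date +%s)
  echo "guest-host drift: $((guest - host))s"
  # push an epoch value into the guest, as the log's 'sudo date -s @...' does
  ssh -i <id_rsa-path> docker@172.29.136.99 "sudo date -s @${host}"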
	I0210 10:59:00.195132    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100 ).state
	I0210 10:59:02.189954    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 10:59:02.190221    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:59:02.190292    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100 ).networkadapters[0]).ipaddresses[0]
	I0210 10:59:04.625803    8716 main.go:141] libmachine: [stdout =====>] : 172.29.136.99
	
	I0210 10:59:04.626817    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:59:04.630378    8716 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0210 10:59:04.630570    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100 ).state
	I0210 10:59:04.641782    8716 ssh_runner.go:195] Run: cat /version.json
	I0210 10:59:04.641842    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100 ).state
	I0210 10:59:06.689031    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 10:59:06.689860    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:59:06.689922    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100 ).networkadapters[0]).ipaddresses[0]
	I0210 10:59:06.690696    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 10:59:06.690696    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:59:06.690696    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100 ).networkadapters[0]).ipaddresses[0]
	I0210 10:59:09.162719    8716 main.go:141] libmachine: [stdout =====>] : 172.29.136.99
	
	I0210 10:59:09.162719    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:59:09.163624    8716 sshutil.go:53] new ssh client: &{IP:172.29.136.99 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-335100\id_rsa Username:docker}
	I0210 10:59:09.182845    8716 main.go:141] libmachine: [stdout =====>] : 172.29.136.99
	
	I0210 10:59:09.182845    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:59:09.183060    8716 sshutil.go:53] new ssh client: &{IP:172.29.136.99 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-335100\id_rsa Username:docker}
	I0210 10:59:09.256683    8716 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.6261945s)
	W0210 10:59:09.256683    8716 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0210 10:59:09.289896    8716 ssh_runner.go:235] Completed: cat /version.json: (4.6479418s)
	I0210 10:59:09.298633    8716 ssh_runner.go:195] Run: systemctl --version
	I0210 10:59:09.316283    8716 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0210 10:59:09.326007    8716 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0210 10:59:09.333609    8716 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0210 10:59:09.364481    8716 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0210 10:59:09.364481    8716 start.go:495] detecting cgroup driver to use...
	I0210 10:59:09.364481    8716 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W0210 10:59:09.379663    8716 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0210 10:59:09.379663    8716 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
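Exit status 127 above is the shell's "command not found": the probe runs the Windows binary name curl.exe inside the Linux guest, where no such file exists, so the registry check fails and the two proxy warnings are emitted regardless of actual connectivity. Assuming the Buildroot guest ships a plain curl binary, the same probe under the Linux name would be:

  ssh -i <id_rsa-path> docker@172.29.136.99 'curl -sS -m 2 https://registry.k8s.io/'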
	I0210 10:59:09.410962    8716 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0210 10:59:09.438456    8716 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0210 10:59:09.456894    8716 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0210 10:59:09.467067    8716 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0210 10:59:09.494400    8716 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0210 10:59:09.522660    8716 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0210 10:59:09.555431    8716 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0210 10:59:09.591737    8716 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0210 10:59:09.626140    8716 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0210 10:59:09.652135    8716 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0210 10:59:09.680515    8716 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0210 10:59:09.709577    8716 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0210 10:59:09.727534    8716 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0210 10:59:09.736588    8716 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0210 10:59:09.766002    8716 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0210 10:59:09.790776    8716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 10:59:09.990519    8716 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0210 10:59:10.023672    8716 start.go:495] detecting cgroup driver to use...
	I0210 10:59:10.033966    8716 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0210 10:59:10.067941    8716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0210 10:59:10.097652    8716 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0210 10:59:10.130951    8716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0210 10:59:10.163679    8716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0210 10:59:10.195416    8716 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0210 10:59:10.257619    8716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0210 10:59:10.281530    8716 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0210 10:59:10.325653    8716 ssh_runner.go:195] Run: which cri-dockerd
	I0210 10:59:10.340042    8716 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0210 10:59:10.357964    8716 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0210 10:59:10.397324    8716 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0210 10:59:10.581403    8716 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0210 10:59:10.766445    8716 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0210 10:59:10.766445    8716 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0210 10:59:10.808095    8716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 10:59:10.991998    8716 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0210 10:59:13.584046    8716 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5920187s)
	I0210 10:59:13.594010    8716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0210 10:59:13.628419    8716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0210 10:59:13.663030    8716 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0210 10:59:13.866975    8716 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0210 10:59:14.076058    8716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 10:59:14.274741    8716 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0210 10:59:14.312741    8716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0210 10:59:14.345477    8716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 10:59:14.533023    8716 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0210 10:59:14.639593    8716 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0210 10:59:14.652087    8716 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0210 10:59:14.660866    8716 start.go:563] Will wait 60s for crictl version
	I0210 10:59:14.669372    8716 ssh_runner.go:195] Run: which crictl
	I0210 10:59:14.683464    8716 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0210 10:59:14.733295    8716 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.4.0
	RuntimeApiVersion:  v1
	I0210 10:59:14.741350    8716 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0210 10:59:14.788195    8716 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0210 10:59:14.825672    8716 out.go:235] * Preparing Kubernetes v1.32.1 on Docker 27.4.0 ...
	I0210 10:59:14.825827    8716 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0210 10:59:14.830176    8716 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0210 10:59:14.830176    8716 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0210 10:59:14.830176    8716 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0210 10:59:14.830176    8716 ip.go:211] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:8b:81:3d Flags:up|broadcast|multicast|running}
	I0210 10:59:14.833774    8716 ip.go:214] interface addr: fe80::b840:883f:e0df:bdfd/64
	I0210 10:59:14.833774    8716 ip.go:214] interface addr: 172.29.128.1/20
	I0210 10:59:14.841599    8716 ssh_runner.go:195] Run: grep 172.29.128.1	host.minikube.internal$ /etc/hosts
	I0210 10:59:14.849025    8716 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.29.128.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0210 10:59:14.881251    8716 kubeadm.go:883] updating cluster {Name:ha-335100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:ha-335100 Namespace:default APIServerHAVIP:172.29.143.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.29.136.99 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0210 10:59:14.881251    8716 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime docker
	I0210 10:59:14.888252    8716 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0210 10:59:14.912763    8716 docker.go:689] Got preloaded images: 
	I0210 10:59:14.912848    8716 docker.go:695] registry.k8s.io/kube-apiserver:v1.32.1 wasn't preloaded
	I0210 10:59:14.922086    8716 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0210 10:59:14.948910    8716 ssh_runner.go:195] Run: which lz4
	I0210 10:59:14.954626    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0210 10:59:14.962808    8716 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0210 10:59:14.969016    8716 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0210 10:59:14.969016    8716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (349810983 bytes)
	I0210 10:59:16.584956    8716 docker.go:653] duration metric: took 1.630006s to copy over tarball
	I0210 10:59:16.594103    8716 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0210 10:59:24.865471    8716 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.2712735s)
	I0210 10:59:24.865471    8716 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0210 10:59:24.927579    8716 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0210 10:59:24.946616    8716 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2631 bytes)
	I0210 10:59:24.986964    8716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 10:59:25.198136    8716 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0210 10:59:28.508921    8716 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.3107475s)
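The sequence above replays the image preload: scp the lz4 tarball into the guest, unpack it over /var (repopulating /var/lib/docker), delete it, then restart docker so the daemon picks up the restored layer store. The --xattrs flags preserve file capabilities on the unpacked binaries. Condensed into one hedged guest-side snippet, with each command taken from the log:

  sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
  sudo rm /preloaded.tar.lz4
  sudo systemctl daemon-reload
  sudo systemctl restart docker
  # afterwards the preloaded registry.k8s.io images should be listed
  docker images --format '{{.Repository}}:{{.Tag}}'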
	I0210 10:59:28.517317    8716 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0210 10:59:28.543793    8716 docker.go:689] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.32.1
	registry.k8s.io/kube-controller-manager:v1.32.1
	registry.k8s.io/kube-scheduler:v1.32.1
	registry.k8s.io/kube-proxy:v1.32.1
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0210 10:59:28.543793    8716 cache_images.go:84] Images are preloaded, skipping loading
	I0210 10:59:28.543793    8716 kubeadm.go:934] updating node { 172.29.136.99 8443 v1.32.1 docker true true} ...
	I0210 10:59:28.543793    8716 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-335100 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.29.136.99
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:ha-335100 Namespace:default APIServerHAVIP:172.29.143.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
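The kubelet drop-in above uses the same empty-ExecStart reset as the docker unit, then pins node identity with --hostname-override=ha-335100 and --node-ip=172.29.136.99 and wires the bootstrap and final kubeconfigs. A quick way to confirm what systemd actually merged, assuming the unit files land where the scp lines below put them:

  systemctl cat kubelet | grep -E 'ExecStart|hostname-override|node-ip'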
	I0210 10:59:28.550794    8716 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0210 10:59:28.614933    8716 cni.go:84] Creating CNI manager for ""
	I0210 10:59:28.614933    8716 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0210 10:59:28.614933    8716 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0210 10:59:28.614933    8716 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.29.136.99 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-335100 NodeName:ha-335100 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.29.136.99"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.29.136.99 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0210 10:59:28.614933    8716 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.29.136.99
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-335100"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "172.29.136.99"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.29.136.99"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
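The rendered file above stacks four documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by ---; it is shipped below as /var/tmp/minikube/kubeadm.yaml.new and later copied into place. A hedged pre-flight check with the pinned binary (treat the exact 'config validate' subcommand as an assumption for this release line):

  sudo /var/lib/minikube/binaries/v1.32.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml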
	I0210 10:59:28.615466    8716 kube-vip.go:115] generating kube-vip config ...
	I0210 10:59:28.623495    8716 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0210 10:59:28.650192    8716 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0210 10:59:28.650192    8716 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.29.143.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.9
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
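The static pod above runs kube-vip with leader election (lease plndr-cp-lock) so exactly one control-plane node holds the VIP 172.29.143.254 on eth0, while lb_enable additionally load-balances apiserver traffic on port 8443. Once kubeadm brings the control plane up, the VIP should answer on the unauthenticated /version endpoint (a hedged probe; -k skips verification because the apiserver SAN set includes the VIP but the cluster CA is not in the caller's trust store):

  curl -k https://172.29.143.254:8443/version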
	I0210 10:59:28.658132    8716 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0210 10:59:28.681036    8716 binaries.go:44] Found k8s binaries, skipping transfer
	I0210 10:59:28.689002    8716 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0210 10:59:28.706362    8716 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0210 10:59:28.735662    8716 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0210 10:59:28.765214    8716 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2290 bytes)
	I0210 10:59:28.793232    8716 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0210 10:59:28.835901    8716 ssh_runner.go:195] Run: grep 172.29.143.254	control-plane.minikube.internal$ /etc/hosts
	I0210 10:59:28.842208    8716 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.29.143.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0210 10:59:28.870270    8716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 10:59:29.058434    8716 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0210 10:59:29.085471    8716 certs.go:68] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100 for IP: 172.29.136.99
	I0210 10:59:29.085471    8716 certs.go:194] generating shared ca certs ...
	I0210 10:59:29.085471    8716 certs.go:226] acquiring lock for ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 10:59:29.086955    8716 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0210 10:59:29.087327    8716 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0210 10:59:29.087492    8716 certs.go:256] generating profile certs ...
	I0210 10:59:29.088023    8716 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\client.key
	I0210 10:59:29.088099    8716 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\client.crt with IP's: []
	I0210 10:59:29.271791    8716 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\client.crt ...
	I0210 10:59:29.271791    8716 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\client.crt: {Name:mk5216f38f20912ed6052b5430faea59399f3f31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 10:59:29.272789    8716 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\client.key ...
	I0210 10:59:29.272789    8716 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\client.key: {Name:mkd7b13c25fea812fc08569e68f3133c2241e105 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 10:59:29.273735    8716 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\apiserver.key.5b72dc9d
	I0210 10:59:29.273735    8716 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\apiserver.crt.5b72dc9d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.29.136.99 172.29.143.254]
	I0210 10:59:29.583944    8716 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\apiserver.crt.5b72dc9d ...
	I0210 10:59:29.583944    8716 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\apiserver.crt.5b72dc9d: {Name:mk23d7e42777d012abc45260df0ae3e0638e6bb7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 10:59:29.585043    8716 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\apiserver.key.5b72dc9d ...
	I0210 10:59:29.585043    8716 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\apiserver.key.5b72dc9d: {Name:mk7a397d6294b60b358f9a417a41bf9963d738ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 10:59:29.587055    8716 certs.go:381] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\apiserver.crt.5b72dc9d -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\apiserver.crt
	I0210 10:59:29.606612    8716 certs.go:385] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\apiserver.key.5b72dc9d -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\apiserver.key
	I0210 10:59:29.607755    8716 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\proxy-client.key
	I0210 10:59:29.607861    8716 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\proxy-client.crt with IP's: []
	I0210 10:59:30.014813    8716 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\proxy-client.crt ...
	I0210 10:59:30.014813    8716 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\proxy-client.crt: {Name:mk6598afd57f2b469b6b403a769e5e456fdaf7e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 10:59:30.015813    8716 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\proxy-client.key ...
	I0210 10:59:30.015813    8716 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\proxy-client.key: {Name:mk1e06dc41c38271ae1612b06e338f09efab9113 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 10:59:30.017248    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0210 10:59:30.018045    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0210 10:59:30.018045    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0210 10:59:30.018450    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0210 10:59:30.018634    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0210 10:59:30.018836    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0210 10:59:30.018836    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0210 10:59:30.032196    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0210 10:59:30.032812    8716 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\11764.pem (1338 bytes)
	W0210 10:59:30.033333    8716 certs.go:480] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\11764_empty.pem, impossibly tiny 0 bytes
	I0210 10:59:30.033392    8716 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0210 10:59:30.033392    8716 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0210 10:59:30.033392    8716 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0210 10:59:30.033973    8716 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0210 10:59:30.033973    8716 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\117642.pem (1708 bytes)
	I0210 10:59:30.033973    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0210 10:59:30.034554    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\11764.pem -> /usr/share/ca-certificates/11764.pem
	I0210 10:59:30.034554    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\117642.pem -> /usr/share/ca-certificates/117642.pem
	I0210 10:59:30.035133    8716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0210 10:59:30.086775    8716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0210 10:59:30.126026    8716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0210 10:59:30.172847    8716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0210 10:59:30.217732    8716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0210 10:59:30.263137    8716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0210 10:59:30.306701    8716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0210 10:59:30.353203    8716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0210 10:59:30.399314    8716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0210 10:59:30.444062    8716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\11764.pem --> /usr/share/ca-certificates/11764.pem (1338 bytes)
	I0210 10:59:30.489445    8716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\117642.pem --> /usr/share/ca-certificates/117642.pem (1708 bytes)
	I0210 10:59:30.533890    8716 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
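With all keys and certs now on the guest, one useful spot check is the apiserver cert's SAN list, which per the generation step above must cover the service IP 10.96.0.1, localhost, the node IP, and the HA VIP:

  openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt |
    grep -A1 'Subject Alternative Name'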
	I0210 10:59:30.572483    8716 ssh_runner.go:195] Run: openssl version
	I0210 10:59:30.589313    8716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/117642.pem && ln -fs /usr/share/ca-certificates/117642.pem /etc/ssl/certs/117642.pem"
	I0210 10:59:30.622759    8716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/117642.pem
	I0210 10:59:30.630101    8716 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb 10 10:40 /usr/share/ca-certificates/117642.pem
	I0210 10:59:30.637366    8716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/117642.pem
	I0210 10:59:30.655319    8716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/117642.pem /etc/ssl/certs/3ec20f2e.0"
	I0210 10:59:30.684292    8716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0210 10:59:30.712971    8716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0210 10:59:30.720207    8716 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb 10 10:24 /usr/share/ca-certificates/minikubeCA.pem
	I0210 10:59:30.727833    8716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0210 10:59:30.744631    8716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0210 10:59:30.771287    8716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11764.pem && ln -fs /usr/share/ca-certificates/11764.pem /etc/ssl/certs/11764.pem"
	I0210 10:59:30.801361    8716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11764.pem
	I0210 10:59:30.808980    8716 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb 10 10:40 /usr/share/ca-certificates/11764.pem
	I0210 10:59:30.817908    8716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11764.pem
	I0210 10:59:30.836455    8716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11764.pem /etc/ssl/certs/51391683.0"
	I0210 10:59:30.864607    8716 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0210 10:59:30.871993    8716 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0210 10:59:30.872135    8716 kubeadm.go:392] StartCluster: {Name:ha-335100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:ha-335100 Namespace:default APIServerHAVIP:172.29.143.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.29.136.99 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 10:59:30.879497    8716 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0210 10:59:30.922629    8716 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0210 10:59:30.949375    8716 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0210 10:59:30.977246    8716 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0210 10:59:30.994169    8716 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0210 10:59:30.994169    8716 kubeadm.go:157] found existing configuration files:
	
	I0210 10:59:31.003103    8716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0210 10:59:31.019342    8716 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0210 10:59:31.028120    8716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0210 10:59:31.057235    8716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0210 10:59:31.074298    8716 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0210 10:59:31.084160    8716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0210 10:59:31.110937    8716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0210 10:59:31.128495    8716 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0210 10:59:31.136478    8716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0210 10:59:31.162126    8716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0210 10:59:31.179317    8716 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0210 10:59:31.187189    8716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0210 10:59:31.204912    8716 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0210 10:59:31.592529    8716 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0210 10:59:45.973286    8716 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0210 10:59:45.973433    8716 kubeadm.go:310] [preflight] Running pre-flight checks
	I0210 10:59:45.973603    8716 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0210 10:59:45.973751    8716 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0210 10:59:45.973989    8716 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0210 10:59:45.974211    8716 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0210 10:59:45.978729    8716 out.go:235]   - Generating certificates and keys ...
	I0210 10:59:45.978729    8716 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0210 10:59:45.978729    8716 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0210 10:59:45.979396    8716 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0210 10:59:45.979396    8716 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0210 10:59:45.979396    8716 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0210 10:59:45.979396    8716 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0210 10:59:45.979396    8716 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0210 10:59:45.980075    8716 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-335100 localhost] and IPs [172.29.136.99 127.0.0.1 ::1]
	I0210 10:59:45.980075    8716 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0210 10:59:45.980075    8716 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-335100 localhost] and IPs [172.29.136.99 127.0.0.1 ::1]
	I0210 10:59:45.980609    8716 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0210 10:59:45.980676    8716 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0210 10:59:45.980676    8716 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0210 10:59:45.980676    8716 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0210 10:59:45.980676    8716 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0210 10:59:45.980676    8716 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0210 10:59:45.981304    8716 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0210 10:59:45.981304    8716 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0210 10:59:45.981304    8716 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0210 10:59:45.981304    8716 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0210 10:59:45.981917    8716 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0210 10:59:45.986098    8716 out.go:235]   - Booting up control plane ...
	I0210 10:59:45.987100    8716 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0210 10:59:45.987100    8716 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0210 10:59:45.987100    8716 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0210 10:59:45.987100    8716 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0210 10:59:45.987100    8716 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0210 10:59:45.987100    8716 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0210 10:59:45.988226    8716 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0210 10:59:45.988374    8716 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0210 10:59:45.988582    8716 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001790189s
	I0210 10:59:45.988716    8716 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0210 10:59:45.988876    8716 kubeadm.go:310] [api-check] The API server is healthy after 7.00287979s
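
The two health waits above are plain HTTP polls: kubeadm keeps issuing GET requests against the endpoint until it answers 200 OK or the 4m0s budget runs out. A minimal Go sketch of that pattern, with an illustrative poll interval (the real kubeadm retry loop and backoff are not reproduced here):

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    // waitHealthy polls url until it returns HTTP 200 or the deadline passes.
    func waitHealthy(url string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := http.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond) // assumed poll interval
        }
        return fmt.Errorf("%s not healthy after %s", url, timeout)
    }

    func main() {
        // 10248 is the kubelet healthz port shown in the log line above.
        if err := waitHealthy("http://127.0.0.1:10248/healthz", 4*time.Minute); err != nil {
            fmt.Println(err)
        }
    }
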
	I0210 10:59:45.989027    8716 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0210 10:59:45.989236    8716 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0210 10:59:45.989467    8716 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0210 10:59:45.989659    8716 kubeadm.go:310] [mark-control-plane] Marking the node ha-335100 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0210 10:59:45.989990    8716 kubeadm.go:310] [bootstrap-token] Using token: 5bp9g0.cru7k30qiv98fcl0
	I0210 10:59:45.998281    8716 out.go:235]   - Configuring RBAC rules ...
	I0210 10:59:45.999094    8716 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0210 10:59:45.999230    8716 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0210 10:59:45.999230    8716 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0210 10:59:45.999810    8716 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0210 10:59:45.999810    8716 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0210 10:59:45.999810    8716 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0210 10:59:46.000412    8716 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0210 10:59:46.000522    8716 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0210 10:59:46.000522    8716 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0210 10:59:46.000522    8716 kubeadm.go:310] 
	I0210 10:59:46.000522    8716 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0210 10:59:46.000522    8716 kubeadm.go:310] 
	I0210 10:59:46.000522    8716 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0210 10:59:46.000522    8716 kubeadm.go:310] 
	I0210 10:59:46.001044    8716 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0210 10:59:46.001106    8716 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0210 10:59:46.001106    8716 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0210 10:59:46.001106    8716 kubeadm.go:310] 
	I0210 10:59:46.001106    8716 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0210 10:59:46.001106    8716 kubeadm.go:310] 
	I0210 10:59:46.001106    8716 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0210 10:59:46.001106    8716 kubeadm.go:310] 
	I0210 10:59:46.001627    8716 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0210 10:59:46.001790    8716 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0210 10:59:46.001790    8716 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0210 10:59:46.001790    8716 kubeadm.go:310] 
	I0210 10:59:46.001790    8716 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0210 10:59:46.001790    8716 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0210 10:59:46.001790    8716 kubeadm.go:310] 
	I0210 10:59:46.002384    8716 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 5bp9g0.cru7k30qiv98fcl0 \
	I0210 10:59:46.002384    8716 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1b8cb99781302ec691f777094c36ac43bdecc74e7ca118c5fbf40794f47c93c3 \
	I0210 10:59:46.002384    8716 kubeadm.go:310] 	--control-plane 
	I0210 10:59:46.002384    8716 kubeadm.go:310] 
	I0210 10:59:46.003055    8716 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0210 10:59:46.003055    8716 kubeadm.go:310] 
	I0210 10:59:46.003177    8716 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 5bp9g0.cru7k30qiv98fcl0 \
	I0210 10:59:46.003177    8716 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1b8cb99781302ec691f777094c36ac43bdecc74e7ca118c5fbf40794f47c93c3 
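
The --discovery-token-ca-cert-hash in the join commands above is the SHA-256 digest of the cluster CA certificate's DER-encoded Subject Public Key Info. A short Go sketch that recomputes it from the certificateDir reported earlier (path copied from the log; panics stand in for real error handling):

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        // ca.crt lives under the certificateDir kubeadm logged above.
        data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // kubeadm hashes the DER-encoded SubjectPublicKeyInfo, not the whole cert.
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        fmt.Printf("sha256:%x\n", sum)
    }
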
	I0210 10:59:46.003177    8716 cni.go:84] Creating CNI manager for ""
	I0210 10:59:46.003177    8716 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0210 10:59:46.011410    8716 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0210 10:59:46.026147    8716 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0210 10:59:46.034732    8716 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.1/kubectl ...
	I0210 10:59:46.034852    8716 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0210 10:59:46.082636    8716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0210 10:59:46.744093    8716 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0210 10:59:46.753093    8716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-335100 minikube.k8s.io/updated_at=2025_02_10T10_59_46_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=a597502568cd649748018b4cfeb698a4b8b36160 minikube.k8s.io/name=ha-335100 minikube.k8s.io/primary=true
	I0210 10:59:46.754094    8716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0210 10:59:46.764122    8716 ops.go:34] apiserver oom_adj: -16
	I0210 10:59:46.970456    8716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0210 10:59:47.473223    8716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0210 10:59:47.973417    8716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0210 10:59:48.472784    8716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0210 10:59:48.970999    8716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0210 10:59:49.472979    8716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0210 10:59:49.973094    8716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0210 10:59:50.471125    8716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0210 10:59:50.758721    8716 kubeadm.go:1113] duration metric: took 4.0145828s to wait for elevateKubeSystemPrivileges
	I0210 10:59:50.758721    8716 kubeadm.go:394] duration metric: took 19.8863595s to StartCluster
	I0210 10:59:50.758721    8716 settings.go:142] acquiring lock: {Name:mk66ab2e0bae08b477c4ed9caa26e688e6ce3248 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 10:59:50.758721    8716 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0210 10:59:50.760831    8716 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\kubeconfig: {Name:mkb19224ea40e2aed3ce8c31a956f5aee129caa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 10:59:50.762126    8716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0210 10:59:50.762126    8716 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.29.136.99 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0210 10:59:50.762233    8716 start.go:241] waiting for startup goroutines ...
	I0210 10:59:50.762233    8716 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0210 10:59:50.762571    8716 config.go:182] Loaded profile config "ha-335100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0210 10:59:50.762771    8716 addons.go:69] Setting default-storageclass=true in profile "ha-335100"
	I0210 10:59:50.762842    8716 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-335100"
	I0210 10:59:50.763105    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100 ).state
	I0210 10:59:50.763105    8716 addons.go:69] Setting storage-provisioner=true in profile "ha-335100"
	I0210 10:59:50.763105    8716 addons.go:238] Setting addon storage-provisioner=true in "ha-335100"
	I0210 10:59:50.763735    8716 host.go:66] Checking if "ha-335100" exists ...
	I0210 10:59:50.765784    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100 ).state
	I0210 10:59:50.917208    8716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.29.128.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0210 10:59:51.208172    8716 start.go:971] {"host.minikube.internal": 172.29.128.1} host record injected into CoreDNS's ConfigMap
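
The pipeline above edits the coredns ConfigMap in place: it inserts a hosts block mapping host.minikube.internal to the host gateway immediately before the forward plugin, adds a log directive before errors, and feeds the result to kubectl replace. A rough Go sketch of the core string transformation (the sample Corefile, indentation, and function name are assumptions for illustration):

    package main

    import (
        "fmt"
        "strings"
    )

    // injectHostRecord mirrors the sed edit in the log: a hosts block is
    // inserted immediately before the Corefile's forward plugin line.
    func injectHostRecord(corefile, gatewayIP string) string {
        hostsBlock := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }", gatewayIP)
        var out []string
        for _, line := range strings.Split(corefile, "\n") {
            if strings.Contains(line, "forward . /etc/resolv.conf") {
                out = append(out, hostsBlock)
            }
            out = append(out, line)
        }
        return strings.Join(out, "\n")
    }

    func main() {
        // Minimal stand-in Corefile; the live one carries more plugins.
        corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n}"
        fmt.Println(injectHostRecord(corefile, "172.29.128.1"))
    }
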
	I0210 10:59:52.827270    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 10:59:52.827370    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:59:52.829975    8716 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0210 10:59:52.832541    8716 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0210 10:59:52.832573    8716 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0210 10:59:52.832677    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100 ).state
	I0210 10:59:52.841454    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 10:59:52.841454    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:59:52.843380    8716 loader.go:402] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0210 10:59:52.844066    8716 kapi.go:59] client config for ha-335100: &rest.Config{Host:"https://172.29.143.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\ha-335100\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\ha-335100\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2a2d300), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0210 10:59:52.845970    8716 cert_rotation.go:140] Starting client certificate rotation controller
	I0210 10:59:52.845970    8716 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0210 10:59:52.845970    8716 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0210 10:59:52.845970    8716 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0210 10:59:52.845970    8716 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0210 10:59:52.845970    8716 addons.go:238] Setting addon default-storageclass=true in "ha-335100"
	I0210 10:59:52.845970    8716 host.go:66] Checking if "ha-335100" exists ...
	I0210 10:59:52.846968    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100 ).state
	I0210 10:59:54.996514    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 10:59:54.996514    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:59:54.996717    8716 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0210 10:59:54.996780    8716 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0210 10:59:54.996780    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100 ).state
	I0210 10:59:55.024215    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 10:59:55.024215    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:59:55.025026    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100 ).networkadapters[0]).ipaddresses[0]
	I0210 10:59:57.043087    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 10:59:57.043233    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:59:57.043287    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100 ).networkadapters[0]).ipaddresses[0]
	I0210 10:59:57.482918    8716 main.go:141] libmachine: [stdout =====>] : 172.29.136.99
	
	I0210 10:59:57.482918    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:59:57.483918    8716 sshutil.go:53] new ssh client: &{IP:172.29.136.99 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-335100\id_rsa Username:docker}
	I0210 10:59:57.628001    8716 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0210 10:59:59.477460    8716 main.go:141] libmachine: [stdout =====>] : 172.29.136.99
	
	I0210 10:59:59.477460    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:59:59.477819    8716 sshutil.go:53] new ssh client: &{IP:172.29.136.99 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-335100\id_rsa Username:docker}
	I0210 10:59:59.616409    8716 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0210 10:59:59.824967    8716 round_trippers.go:470] GET https://172.29.143.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0210 10:59:59.825005    8716 round_trippers.go:476] Request Headers:
	I0210 10:59:59.825043    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 10:59:59.825043    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 10:59:59.837942    8716 round_trippers.go:581] Response Status: 200 OK in 12 milliseconds
	I0210 10:59:59.839309    8716 round_trippers.go:470] PUT https://172.29.143.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0210 10:59:59.839309    8716 round_trippers.go:476] Request Headers:
	I0210 10:59:59.839404    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 10:59:59.839404    8716 round_trippers.go:480]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 10:59:59.839404    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 10:59:59.844121    8716 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 10:59:59.847996    8716 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0210 10:59:59.850163    8716 addons.go:514] duration metric: took 9.0878266s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0210 10:59:59.850696    8716 start.go:246] waiting for cluster config update ...
	I0210 10:59:59.850696    8716 start.go:255] writing updated cluster config ...
	I0210 10:59:59.852666    8716 out.go:201] 
	I0210 10:59:59.870702    8716 config.go:182] Loaded profile config "ha-335100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0210 10:59:59.870876    8716 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\config.json ...
	I0210 10:59:59.878569    8716 out.go:177] * Starting "ha-335100-m02" control-plane node in "ha-335100" cluster
	I0210 10:59:59.881789    8716 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime docker
	I0210 10:59:59.881789    8716 cache.go:56] Caching tarball of preloaded images
	I0210 10:59:59.882124    8716 preload.go:172] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0210 10:59:59.882124    8716 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on docker
	I0210 10:59:59.882124    8716 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\config.json ...
	I0210 10:59:59.889690    8716 start.go:360] acquireMachinesLock for ha-335100-m02: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0210 10:59:59.889690    8716 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-335100-m02"
	I0210 10:59:59.889690    8716 start.go:93] Provisioning new machine with config: &{Name:ha-335100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:ha-335100 Namespace:default APIServerHAVIP:172.29.143.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.29.136.99 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0210 10:59:59.889690    8716 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0210 10:59:59.891887    8716 out.go:235] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0210 10:59:59.893000    8716 start.go:159] libmachine.API.Create for "ha-335100" (driver="hyperv")
	I0210 10:59:59.893000    8716 client.go:168] LocalClient.Create starting
	I0210 10:59:59.893692    8716 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem
	I0210 10:59:59.893854    8716 main.go:141] libmachine: Decoding PEM data...
	I0210 10:59:59.893854    8716 main.go:141] libmachine: Parsing certificate...
	I0210 10:59:59.894033    8716 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem
	I0210 10:59:59.894230    8716 main.go:141] libmachine: Decoding PEM data...
	I0210 10:59:59.894230    8716 main.go:141] libmachine: Parsing certificate...
	I0210 10:59:59.894230    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0210 11:00:01.699114    8716 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0210 11:00:01.699114    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:00:01.699114    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0210 11:00:03.390533    8716 main.go:141] libmachine: [stdout =====>] : False
	
	I0210 11:00:03.390533    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:00:03.391287    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0210 11:00:04.825889    8716 main.go:141] libmachine: [stdout =====>] : True
	
	I0210 11:00:04.826140    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:00:04.826217    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0210 11:00:08.295330    8716 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0210 11:00:08.295330    8716 main.go:141] libmachine: [stderr =====>] : 
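
libmachine drives Hyper-V entirely through powershell.exe and consumes the JSON those commands print, as in the Get-VMSwitch exchange above. A sketch of decoding that exact stdout in Go (the struct and field names are illustrative assumptions, not minikube's actual types):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // vmSwitch mirrors the fields selected by the Get-VMSwitch query above.
    type vmSwitch struct {
        Id         string
        Name       string
        SwitchType int
    }

    func main() {
        // Literal stdout captured in the log above.
        raw := `[{"Id": "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444", "Name": "Default Switch", "SwitchType": 1}]`
        var switches []vmSwitch
        if err := json.Unmarshal([]byte(raw), &switches); err != nil {
            panic(err)
        }
        for _, s := range switches {
            fmt.Printf("using switch %q (type %d)\n", s.Name, s.SwitchType)
        }
    }
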
	I0210 11:00:08.298237    8716 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube5/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0210 11:00:08.708888    8716 main.go:141] libmachine: Creating SSH key...
	I0210 11:00:08.835827    8716 main.go:141] libmachine: Creating VM...
	I0210 11:00:08.835827    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0210 11:00:11.513110    8716 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0210 11:00:11.513110    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:00:11.513110    8716 main.go:141] libmachine: Using switch "Default Switch"
	I0210 11:00:11.513110    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0210 11:00:13.161706    8716 main.go:141] libmachine: [stdout =====>] : True
	
	I0210 11:00:13.161964    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:00:13.161964    8716 main.go:141] libmachine: Creating VHD
	I0210 11:00:13.161964    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-335100-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0210 11:00:16.827086    8716 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube5
	Path                    : C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-335100-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : C04A6B50-88FA-4BFC-8917-C96CE972A647
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0210 11:00:16.827086    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:00:16.827223    8716 main.go:141] libmachine: Writing magic tar header
	I0210 11:00:16.827223    8716 main.go:141] libmachine: Writing SSH key tar header
	I0210 11:00:16.840220    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-335100-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-335100-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0210 11:00:19.886494    8716 main.go:141] libmachine: [stdout =====>] : 
	I0210 11:00:19.886494    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:00:19.886576    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-335100-m02\disk.vhd' -SizeBytes 20000MB
	I0210 11:00:22.299448    8716 main.go:141] libmachine: [stdout =====>] : 
	I0210 11:00:22.299448    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:00:22.299448    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-335100-m02 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-335100-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0210 11:00:25.701436    8716 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-335100-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0210 11:00:25.701436    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:00:25.701875    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-335100-m02 -DynamicMemoryEnabled $false
	I0210 11:00:27.785939    8716 main.go:141] libmachine: [stdout =====>] : 
	I0210 11:00:27.785939    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:00:27.786019    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-335100-m02 -Count 2
	I0210 11:00:29.835907    8716 main.go:141] libmachine: [stdout =====>] : 
	I0210 11:00:29.836288    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:00:29.836288    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-335100-m02 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-335100-m02\boot2docker.iso'
	I0210 11:00:32.208441    8716 main.go:141] libmachine: [stdout =====>] : 
	I0210 11:00:32.208441    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:00:32.208514    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-335100-m02 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-335100-m02\disk.vhd'
	I0210 11:00:34.690203    8716 main.go:141] libmachine: [stdout =====>] : 
	I0210 11:00:34.690782    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:00:34.690782    8716 main.go:141] libmachine: Starting VM...
	I0210 11:00:34.690782    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-335100-m02
	I0210 11:00:37.534937    8716 main.go:141] libmachine: [stdout =====>] : 
	I0210 11:00:37.534937    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:00:37.534937    8716 main.go:141] libmachine: Waiting for host to start...
	I0210 11:00:37.534937    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100-m02 ).state
	I0210 11:00:39.626047    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:00:39.626584    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:00:39.626584    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 11:00:41.916910    8716 main.go:141] libmachine: [stdout =====>] : 
	I0210 11:00:41.916910    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:00:42.918554    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100-m02 ).state
	I0210 11:00:44.959711    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:00:44.959711    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:00:44.960141    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 11:00:47.227989    8716 main.go:141] libmachine: [stdout =====>] : 
	I0210 11:00:47.228891    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:00:48.229900    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100-m02 ).state
	I0210 11:00:50.198780    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:00:50.198849    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:00:50.198943    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 11:00:52.489179    8716 main.go:141] libmachine: [stdout =====>] : 
	I0210 11:00:52.489179    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:00:53.490010    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100-m02 ).state
	I0210 11:00:55.509448    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:00:55.509448    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:00:55.509593    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 11:00:57.787656    8716 main.go:141] libmachine: [stdout =====>] : 
	I0210 11:00:57.788718    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:00:58.789424    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100-m02 ).state
	I0210 11:01:00.794681    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:01:00.794681    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:01:00.794869    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 11:01:03.170642    8716 main.go:141] libmachine: [stdout =====>] : 172.29.139.212
	
	I0210 11:01:03.170642    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:01:03.171446    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100-m02 ).state
	I0210 11:01:05.170366    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:01:05.170366    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:01:05.170366    8716 machine.go:93] provisionDockerMachine start ...
	I0210 11:01:05.170366    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100-m02 ).state
	I0210 11:01:07.132675    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:01:07.132737    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:01:07.132737    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 11:01:09.469603    8716 main.go:141] libmachine: [stdout =====>] : 172.29.139.212
	
	I0210 11:01:09.470123    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:01:09.474897    8716 main.go:141] libmachine: Using SSH client type: native
	I0210 11:01:09.491952    8716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xde6ba0] 0xde96e0 <nil>  [] 0s} 172.29.139.212 22 <nil> <nil>}
	I0210 11:01:09.492033    8716 main.go:141] libmachine: About to run SSH command:
	hostname
	I0210 11:01:09.617282    8716 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0210 11:01:09.617346    8716 buildroot.go:166] provisioning hostname "ha-335100-m02"
	I0210 11:01:09.617413    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100-m02 ).state
	I0210 11:01:11.539143    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:01:11.539143    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:01:11.539143    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 11:01:13.861001    8716 main.go:141] libmachine: [stdout =====>] : 172.29.139.212
	
	I0210 11:01:13.861001    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:01:13.865451    8716 main.go:141] libmachine: Using SSH client type: native
	I0210 11:01:13.865882    8716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xde6ba0] 0xde96e0 <nil>  [] 0s} 172.29.139.212 22 <nil> <nil>}
	I0210 11:01:13.865882    8716 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-335100-m02 && echo "ha-335100-m02" | sudo tee /etc/hostname
	I0210 11:01:14.022160    8716 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-335100-m02
	
	I0210 11:01:14.022160    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100-m02 ).state
	I0210 11:01:15.960632    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:01:15.960632    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:01:15.961808    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 11:01:18.264563    8716 main.go:141] libmachine: [stdout =====>] : 172.29.139.212
	
	I0210 11:01:18.264563    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:01:18.269331    8716 main.go:141] libmachine: Using SSH client type: native
	I0210 11:01:18.269815    8716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xde6ba0] 0xde96e0 <nil>  [] 0s} 172.29.139.212 22 <nil> <nil>}
	I0210 11:01:18.269873    8716 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-335100-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-335100-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-335100-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0210 11:01:18.399387    8716 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0210 11:01:18.399387    8716 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0210 11:01:18.399387    8716 buildroot.go:174] setting up certificates
	I0210 11:01:18.399387    8716 provision.go:84] configureAuth start
	I0210 11:01:18.400212    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100-m02 ).state
	I0210 11:01:20.353105    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:01:20.353482    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:01:20.353482    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 11:01:22.675105    8716 main.go:141] libmachine: [stdout =====>] : 172.29.139.212
	
	I0210 11:01:22.675202    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:01:22.675202    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100-m02 ).state
	I0210 11:01:24.621184    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:01:24.621184    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:01:24.621184    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 11:01:26.967797    8716 main.go:141] libmachine: [stdout =====>] : 172.29.139.212
	
	I0210 11:01:26.967797    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:01:26.967797    8716 provision.go:143] copyHostCerts
	I0210 11:01:26.967797    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0210 11:01:26.967797    8716 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0210 11:01:26.967797    8716 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0210 11:01:26.968379    8716 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1675 bytes)
	I0210 11:01:26.969077    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0210 11:01:26.969077    8716 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0210 11:01:26.969077    8716 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0210 11:01:26.969700    8716 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0210 11:01:26.970550    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0210 11:01:26.970670    8716 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0210 11:01:26.970670    8716 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0210 11:01:26.970670    8716 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0210 11:01:26.971537    8716 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-335100-m02 san=[127.0.0.1 172.29.139.212 ha-335100-m02 localhost minikube]
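
provision.go:117 above issues a server certificate signed by minikube's local CA, with the node's hostname and IP baked in as SANs. A minimal Go sketch of issuing such a certificate with crypto/x509 (the throwaway self-signed CA stands in for ca.pem/ca-key.pem, errors are elided for brevity, and the org and SAN values are copied from the log line above):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Throwaway CA standing in for minikube's ca.pem/ca-key.pem.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Server cert with the org and SANs listed in the provision log above.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-335100-m02"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"ha-335100-m02", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.29.139.212")},
        }
        srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }
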
	I0210 11:01:27.041298    8716 provision.go:177] copyRemoteCerts
	I0210 11:01:27.049742    8716 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0210 11:01:27.049742    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100-m02 ).state
	I0210 11:01:29.034425    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:01:29.034425    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:01:29.034425    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 11:01:31.431100    8716 main.go:141] libmachine: [stdout =====>] : 172.29.139.212
	
	I0210 11:01:31.431100    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:01:31.431567    8716 sshutil.go:53] new ssh client: &{IP:172.29.139.212 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-335100-m02\id_rsa Username:docker}
	I0210 11:01:31.533502    8716 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.4837087s)
	I0210 11:01:31.533502    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0210 11:01:31.533502    8716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0210 11:01:31.579158    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0210 11:01:31.579158    8716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0210 11:01:31.625124    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0210 11:01:31.625559    8716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0210 11:01:31.669704    8716 provision.go:87] duration metric: took 13.2701655s to configureAuth
	I0210 11:01:31.669782    8716 buildroot.go:189] setting minikube options for container-runtime
	I0210 11:01:31.670349    8716 config.go:182] Loaded profile config "ha-335100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0210 11:01:31.670413    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100-m02 ).state
	I0210 11:01:33.626886    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:01:33.627061    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:01:33.627061    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 11:01:35.947985    8716 main.go:141] libmachine: [stdout =====>] : 172.29.139.212
	
	I0210 11:01:35.948215    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:01:35.954043    8716 main.go:141] libmachine: Using SSH client type: native
	I0210 11:01:35.954043    8716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xde6ba0] 0xde96e0 <nil>  [] 0s} 172.29.139.212 22 <nil> <nil>}
	I0210 11:01:35.954043    8716 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0210 11:01:36.085121    8716 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0210 11:01:36.085121    8716 buildroot.go:70] root file system type: tmpfs
	I0210 11:01:36.085329    8716 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0210 11:01:36.085431    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100-m02 ).state
	I0210 11:01:38.029884    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:01:38.029884    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:01:38.029957    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 11:01:40.334860    8716 main.go:141] libmachine: [stdout =====>] : 172.29.139.212
	
	I0210 11:01:40.334860    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:01:40.339753    8716 main.go:141] libmachine: Using SSH client type: native
	I0210 11:01:40.340124    8716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xde6ba0] 0xde96e0 <nil>  [] 0s} 172.29.139.212 22 <nil> <nil>}
	I0210 11:01:40.340208    8716 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.29.136.99"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0210 11:01:40.489637    8716 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.29.136.99
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
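The unit being written shows the standard systemd override pattern its own comments describe: the empty ExecStart= first clears any command inherited from a base unit, so the ExecStart= that follows is the only one and systemd does not reject the service with the "more than one ExecStart=" error. A trimmed-down sketch of rendering such a unit with text/template (illustrative, not minikube's actual template; NO_PROXY carries the primary node's IP, 172.29.136.99 in this run):

```go
package main

import (
	"os"
	"text/template"
)

// A much-reduced stand-in for the unit written above: the empty
// ExecStart= clears any command inherited from a base unit, so the
// second ExecStart= is the only one systemd sees.
const unit = `[Unit]
Description=Docker Application Container Engine
Requires=minikube-automount.service docker.socket

[Service]
Type=notify
{{if .NoProxy}}Environment="NO_PROXY={{.NoProxy}}"{{end}}
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock

[Install]
WantedBy=multi-user.target
`

func main() {
	t := template.Must(template.New("docker.service").Parse(unit))
	// NO_PROXY is the primary control-plane IP so dockerd skips the
	// proxy for in-cluster traffic, as in the rendered unit above.
	t.Execute(os.Stdout, struct{ NoProxy string }{"172.29.136.99"})
}
```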
	
	I0210 11:01:40.489754    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100-m02 ).state
	I0210 11:01:42.449900    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:01:42.450419    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:01:42.450419    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 11:01:44.784904    8716 main.go:141] libmachine: [stdout =====>] : 172.29.139.212
	
	I0210 11:01:44.784973    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:01:44.789241    8716 main.go:141] libmachine: Using SSH client type: native
	I0210 11:01:44.789724    8716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xde6ba0] 0xde96e0 <nil>  [] 0s} 172.29.139.212 22 <nil> <nil>}
	I0210 11:01:44.789789    8716 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0210 11:01:46.970328    8716 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
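The diff-or-replace one-liner keeps the update idempotent: docker is only swapped and restarted when the rendered unit differs from the installed one, or, as here, when no unit exists yet (hence diff's "No such file or directory" followed by the fresh enable symlink). A rough Go equivalent of that guard (paths and the restart hook are illustrative):

```go
package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// replaceIfChanged swaps newPath into path and runs restart only when
// the contents differ or path does not exist yet -- the same guard the
// shell expresses as `sudo diff -u old new || { mv ...; restart; }`.
func replaceIfChanged(path, newPath string, restart func() error) error {
	old, readErr := os.ReadFile(path)
	cur, err := os.ReadFile(newPath)
	if err != nil {
		return err
	}
	if readErr == nil && bytes.Equal(old, cur) {
		return os.Remove(newPath) // identical: nothing to do
	}
	if err := os.Rename(newPath, path); err != nil {
		return err
	}
	return restart()
}

func main() {
	err := replaceIfChanged(
		"/lib/systemd/system/docker.service",
		"/lib/systemd/system/docker.service.new",
		func() error {
			return exec.Command("systemctl", "restart", "docker").Run()
		})
	fmt.Println("update:", err)
}
```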
	
	I0210 11:01:46.970328    8716 machine.go:96] duration metric: took 41.7994851s to provisionDockerMachine
	I0210 11:01:46.970328    8716 client.go:171] duration metric: took 1m47.0761072s to LocalClient.Create
	I0210 11:01:46.970328    8716 start.go:167] duration metric: took 1m47.0761072s to libmachine.API.Create "ha-335100"
	I0210 11:01:46.970328    8716 start.go:293] postStartSetup for "ha-335100-m02" (driver="hyperv")
	I0210 11:01:46.970328    8716 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0210 11:01:46.981275    8716 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0210 11:01:46.981275    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100-m02 ).state
	I0210 11:01:48.926581    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:01:48.926581    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:01:48.927472    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 11:01:51.248947    8716 main.go:141] libmachine: [stdout =====>] : 172.29.139.212
	
	I0210 11:01:51.249888    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:01:51.250373    8716 sshutil.go:53] new ssh client: &{IP:172.29.139.212 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-335100-m02\id_rsa Username:docker}
	I0210 11:01:51.351125    8716 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.3697162s)
	I0210 11:01:51.360558    8716 ssh_runner.go:195] Run: cat /etc/os-release
	I0210 11:01:51.367670    8716 info.go:137] Remote host: Buildroot 2023.02.9
	I0210 11:01:51.367670    8716 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0210 11:01:51.367982    8716 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0210 11:01:51.368535    8716 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\117642.pem -> 117642.pem in /etc/ssl/certs
	I0210 11:01:51.368608    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\117642.pem -> /etc/ssl/certs/117642.pem
	I0210 11:01:51.376762    8716 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0210 11:01:51.394243    8716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\117642.pem --> /etc/ssl/certs/117642.pem (1708 bytes)
	I0210 11:01:51.440418    8716 start.go:296] duration metric: took 4.4700386s for postStartSetup
	I0210 11:01:51.443126    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100-m02 ).state
	I0210 11:01:53.440440    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:01:53.440440    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:01:53.440440    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 11:01:55.768896    8716 main.go:141] libmachine: [stdout =====>] : 172.29.139.212
	
	I0210 11:01:55.768896    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:01:55.769237    8716 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\config.json ...
	I0210 11:01:55.771058    8716 start.go:128] duration metric: took 1m55.8800464s to createHost
	I0210 11:01:55.771058    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100-m02 ).state
	I0210 11:01:57.700253    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:01:57.700799    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:01:57.700891    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 11:02:00.039909    8716 main.go:141] libmachine: [stdout =====>] : 172.29.139.212
	
	I0210 11:02:00.039909    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:02:00.044236    8716 main.go:141] libmachine: Using SSH client type: native
	I0210 11:02:00.044645    8716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xde6ba0] 0xde96e0 <nil>  [] 0s} 172.29.139.212 22 <nil> <nil>}
	I0210 11:02:00.044645    8716 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0210 11:02:00.167124    8716 main.go:141] libmachine: SSH cmd err, output: <nil>: 1739185320.168707328
	
	I0210 11:02:00.167227    8716 fix.go:216] guest clock: 1739185320.168707328
	I0210 11:02:00.167227    8716 fix.go:229] Guest: 2025-02-10 11:02:00.168707328 +0000 UTC Remote: 2025-02-10 11:01:55.7710581 +0000 UTC m=+305.887850501 (delta=4.397649228s)
	I0210 11:02:00.167227    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100-m02 ).state
	I0210 11:02:02.153518    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:02:02.153518    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:02:02.153518    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 11:02:04.470036    8716 main.go:141] libmachine: [stdout =====>] : 172.29.139.212
	
	I0210 11:02:04.471102    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:02:04.478888    8716 main.go:141] libmachine: Using SSH client type: native
	I0210 11:02:04.479484    8716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xde6ba0] 0xde96e0 <nil>  [] 0s} 172.29.139.212 22 <nil> <nil>}
	I0210 11:02:04.479484    8716 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1739185320
	I0210 11:02:04.617793    8716 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Feb 10 11:02:00 UTC 2025
	
	I0210 11:02:04.617860    8716 fix.go:236] clock set: Mon Feb 10 11:02:00 UTC 2025
	 (err=<nil>)
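The clock fix works by sampling the guest with `date +%s.%N`, comparing against the host's view of the time, and stamping the guest with the host epoch when the drift is too large; here the delta was about 4.4s. A sketch of that comparison using the values from this run (the 2s threshold is an assumption, not minikube's actual cutoff):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// needsClockSet parses the guest's `date +%s.%N` output and reports
// whether it drifts from the host clock by more than threshold.
func needsClockSet(guestOut string, host time.Time, threshold time.Duration) (bool, time.Duration, error) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
	if err != nil {
		return false, 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta > threshold, delta, nil
}

func main() {
	// Values from the log: guest 1739185320.168707328 against the
	// host's 2025-02-10 11:01:55.77 UTC -- a delta of roughly 4.4s.
	host := time.Date(2025, 2, 10, 11, 1, 55, 771058100, time.UTC)
	set, delta, _ := needsClockSet("1739185320.168707328", host, 2*time.Second)
	fmt.Printf("set clock: %v (delta %v)\n", set, delta)
	// When true, the fix is the command in the log:
	//   sudo date -s @1739185320
}
```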
	I0210 11:02:04.617860    8716 start.go:83] releasing machines lock for "ha-335100-m02", held for 2m4.7267469s
	I0210 11:02:04.617927    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100-m02 ).state
	I0210 11:02:06.578392    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:02:06.579361    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:02:06.579361    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 11:02:08.899139    8716 main.go:141] libmachine: [stdout =====>] : 172.29.139.212
	
	I0210 11:02:08.899139    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:02:08.902892    8716 out.go:177] * Found network options:
	I0210 11:02:08.905912    8716 out.go:177]   - NO_PROXY=172.29.136.99
	W0210 11:02:08.907409    8716 proxy.go:119] fail to check proxy env: Error ip not in block
	I0210 11:02:08.910037    8716 out.go:177]   - NO_PROXY=172.29.136.99
	W0210 11:02:08.912141    8716 proxy.go:119] fail to check proxy env: Error ip not in block
	W0210 11:02:08.913286    8716 proxy.go:119] fail to check proxy env: Error ip not in block
	I0210 11:02:08.915397    8716 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0210 11:02:08.915462    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100-m02 ).state
	I0210 11:02:08.921393    8716 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0210 11:02:08.921393    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100-m02 ).state
	I0210 11:02:10.859711    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:02:10.860158    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:02:10.860158    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 11:02:10.885515    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:02:10.885515    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:02:10.885515    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 11:02:13.273340    8716 main.go:141] libmachine: [stdout =====>] : 172.29.139.212
	
	I0210 11:02:13.273340    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:02:13.274577    8716 sshutil.go:53] new ssh client: &{IP:172.29.139.212 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-335100-m02\id_rsa Username:docker}
	I0210 11:02:13.298712    8716 main.go:141] libmachine: [stdout =====>] : 172.29.139.212
	
	I0210 11:02:13.298819    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:02:13.299186    8716 sshutil.go:53] new ssh client: &{IP:172.29.139.212 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-335100-m02\id_rsa Username:docker}
	I0210 11:02:13.368144    8716 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.4466995s)
	W0210 11:02:13.368226    8716 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0210 11:02:13.376237    8716 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0210 11:02:13.381907    8716 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.4663934s)
	W0210 11:02:13.381984    8716 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
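The status-127 failure appears to be the Windows binary name leaking into a command executed inside the Linux guest: only `curl`, not `curl.exe`, exists there, so the reachability probe cannot succeed, and the proxy warning a few lines below follows. A sketch of a platform-aware probe (purely illustrative):

```go
package main

import (
	"fmt"
	"os/exec"
	"runtime"
)

// curlBinary picks the binary name for the platform the command will
// actually run on. The log shows the pitfall: the probe ran inside
// the Linux guest but kept the Windows name "curl.exe" (status 127).
func curlBinary(targetGOOS string) string {
	if targetGOOS == "windows" {
		return "curl.exe"
	}
	return "curl"
}

func main() {
	// Probe the registry the same way the log does; here the target
	// is the local machine, so runtime.GOOS is the right choice.
	bin := curlBinary(runtime.GOOS)
	err := exec.Command(bin, "-sS", "-m", "2", "https://registry.k8s.io/").Run()
	fmt.Printf("%s probe: err=%v\n", bin, err)
}
```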
	I0210 11:02:13.409246    8716 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0210 11:02:13.409246    8716 start.go:495] detecting cgroup driver to use...
	I0210 11:02:13.409246    8716 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0210 11:02:13.453127    8716 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	W0210 11:02:13.471111    8716 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0210 11:02:13.471261    8716 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0210 11:02:13.484197    8716 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0210 11:02:13.504321    8716 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0210 11:02:13.513799    8716 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0210 11:02:13.542123    8716 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0210 11:02:13.572213    8716 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0210 11:02:13.602594    8716 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0210 11:02:13.630093    8716 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0210 11:02:13.658122    8716 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0210 11:02:13.686231    8716 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0210 11:02:13.715695    8716 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0210 11:02:13.742863    8716 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0210 11:02:13.761467    8716 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0210 11:02:13.770510    8716 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0210 11:02:13.807411    8716 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
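The sysctl probe fails with status 255 only because br_netfilter is not loaded yet; the runner tolerates that, loads the module, and then enables IPv4 forwarding. The same tolerate-then-load sequence, sketched for a Linux host (needs root, like the sudo commands above):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// ensureBridgeNetfilter mirrors the sequence in the log: if the
// bridge-nf sysctl entry is missing, load br_netfilter (which creates
// it), then turn on IPv4 forwarding.
func ensureBridgeNetfilter() error {
	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
		// "cannot stat ...: No such file or directory" in the log;
		// loading the module creates the sysctl entries.
		if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
			return err
		}
	}
	return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644)
}

func main() {
	fmt.Println("bridge netfilter:", ensureBridgeNetfilter())
}
```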
	I0210 11:02:13.838963    8716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 11:02:14.021853    8716 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0210 11:02:14.056814    8716 start.go:495] detecting cgroup driver to use...
	I0210 11:02:14.065818    8716 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0210 11:02:14.099416    8716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0210 11:02:14.127804    8716 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0210 11:02:14.164300    8716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0210 11:02:14.195269    8716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0210 11:02:14.227292    8716 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0210 11:02:14.287413    8716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0210 11:02:14.310609    8716 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0210 11:02:14.354206    8716 ssh_runner.go:195] Run: which cri-dockerd
	I0210 11:02:14.368335    8716 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0210 11:02:14.384860    8716 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0210 11:02:14.424674    8716 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0210 11:02:14.622004    8716 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0210 11:02:14.810797    8716 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0210 11:02:14.810940    8716 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0210 11:02:14.850103    8716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 11:02:15.040191    8716 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0210 11:02:17.620616    8716 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5803953s)
	I0210 11:02:17.629553    8716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0210 11:02:17.661504    8716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0210 11:02:17.692500    8716 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0210 11:02:17.878833    8716 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0210 11:02:18.070609    8716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 11:02:18.270940    8716 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0210 11:02:18.307647    8716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0210 11:02:18.342442    8716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 11:02:18.530245    8716 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0210 11:02:18.631776    8716 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0210 11:02:18.640773    8716 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0210 11:02:18.649619    8716 start.go:563] Will wait 60s for crictl version
	I0210 11:02:18.658384    8716 ssh_runner.go:195] Run: which crictl
	I0210 11:02:18.672692    8716 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0210 11:02:18.735821    8716 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.4.0
	RuntimeApiVersion:  v1
	I0210 11:02:18.742825    8716 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0210 11:02:18.786756    8716 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0210 11:02:18.820953    8716 out.go:235] * Preparing Kubernetes v1.32.1 on Docker 27.4.0 ...
	I0210 11:02:18.824722    8716 out.go:177]   - env NO_PROXY=172.29.136.99
	I0210 11:02:18.826709    8716 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0210 11:02:18.829696    8716 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0210 11:02:18.830696    8716 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0210 11:02:18.830696    8716 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0210 11:02:18.830696    8716 ip.go:211] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:8b:81:3d Flags:up|broadcast|multicast|running}
	I0210 11:02:18.832698    8716 ip.go:214] interface addr: fe80::b840:883f:e0df:bdfd/64
	I0210 11:02:18.832698    8716 ip.go:214] interface addr: 172.29.128.1/20
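host.minikube.internal is resolved by scanning the host's interfaces for the one whose name starts with "vEthernet (Default Switch)" and taking its IPv4 address (172.29.128.1/20 on this host, with "Ethernet 2" and the loopback rejected along the way). A minimal version of that scan with the standard library:

```go
package main

import (
	"fmt"
	"net"
	"strings"
)

// ipForInterface returns the first IPv4 address on the interface
// whose name starts with prefix, mirroring the search in the log.
func ipForInterface(prefix string) (net.IP, error) {
	ifaces, err := net.Interfaces()
	if err != nil {
		return nil, err
	}
	for _, iface := range ifaces {
		if !strings.HasPrefix(iface.Name, prefix) {
			continue // e.g. "Ethernet 2" does not match
		}
		addrs, err := iface.Addrs()
		if err != nil {
			return nil, err
		}
		for _, a := range addrs {
			if ipnet, ok := a.(*net.IPNet); ok && ipnet.IP.To4() != nil {
				return ipnet.IP, nil // 172.29.128.1 in this run
			}
		}
	}
	return nil, fmt.Errorf("no interface matching %q", prefix)
}

func main() {
	ip, err := ipForInterface("vEthernet (Default Switch)")
	fmt.Println(ip, err)
}
```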
	I0210 11:02:18.841020    8716 ssh_runner.go:195] Run: grep 172.29.128.1	host.minikube.internal$ /etc/hosts
	I0210 11:02:18.847456    8716 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.29.128.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0210 11:02:18.869084    8716 mustload.go:65] Loading cluster: ha-335100
	I0210 11:02:18.869721    8716 config.go:182] Loaded profile config "ha-335100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0210 11:02:18.870511    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100 ).state
	I0210 11:02:20.797037    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:02:20.797037    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:02:20.797037    8716 host.go:66] Checking if "ha-335100" exists ...
	I0210 11:02:20.797923    8716 certs.go:68] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100 for IP: 172.29.139.212
	I0210 11:02:20.797997    8716 certs.go:194] generating shared ca certs ...
	I0210 11:02:20.797997    8716 certs.go:226] acquiring lock for ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 11:02:20.798259    8716 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0210 11:02:20.798875    8716 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0210 11:02:20.798875    8716 certs.go:256] generating profile certs ...
	I0210 11:02:20.799494    8716 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\client.key
	I0210 11:02:20.799593    8716 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\apiserver.key.b5605ac5
	I0210 11:02:20.799676    8716 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\apiserver.crt.b5605ac5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.29.136.99 172.29.139.212 172.29.143.254]
	I0210 11:02:20.958401    8716 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\apiserver.crt.b5605ac5 ...
	I0210 11:02:20.958401    8716 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\apiserver.crt.b5605ac5: {Name:mk82cfde7602081e3f5ad03699e241ce1d0a9ea4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 11:02:20.959541    8716 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\apiserver.key.b5605ac5 ...
	I0210 11:02:20.960550    8716 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\apiserver.key.b5605ac5: {Name:mk13bc1ebe7613f673c88f9bec73e4d38c972417 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 11:02:20.960789    8716 certs.go:381] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\apiserver.crt.b5605ac5 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\apiserver.crt
	I0210 11:02:20.977980    8716 certs.go:385] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\apiserver.key.b5605ac5 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\apiserver.key
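The apiserver certificate carries every address the server may be reached on: the in-cluster service IP 10.96.0.1, loopback, both control-plane node IPs, and the kube-vip VIP 172.29.143.254, so a single cert stays valid however clients connect. A compressed sketch of issuing such a cert with crypto/x509 (self-signed here for brevity; the real cert is signed by the minikubeCA key):

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// The SAN IPs from the log: service IP, loopback, both node
		// IPs, and the HA virtual IP served by kube-vip.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("172.29.136.99"),
			net.ParseIP("172.29.139.212"), net.ParseIP("172.29.143.254"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```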
	I0210 11:02:20.978663    8716 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\proxy-client.key
	I0210 11:02:20.978663    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0210 11:02:20.978663    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0210 11:02:20.978663    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0210 11:02:20.979210    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0210 11:02:20.979386    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0210 11:02:20.979483    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0210 11:02:20.979758    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0210 11:02:20.980387    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0210 11:02:20.980521    8716 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\11764.pem (1338 bytes)
	W0210 11:02:20.980521    8716 certs.go:480] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\11764_empty.pem, impossibly tiny 0 bytes
	I0210 11:02:20.981041    8716 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0210 11:02:20.981294    8716 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0210 11:02:20.981484    8716 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0210 11:02:20.981707    8716 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0210 11:02:20.982048    8716 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\117642.pem (1708 bytes)
	I0210 11:02:20.982233    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0210 11:02:20.982233    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\11764.pem -> /usr/share/ca-certificates/11764.pem
	I0210 11:02:20.982233    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\117642.pem -> /usr/share/ca-certificates/117642.pem
	I0210 11:02:20.982233    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100 ).state
	I0210 11:02:22.925555    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:02:22.925555    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:02:22.925636    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100 ).networkadapters[0]).ipaddresses[0]
	I0210 11:02:25.302582    8716 main.go:141] libmachine: [stdout =====>] : 172.29.136.99
	
	I0210 11:02:25.302582    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:02:25.302582    8716 sshutil.go:53] new ssh client: &{IP:172.29.136.99 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-335100\id_rsa Username:docker}
	I0210 11:02:25.398014    8716 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0210 11:02:25.406219    8716 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0210 11:02:25.433854    8716 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0210 11:02:25.440242    8716 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0210 11:02:25.467133    8716 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0210 11:02:25.474577    8716 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0210 11:02:25.501232    8716 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0210 11:02:25.508498    8716 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0210 11:02:25.537855    8716 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0210 11:02:25.550526    8716 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0210 11:02:25.580257    8716 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0210 11:02:25.587090    8716 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0210 11:02:25.608202    8716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0210 11:02:25.655072    8716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0210 11:02:25.701471    8716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0210 11:02:25.747891    8716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0210 11:02:25.792087    8716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0210 11:02:25.837433    8716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0210 11:02:25.884923    8716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0210 11:02:25.929037    8716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0210 11:02:25.973387    8716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0210 11:02:26.017365    8716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\11764.pem --> /usr/share/ca-certificates/11764.pem (1338 bytes)
	I0210 11:02:26.061423    8716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\117642.pem --> /usr/share/ca-certificates/117642.pem (1708 bytes)
	I0210 11:02:26.106403    8716 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0210 11:02:26.137699    8716 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0210 11:02:26.173512    8716 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0210 11:02:26.204250    8716 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0210 11:02:26.233742    8716 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0210 11:02:26.263263    8716 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0210 11:02:26.297428    8716 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0210 11:02:26.336228    8716 ssh_runner.go:195] Run: openssl version
	I0210 11:02:26.352088    8716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11764.pem && ln -fs /usr/share/ca-certificates/11764.pem /etc/ssl/certs/11764.pem"
	I0210 11:02:26.380073    8716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11764.pem
	I0210 11:02:26.387017    8716 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb 10 10:40 /usr/share/ca-certificates/11764.pem
	I0210 11:02:26.394490    8716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11764.pem
	I0210 11:02:26.415656    8716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11764.pem /etc/ssl/certs/51391683.0"
	I0210 11:02:26.444704    8716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/117642.pem && ln -fs /usr/share/ca-certificates/117642.pem /etc/ssl/certs/117642.pem"
	I0210 11:02:26.471507    8716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/117642.pem
	I0210 11:02:26.479380    8716 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb 10 10:40 /usr/share/ca-certificates/117642.pem
	I0210 11:02:26.487756    8716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/117642.pem
	I0210 11:02:26.504917    8716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/117642.pem /etc/ssl/certs/3ec20f2e.0"
	I0210 11:02:26.534028    8716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0210 11:02:26.560253    8716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0210 11:02:26.567133    8716 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb 10 10:24 /usr/share/ca-certificates/minikubeCA.pem
	I0210 11:02:26.576071    8716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0210 11:02:26.592019    8716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0210 11:02:26.618436    8716 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0210 11:02:26.624985    8716 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0210 11:02:26.625556    8716 kubeadm.go:934] updating node {m02 172.29.139.212 8443 v1.32.1 docker true true} ...
	I0210 11:02:26.625703    8716 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-335100-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.29.139.212
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:ha-335100 Namespace:default APIServerHAVIP:172.29.143.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0210 11:02:26.625740    8716 kube-vip.go:115] generating kube-vip config ...
	I0210 11:02:26.633935    8716 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0210 11:02:26.660051    8716 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0210 11:02:26.660051    8716 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.29.143.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.9
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
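The generated static pod puts the VIP 172.29.143.254 on eth0 of whichever control-plane node currently holds the plndr-cp-lock lease, and with lb_enable it also balances port 8443 across the API servers, which is what keeps control-plane.minikube.internal reachable during failover. A quick sanity check is to round-trip the rendered manifest through a YAML parser; a sketch assuming gopkg.in/yaml.v3 is available and the manifest above is saved as kube-vip.yaml:

```go
package main

import (
	"fmt"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	data, err := os.ReadFile("kube-vip.yaml") // the manifest shown above
	if err != nil {
		panic(err)
	}
	var pod map[string]any
	if err := yaml.Unmarshal(data, &pod); err != nil {
		panic(err) // malformed indentation would surface here
	}
	spec := pod["spec"].(map[string]any)
	c := spec["containers"].([]any)[0].(map[string]any)
	fmt.Println("image:", c["image"], "hostNetwork:", spec["hostNetwork"])
}
```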
	I0210 11:02:26.669822    8716 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0210 11:02:26.684337    8716 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.32.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.32.1': No such file or directory
	
	Initiating transfer...
	I0210 11:02:26.693291    8716 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.32.1
	I0210 11:02:26.714179    8716 download.go:108] Downloading: https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubelet.sha256 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.32.1/kubelet
	I0210 11:02:26.714179    8716 download.go:108] Downloading: https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubeadm.sha256 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.32.1/kubeadm
	I0210 11:02:26.714179    8716 download.go:108] Downloading: https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubectl.sha256 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.32.1/kubectl
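Each download URL carries a `?checksum=file:…sha256` hint: the digest is fetched from the published .sha256 file and the cached artifact is rejected on mismatch. The verification half in a few lines (arguments are illustrative: a local file plus the body of the published .sha256 file):

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"os"
	"strings"
)

// verifySHA256 hashes path and compares it to the hex digest that a
// published .sha256 file carries (its first whitespace field).
func verifySHA256(path, published string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()
	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	got := hex.EncodeToString(h.Sum(nil))
	want := strings.Fields(published)[0]
	if got != want {
		return fmt.Errorf("checksum mismatch: got %s want %s", got, want)
	}
	return nil
}

func main() {
	// e.g. the cached kubelet against the body of kubelet.sha256.
	fmt.Println(verifySHA256(os.Args[1], os.Args[2]))
}
```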
	I0210 11:02:27.770020    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.32.1/kubectl -> /var/lib/minikube/binaries/v1.32.1/kubectl
	I0210 11:02:27.780672    8716 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.1/kubectl
	I0210 11:02:27.787910    8716 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.1/kubectl': No such file or directory
	I0210 11:02:27.787910    8716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.32.1/kubectl --> /var/lib/minikube/binaries/v1.32.1/kubectl (57323672 bytes)
	I0210 11:02:27.872519    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.32.1/kubeadm -> /var/lib/minikube/binaries/v1.32.1/kubeadm
	I0210 11:02:27.880459    8716 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.1/kubeadm
	I0210 11:02:27.917581    8716 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.1/kubeadm': No such file or directory
	I0210 11:02:27.917581    8716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.32.1/kubeadm --> /var/lib/minikube/binaries/v1.32.1/kubeadm (70942872 bytes)
	I0210 11:02:27.967579    8716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0210 11:02:28.036199    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.32.1/kubelet -> /var/lib/minikube/binaries/v1.32.1/kubelet
	I0210 11:02:28.043257    8716 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.1/kubelet
	I0210 11:02:28.066642    8716 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.1/kubelet': No such file or directory
	I0210 11:02:28.067749    8716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.32.1/kubelet --> /var/lib/minikube/binaries/v1.32.1/kubelet (77398276 bytes)
	I0210 11:02:28.966573    8716 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0210 11:02:28.984576    8716 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0210 11:02:29.016695    8716 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0210 11:02:29.049043    8716 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0210 11:02:29.098199    8716 ssh_runner.go:195] Run: grep 172.29.143.254	control-plane.minikube.internal$ /etc/hosts
	I0210 11:02:29.104801    8716 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.29.143.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0210 11:02:29.137742    8716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 11:02:29.343297    8716 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0210 11:02:29.370828    8716 host.go:66] Checking if "ha-335100" exists ...
	I0210 11:02:29.371612    8716 start.go:317] joinCluster: &{Name:ha-335100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:ha-335100 Namespace:default APIServerHAVIP:172.29.143.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.29.136.99 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.29.139.212 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 11:02:29.371612    8716 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0210 11:02:29.371612    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100 ).state
	I0210 11:02:31.319574    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:02:31.319574    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:02:31.320328    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100 ).networkadapters[0]).ipaddresses[0]
	I0210 11:02:33.707382    8716 main.go:141] libmachine: [stdout =====>] : 172.29.136.99
	
	I0210 11:02:33.708241    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:02:33.708637    8716 sshutil.go:53] new ssh client: &{IP:172.29.136.99 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-335100\id_rsa Username:docker}
	I0210 11:02:34.143761    8716 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm token create --print-join-command --ttl=0": (4.7720937s)
	I0210 11:02:34.143761    8716 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:172.29.139.212 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0210 11:02:34.143761    8716 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token nn2t6d.ycdpzx2fx9wduepx --discovery-token-ca-cert-hash sha256:1b8cb99781302ec691f777094c36ac43bdecc74e7ca118c5fbf40794f47c93c3 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-335100-m02 --control-plane --apiserver-advertise-address=172.29.139.212 --apiserver-bind-port=8443"
	I0210 11:03:12.989922    8716 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token nn2t6d.ycdpzx2fx9wduepx --discovery-token-ca-cert-hash sha256:1b8cb99781302ec691f777094c36ac43bdecc74e7ca118c5fbf40794f47c93c3 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-335100-m02 --control-plane --apiserver-advertise-address=172.29.139.212 --apiserver-bind-port=8443": (38.8457143s)
	I0210 11:03:12.989979    8716 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0210 11:03:13.797742    8716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-335100-m02 minikube.k8s.io/updated_at=2025_02_10T11_03_13_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=a597502568cd649748018b4cfeb698a4b8b36160 minikube.k8s.io/name=ha-335100 minikube.k8s.io/primary=false
	I0210 11:03:14.419316    8716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-335100-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0210 11:03:14.674893    8716 start.go:319] duration metric: took 45.3027603s to joinCluster
	I0210 11:03:14.675091    8716 start.go:235] Will wait 6m0s for node &{Name:m02 IP:172.29.139.212 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0210 11:03:14.675640    8716 config.go:182] Loaded profile config "ha-335100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0210 11:03:14.679021    8716 out.go:177] * Verifying Kubernetes components...
	I0210 11:03:14.689823    8716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 11:03:15.042793    8716 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0210 11:03:15.081446    8716 loader.go:402] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0210 11:03:15.082258    8716 kapi.go:59] client config for ha-335100: &rest.Config{Host:"https://172.29.143.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\ha-335100\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\ha-335100\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2a2d300), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0210 11:03:15.082470    8716 kubeadm.go:483] Overriding stale ClientConfig host https://172.29.143.254:8443 with https://172.29.136.99:8443
	I0210 11:03:15.083662    8716 node_ready.go:35] waiting up to 6m0s for node "ha-335100-m02" to be "Ready" ...
	I0210 11:03:15.083963    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:03:15.083963    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:15.084014    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:15.084014    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:15.107403    8716 round_trippers.go:581] Response Status: 200 OK in 23 milliseconds
	I0210 11:03:15.584470    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:03:15.584470    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:15.584541    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:15.584541    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:15.589089    8716 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 11:03:16.084871    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:03:16.084871    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:16.084871    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:16.084871    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:16.090688    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:03:16.584610    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:03:16.584610    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:16.584610    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:16.584610    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:16.590672    8716 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0210 11:03:17.085070    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:03:17.085070    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:17.085143    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:17.085143    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:17.089946    8716 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 11:03:17.090174    8716 node_ready.go:53] node "ha-335100-m02" has status "Ready":"False"
	I0210 11:03:17.584624    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:03:17.584624    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:17.584624    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:17.584624    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:17.589080    8716 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 11:03:18.084201    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:03:18.084201    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:18.084201    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:18.084201    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:18.089811    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:03:18.584184    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:03:18.584184    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:18.584184    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:18.584184    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:18.590444    8716 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0210 11:03:19.083886    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:03:19.083886    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:19.083886    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:19.083886    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:19.099984    8716 round_trippers.go:581] Response Status: 200 OK in 16 milliseconds
	I0210 11:03:19.100297    8716 node_ready.go:53] node "ha-335100-m02" has status "Ready":"False"
	I0210 11:03:19.584637    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:03:19.584801    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:19.584801    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:19.584801    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:19.711918    8716 round_trippers.go:581] Response Status: 200 OK in 127 milliseconds
	I0210 11:03:20.085313    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:03:20.085361    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:20.085395    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:20.085395    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:20.089093    8716 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 11:03:20.586032    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:03:20.586032    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:20.586032    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:20.586032    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:20.590840    8716 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 11:03:21.085138    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:03:21.085624    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:21.085624    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:21.085624    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:21.090408    8716 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 11:03:21.585525    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:03:21.585525    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:21.585525    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:21.585525    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:21.656428    8716 round_trippers.go:581] Response Status: 200 OK in 70 milliseconds
	I0210 11:03:21.656880    8716 node_ready.go:53] node "ha-335100-m02" has status "Ready":"False"
	I0210 11:03:22.084469    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:03:22.084469    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:22.084469    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:22.084469    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:22.089095    8716 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 11:03:22.585323    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:03:22.585323    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:22.585323    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:22.585323    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:22.590518    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:03:23.084284    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:03:23.084284    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:23.084284    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:23.084284    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:23.094212    8716 round_trippers.go:581] Response Status: 200 OK in 9 milliseconds
	I0210 11:03:23.584852    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:03:23.584852    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:23.584852    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:23.584852    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:23.590435    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:03:24.084090    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:03:24.084090    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:24.084090    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:24.084090    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:24.089425    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:03:24.089737    8716 node_ready.go:53] node "ha-335100-m02" has status "Ready":"False"
	I0210 11:03:24.584379    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:03:24.584379    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:24.584379    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:24.584379    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:24.590303    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:03:25.084837    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:03:25.084837    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:25.084837    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:25.084837    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:25.089869    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:03:25.584224    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:03:25.584224    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:25.584224    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:25.584224    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:25.589595    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:03:26.084769    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:03:26.084769    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:26.084769    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:26.084769    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:26.090444    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:03:26.090873    8716 node_ready.go:53] node "ha-335100-m02" has status "Ready":"False"
	I0210 11:03:26.584841    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:03:26.584841    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:26.584841    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:26.584841    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:26.590426    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:03:27.084698    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:03:27.084767    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:27.084767    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:27.084835    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:27.092015    8716 round_trippers.go:581] Response Status: 200 OK in 7 milliseconds
	I0210 11:03:27.584463    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:03:27.584463    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:27.584463    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:27.584463    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:27.589546    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:03:28.085081    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:03:28.085081    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:28.085151    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:28.085151    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:28.091006    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:03:28.091725    8716 node_ready.go:53] node "ha-335100-m02" has status "Ready":"False"
	I0210 11:03:28.584923    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:03:28.585165    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:28.585165    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:28.585165    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:28.590222    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:03:29.084264    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:03:29.084264    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:29.084264    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:29.084264    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:29.090629    8716 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0210 11:03:29.585143    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:03:29.585143    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:29.585143    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:29.585143    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:29.590871    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:03:30.084150    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:03:30.084150    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:30.084150    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:30.084150    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:30.089960    8716 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 11:03:30.584472    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:03:30.584472    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:30.584472    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:30.584472    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:30.590820    8716 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0210 11:03:30.591569    8716 node_ready.go:53] node "ha-335100-m02" has status "Ready":"False"
	I0210 11:03:31.084720    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:03:31.084720    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:31.084720    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:31.084720    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:31.088998    8716 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 11:03:31.584830    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:03:31.584830    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:31.584830    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:31.584830    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:31.591035    8716 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0210 11:03:32.084956    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:03:32.084956    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:32.084956    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:32.084956    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:32.091516    8716 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0210 11:03:32.584984    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:03:32.585060    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:32.585132    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:32.585132    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:32.592617    8716 round_trippers.go:581] Response Status: 200 OK in 7 milliseconds
	I0210 11:03:32.592617    8716 node_ready.go:53] node "ha-335100-m02" has status "Ready":"False"
	I0210 11:03:33.084052    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:03:33.084052    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:33.084052    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:33.084052    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:33.089228    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:03:33.584438    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:03:33.584511    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:33.584511    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:33.584511    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:33.591612    8716 round_trippers.go:581] Response Status: 200 OK in 7 milliseconds
	I0210 11:03:34.084640    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:03:34.084640    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:34.084640    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:34.084640    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:34.090474    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:03:34.584403    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:03:34.584403    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:34.584403    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:34.584403    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:34.589402    8716 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 11:03:35.084798    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:03:35.084895    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:35.084895    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:35.084895    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:35.090396    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:03:35.090869    8716 node_ready.go:53] node "ha-335100-m02" has status "Ready":"False"
	I0210 11:03:35.584406    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:03:35.584406    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:35.584596    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:35.584596    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:35.589677    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:03:36.085217    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:03:36.085217    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:36.085217    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:36.085217    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:36.090998    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:03:36.584221    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:03:36.584221    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:36.584221    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:36.584221    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:36.589519    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:03:37.085168    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:03:37.085168    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:37.085168    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:37.085168    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:37.090515    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:03:37.091113    8716 node_ready.go:53] node "ha-335100-m02" has status "Ready":"False"
	I0210 11:03:37.585003    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:03:37.585003    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:37.585003    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:37.585003    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:37.589995    8716 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 11:03:38.084726    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:03:38.084801    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:38.084801    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:38.084801    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:38.089075    8716 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 11:03:38.584461    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:03:38.584461    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:38.584461    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:38.584461    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:38.590574    8716 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0210 11:03:39.084801    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:03:39.084801    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:39.084801    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:39.084801    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:39.090249    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:03:39.090580    8716 node_ready.go:49] node "ha-335100-m02" has status "Ready":"True"
	I0210 11:03:39.090580    8716 node_ready.go:38] duration metric: took 24.0065915s for node "ha-335100-m02" to be "Ready" ...
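
The repeating GET /api/v1/nodes/ha-335100-m02 entries above are a roughly 500 ms readiness poll that ends once the NodeReady condition flips to True. A hedged client-go equivalent (waitNodeReady is a made-up helper name, not minikube's):

    // Poll GET /api/v1/nodes/<name> every 500ms until the NodeReady condition
    // is True, mirroring the loop above. Assumed imports: "context", "time",
    // corev1 "k8s.io/api/core/v1", metav1 "k8s.io/apimachinery/pkg/apis/meta/v1",
    // "k8s.io/apimachinery/pkg/util/wait", "k8s.io/client-go/kubernetes".
    func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
        return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
            func(ctx context.Context) (bool, error) {
                node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil // treat errors as "not ready yet" and keep polling
                }
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }
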
	I0210 11:03:39.090580    8716 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0210 11:03:39.091190    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods
	I0210 11:03:39.091190    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:39.091190    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:39.091190    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:39.096447    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:03:39.098700    8716 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-gc5gf" in "kube-system" namespace to be "Ready" ...
	I0210 11:03:39.098818    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-gc5gf
	I0210 11:03:39.098818    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:39.098875    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:39.098875    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:39.103088    8716 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 11:03:39.103088    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100
	I0210 11:03:39.103088    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:39.103088    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:39.103088    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:39.108159    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:03:39.108604    8716 pod_ready.go:93] pod "coredns-668d6bf9bc-gc5gf" in "kube-system" namespace has status "Ready":"True"
	I0210 11:03:39.108604    8716 pod_ready.go:82] duration metric: took 9.9038ms for pod "coredns-668d6bf9bc-gc5gf" in "kube-system" namespace to be "Ready" ...
	I0210 11:03:39.108676    8716 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-s44gp" in "kube-system" namespace to be "Ready" ...
	I0210 11:03:39.108749    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-s44gp
	I0210 11:03:39.108749    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:39.108749    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:39.108749    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:39.112585    8716 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 11:03:39.113891    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100
	I0210 11:03:39.113891    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:39.113891    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:39.113891    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:39.120011    8716 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0210 11:03:39.120967    8716 pod_ready.go:93] pod "coredns-668d6bf9bc-s44gp" in "kube-system" namespace has status "Ready":"True"
	I0210 11:03:39.120967    8716 pod_ready.go:82] duration metric: took 12.2913ms for pod "coredns-668d6bf9bc-s44gp" in "kube-system" namespace to be "Ready" ...
	I0210 11:03:39.120967    8716 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-335100" in "kube-system" namespace to be "Ready" ...
	I0210 11:03:39.120967    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods/etcd-ha-335100
	I0210 11:03:39.120967    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:39.120967    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:39.120967    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:39.125798    8716 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 11:03:39.126485    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100
	I0210 11:03:39.126485    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:39.126485    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:39.126485    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:39.130767    8716 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 11:03:39.130767    8716 pod_ready.go:93] pod "etcd-ha-335100" in "kube-system" namespace has status "Ready":"True"
	I0210 11:03:39.130767    8716 pod_ready.go:82] duration metric: took 9.7989ms for pod "etcd-ha-335100" in "kube-system" namespace to be "Ready" ...
	I0210 11:03:39.130767    8716 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-335100-m02" in "kube-system" namespace to be "Ready" ...
	I0210 11:03:39.130767    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods/etcd-ha-335100-m02
	I0210 11:03:39.130767    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:39.130767    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:39.130767    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:39.135057    8716 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 11:03:39.135372    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:03:39.135372    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:39.135433    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:39.135433    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:39.137983    8716 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0210 11:03:39.139338    8716 pod_ready.go:93] pod "etcd-ha-335100-m02" in "kube-system" namespace has status "Ready":"True"
	I0210 11:03:39.139396    8716 pod_ready.go:82] duration metric: took 8.6298ms for pod "etcd-ha-335100-m02" in "kube-system" namespace to be "Ready" ...
	I0210 11:03:39.139458    8716 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-335100" in "kube-system" namespace to be "Ready" ...
	I0210 11:03:39.285315    8716 request.go:661] Waited for 145.8562ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-335100
	I0210 11:03:39.285683    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-335100
	I0210 11:03:39.285683    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:39.285683    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:39.285683    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:39.290193    8716 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 11:03:39.484978    8716 request.go:661] Waited for 194.396ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.136.99:8443/api/v1/nodes/ha-335100
	I0210 11:03:39.485588    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100
	I0210 11:03:39.485588    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:39.485588    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:39.485588    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:39.491637    8716 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0210 11:03:39.491928    8716 pod_ready.go:93] pod "kube-apiserver-ha-335100" in "kube-system" namespace has status "Ready":"True"
	I0210 11:03:39.491928    8716 pod_ready.go:82] duration metric: took 352.4668ms for pod "kube-apiserver-ha-335100" in "kube-system" namespace to be "Ready" ...
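
The request.go:661 lines are client-go's client-side rate limiter deferring requests, not apiserver priority-and-fairness; with rest.Config QPS and Burst left at zero (see the kapi.go dump above), the client defaults of 5 requests/second with a burst of 10 apply. A sketch of loosening them before building the clientset (the numbers here are illustrative, not minikube's):

    // Loosen the client-side rate limiter that produces the
    // "Waited ... due to client-side throttling" log lines.
    cfg.QPS = 50    // default when 0: 5 requests/second
    cfg.Burst = 100 // default when 0: 10
    cs, err := kubernetes.NewForConfig(cfg)
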
	I0210 11:03:39.491928    8716 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-335100-m02" in "kube-system" namespace to be "Ready" ...
	I0210 11:03:39.685597    8716 request.go:661] Waited for 193.5123ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-335100-m02
	I0210 11:03:39.685928    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-335100-m02
	I0210 11:03:39.685928    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:39.685928    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:39.685928    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:39.690280    8716 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 11:03:39.885628    8716 request.go:661] Waited for 193.9281ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:03:39.885928    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:03:39.885928    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:39.885928    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:39.885928    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:39.890977    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:03:39.891582    8716 pod_ready.go:93] pod "kube-apiserver-ha-335100-m02" in "kube-system" namespace has status "Ready":"True"
	I0210 11:03:39.891680    8716 pod_ready.go:82] duration metric: took 399.7465ms for pod "kube-apiserver-ha-335100-m02" in "kube-system" namespace to be "Ready" ...
	I0210 11:03:39.891680    8716 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-335100" in "kube-system" namespace to be "Ready" ...
	I0210 11:03:40.085293    8716 request.go:661] Waited for 193.6114ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-335100
	I0210 11:03:40.085636    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-335100
	I0210 11:03:40.085636    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:40.085636    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:40.085636    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:40.090382    8716 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 11:03:40.285761    8716 request.go:661] Waited for 194.7964ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.136.99:8443/api/v1/nodes/ha-335100
	I0210 11:03:40.285761    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100
	I0210 11:03:40.285761    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:40.286077    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:40.286077    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:40.290385    8716 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 11:03:40.290955    8716 pod_ready.go:93] pod "kube-controller-manager-ha-335100" in "kube-system" namespace has status "Ready":"True"
	I0210 11:03:40.290955    8716 pod_ready.go:82] duration metric: took 399.2711ms for pod "kube-controller-manager-ha-335100" in "kube-system" namespace to be "Ready" ...
	I0210 11:03:40.291196    8716 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-335100-m02" in "kube-system" namespace to be "Ready" ...
	I0210 11:03:40.485208    8716 request.go:661] Waited for 193.9559ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-335100-m02
	I0210 11:03:40.485607    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-335100-m02
	I0210 11:03:40.485607    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:40.485607    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:40.485607    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:40.490181    8716 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 11:03:40.685664    8716 request.go:661] Waited for 194.9171ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:03:40.685664    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:03:40.685664    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:40.685664    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:40.685664    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:40.691524    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:03:40.692116    8716 pod_ready.go:93] pod "kube-controller-manager-ha-335100-m02" in "kube-system" namespace has status "Ready":"True"
	I0210 11:03:40.692116    8716 pod_ready.go:82] duration metric: took 400.9151ms for pod "kube-controller-manager-ha-335100-m02" in "kube-system" namespace to be "Ready" ...
	I0210 11:03:40.692116    8716 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-b5xnq" in "kube-system" namespace to be "Ready" ...
	I0210 11:03:40.886293    8716 request.go:661] Waited for 194.1751ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods/kube-proxy-b5xnq
	I0210 11:03:40.886549    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods/kube-proxy-b5xnq
	I0210 11:03:40.886549    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:40.886549    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:40.886549    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:40.891986    8716 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 11:03:41.085666    8716 request.go:661] Waited for 193.3132ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:03:41.085946    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:03:41.085946    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:41.085946    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:41.085946    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:41.092029    8716 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0210 11:03:41.092660    8716 pod_ready.go:93] pod "kube-proxy-b5xnq" in "kube-system" namespace has status "Ready":"True"
	I0210 11:03:41.092719    8716 pod_ready.go:82] duration metric: took 400.5989ms for pod "kube-proxy-b5xnq" in "kube-system" namespace to be "Ready" ...
	I0210 11:03:41.092719    8716 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-xzs7w" in "kube-system" namespace to be "Ready" ...
	I0210 11:03:41.285733    8716 request.go:661] Waited for 192.8939ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xzs7w
	I0210 11:03:41.286074    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xzs7w
	I0210 11:03:41.286074    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:41.286074    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:41.286074    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:41.291770    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:03:41.484880    8716 request.go:661] Waited for 191.9358ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.136.99:8443/api/v1/nodes/ha-335100
	I0210 11:03:41.484880    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100
	I0210 11:03:41.484880    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:41.484880    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:41.484880    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:41.490213    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:03:41.490658    8716 pod_ready.go:93] pod "kube-proxy-xzs7w" in "kube-system" namespace has status "Ready":"True"
	I0210 11:03:41.490735    8716 pod_ready.go:82] duration metric: took 398.0109ms for pod "kube-proxy-xzs7w" in "kube-system" namespace to be "Ready" ...
	I0210 11:03:41.490735    8716 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-335100" in "kube-system" namespace to be "Ready" ...
	I0210 11:03:41.685605    8716 request.go:661] Waited for 194.7552ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-335100
	I0210 11:03:41.685605    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-335100
	I0210 11:03:41.685605    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:41.685605    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:41.685605    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:41.691383    8716 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 11:03:41.885005    8716 request.go:661] Waited for 193.1647ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.136.99:8443/api/v1/nodes/ha-335100
	I0210 11:03:41.885501    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100
	I0210 11:03:41.885584    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:41.885601    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:41.885601    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:41.889779    8716 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 11:03:41.889779    8716 pod_ready.go:93] pod "kube-scheduler-ha-335100" in "kube-system" namespace has status "Ready":"True"
	I0210 11:03:41.889779    8716 pod_ready.go:82] duration metric: took 399.0394ms for pod "kube-scheduler-ha-335100" in "kube-system" namespace to be "Ready" ...
	I0210 11:03:41.889779    8716 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-335100-m02" in "kube-system" namespace to be "Ready" ...
	I0210 11:03:42.085552    8716 request.go:661] Waited for 195.7716ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-335100-m02
	I0210 11:03:42.085552    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-335100-m02
	I0210 11:03:42.085552    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:42.085552    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:42.086225    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:42.093061    8716 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0210 11:03:42.285457    8716 request.go:661] Waited for 191.3361ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:03:42.285457    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:03:42.285872    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:42.285872    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:42.285872    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:42.291332    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:03:42.291414    8716 pod_ready.go:93] pod "kube-scheduler-ha-335100-m02" in "kube-system" namespace has status "Ready":"True"
	I0210 11:03:42.291414    8716 pod_ready.go:82] duration metric: took 401.6307ms for pod "kube-scheduler-ha-335100-m02" in "kube-system" namespace to be "Ready" ...
	I0210 11:03:42.291414    8716 pod_ready.go:39] duration metric: took 3.2007969s of extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
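
Each pod wait above pairs a GET on the pod with a GET on its node; the pod half reduces to checking the PodReady condition. A minimal sketch (isPodReady is a made-up helper; imports as in the node-readiness sketch):

    // Report whether a pod's PodReady condition is True, as the per-pod
    // "Ready" checks in the log do.
    func isPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) (bool, error) {
        pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }
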
	I0210 11:03:42.291414    8716 api_server.go:52] waiting for apiserver process to appear ...
	I0210 11:03:42.301418    8716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:03:42.331953    8716 api_server.go:72] duration metric: took 27.6565122s to wait for apiserver process to appear ...
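
The process check above greps for the apiserver inside the VM. A rough local stand-in using os/exec (hypothetical; minikube actually runs the command through its ssh_runner):

    // pgrep exits 0 when a process matches the pattern, so err == nil
    // means an apiserver process exists. Assumed import: "os/exec".
    err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
    apiserverRunning := err == nil
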
	I0210 11:03:42.332032    8716 api_server.go:88] waiting for apiserver healthz status ...
	I0210 11:03:42.332072    8716 api_server.go:253] Checking apiserver healthz at https://172.29.136.99:8443/healthz ...
	I0210 11:03:42.347625    8716 api_server.go:279] https://172.29.136.99:8443/healthz returned 200:
	ok
	I0210 11:03:42.347848    8716 round_trippers.go:470] GET https://172.29.136.99:8443/version
	I0210 11:03:42.347848    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:42.347848    8716 round_trippers.go:480]     Accept: application/json, */*
	I0210 11:03:42.347848    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:42.349217    8716 round_trippers.go:581] Response Status: 200 OK in 1 milliseconds
	I0210 11:03:42.349217    8716 api_server.go:141] control plane version: v1.32.1
	I0210 11:03:42.349217    8716 api_server.go:131] duration metric: took 17.1849ms to wait for apiserver health ...
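
The healthz and /version probes above can be reproduced with client-go's discovery REST client; a sketch, assuming cs and ctx from the earlier sketches:

    // GET /healthz, then /version, as the api_server.go checks do.
    body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").Do(ctx).Raw()
    if err == nil && string(body) == "ok" {
        if info, err := cs.Discovery().ServerVersion(); err == nil {
            fmt.Println("control plane version:", info.GitVersion) // v1.32.1 in this run
        }
    }
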
	I0210 11:03:42.349217    8716 system_pods.go:43] waiting for kube-system pods to appear ...
	I0210 11:03:42.485619    8716 request.go:661] Waited for 136.4009ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods
	I0210 11:03:42.485619    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods
	I0210 11:03:42.485619    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:42.485619    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:42.485619    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:42.492781    8716 round_trippers.go:581] Response Status: 200 OK in 7 milliseconds
	I0210 11:03:42.494306    8716 system_pods.go:59] 17 kube-system pods found
	I0210 11:03:42.494399    8716 system_pods.go:61] "coredns-668d6bf9bc-gc5gf" [1a78ccff-3a66-49ce-9c52-79a7799a56e2] Running
	I0210 11:03:42.494399    8716 system_pods.go:61] "coredns-668d6bf9bc-s44gp" [1a2504f9-357f-418a-968c-274b6ab1bbe3] Running
	I0210 11:03:42.494399    8716 system_pods.go:61] "etcd-ha-335100" [da325f74-680c-43cc-9b62-9c660938b7f5] Running
	I0210 11:03:42.494399    8716 system_pods.go:61] "etcd-ha-335100-m02" [96814d40-8c96-4117-b028-2417473031c2] Running
	I0210 11:03:42.494399    8716 system_pods.go:61] "kindnet-hpmm5" [74943229-cd2a-4365-9da3-f2be8a2e2663] Running
	I0210 11:03:42.494399    8716 system_pods.go:61] "kindnet-slpqn" [6d170b01-e6f2-451d-8427-2b4ef3923739] Running
	I0210 11:03:42.494399    8716 system_pods.go:61] "kube-apiserver-ha-335100" [f37d02ff-6c3f-47d5-b98e-73ec57c5b54d] Running
	I0210 11:03:42.494399    8716 system_pods.go:61] "kube-apiserver-ha-335100-m02" [eac020f2-732f-4fba-bde6-266a2729c5b9] Running
	I0210 11:03:42.494399    8716 system_pods.go:61] "kube-controller-manager-ha-335100" [3e128f0d-49f1-42ca-b8da-5ff50b9b35b6] Running
	I0210 11:03:42.494463    8716 system_pods.go:61] "kube-controller-manager-ha-335100-m02" [b5bf933c-7312-433b-aeb7-b299753ff1a6] Running
	I0210 11:03:42.494463    8716 system_pods.go:61] "kube-proxy-b5xnq" [27ffc58d-1979-40e7-977e-1c5dc46f735a] Running
	I0210 11:03:42.494463    8716 system_pods.go:61] "kube-proxy-xzs7w" [5c66a9a8-138b-4b0f-a112-05946040c18b] Running
	I0210 11:03:42.494463    8716 system_pods.go:61] "kube-scheduler-ha-335100" [8daa77b3-ed4d-4905-b83f-d7db23bef734] Running
	I0210 11:03:42.494463    8716 system_pods.go:61] "kube-scheduler-ha-335100-m02" [cb414823-0dd1-4f0f-8b0a-9314acb1324d] Running
	I0210 11:03:42.494463    8716 system_pods.go:61] "kube-vip-ha-335100" [89e60425-f699-41e3-8034-d871650ab57c] Running
	I0210 11:03:42.494463    8716 system_pods.go:61] "kube-vip-ha-335100-m02" [e21ce143-8caa-4155-ade4-b9b428e82d2b] Running
	I0210 11:03:42.494463    8716 system_pods.go:61] "storage-provisioner" [91b7ca8c-ef3a-4328-83f9-63411c7672cb] Running
	I0210 11:03:42.494463    8716 system_pods.go:74] duration metric: took 145.2447ms to wait for pod list to return data ...
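
The 17-pod inventory above is a plain pod list in the kube-system namespace; a sketch of the same call, assuming cs and ctx from the earlier sketches:

    // List kube-system pods and print name, UID, and phase, matching the
    // system_pods.go output format above.
    pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
    if err != nil {
        log.Fatal(err)
    }
    fmt.Printf("%d kube-system pods found\n", len(pods.Items))
    for _, p := range pods.Items {
        fmt.Printf("%q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
    }
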
	I0210 11:03:42.494463    8716 default_sa.go:34] waiting for default service account to be created ...
	I0210 11:03:42.685268    8716 request.go:661] Waited for 190.6375ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.136.99:8443/api/v1/namespaces/default/serviceaccounts
	I0210 11:03:42.685588    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/namespaces/default/serviceaccounts
	I0210 11:03:42.685588    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:42.685588    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:42.685746    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:42.693148    8716 round_trippers.go:581] Response Status: 200 OK in 7 milliseconds
	I0210 11:03:42.693310    8716 default_sa.go:45] found service account: "default"
	I0210 11:03:42.693310    8716 default_sa.go:55] duration metric: took 198.7478ms for default service account to be created ...
	I0210 11:03:42.693310    8716 system_pods.go:116] waiting for k8s-apps to be running ...
	I0210 11:03:42.885716    8716 request.go:661] Waited for 192.3222ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods
	I0210 11:03:42.885716    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods
	I0210 11:03:42.885716    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:42.885716    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:42.885716    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:42.891100    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:03:42.893642    8716 system_pods.go:86] 17 kube-system pods found
	I0210 11:03:42.893723    8716 system_pods.go:89] "coredns-668d6bf9bc-gc5gf" [1a78ccff-3a66-49ce-9c52-79a7799a56e2] Running
	I0210 11:03:42.893723    8716 system_pods.go:89] "coredns-668d6bf9bc-s44gp" [1a2504f9-357f-418a-968c-274b6ab1bbe3] Running
	I0210 11:03:42.893723    8716 system_pods.go:89] "etcd-ha-335100" [da325f74-680c-43cc-9b62-9c660938b7f5] Running
	I0210 11:03:42.893723    8716 system_pods.go:89] "etcd-ha-335100-m02" [96814d40-8c96-4117-b028-2417473031c2] Running
	I0210 11:03:42.893723    8716 system_pods.go:89] "kindnet-hpmm5" [74943229-cd2a-4365-9da3-f2be8a2e2663] Running
	I0210 11:03:42.893723    8716 system_pods.go:89] "kindnet-slpqn" [6d170b01-e6f2-451d-8427-2b4ef3923739] Running
	I0210 11:03:42.893723    8716 system_pods.go:89] "kube-apiserver-ha-335100" [f37d02ff-6c3f-47d5-b98e-73ec57c5b54d] Running
	I0210 11:03:42.893723    8716 system_pods.go:89] "kube-apiserver-ha-335100-m02" [eac020f2-732f-4fba-bde6-266a2729c5b9] Running
	I0210 11:03:42.893723    8716 system_pods.go:89] "kube-controller-manager-ha-335100" [3e128f0d-49f1-42ca-b8da-5ff50b9b35b6] Running
	I0210 11:03:42.893831    8716 system_pods.go:89] "kube-controller-manager-ha-335100-m02" [b5bf933c-7312-433b-aeb7-b299753ff1a6] Running
	I0210 11:03:42.893831    8716 system_pods.go:89] "kube-proxy-b5xnq" [27ffc58d-1979-40e7-977e-1c5dc46f735a] Running
	I0210 11:03:42.893831    8716 system_pods.go:89] "kube-proxy-xzs7w" [5c66a9a8-138b-4b0f-a112-05946040c18b] Running
	I0210 11:03:42.893831    8716 system_pods.go:89] "kube-scheduler-ha-335100" [8daa77b3-ed4d-4905-b83f-d7db23bef734] Running
	I0210 11:03:42.893831    8716 system_pods.go:89] "kube-scheduler-ha-335100-m02" [cb414823-0dd1-4f0f-8b0a-9314acb1324d] Running
	I0210 11:03:42.893831    8716 system_pods.go:89] "kube-vip-ha-335100" [89e60425-f699-41e3-8034-d871650ab57c] Running
	I0210 11:03:42.893831    8716 system_pods.go:89] "kube-vip-ha-335100-m02" [e21ce143-8caa-4155-ade4-b9b428e82d2b] Running
	I0210 11:03:42.893831    8716 system_pods.go:89] "storage-provisioner" [91b7ca8c-ef3a-4328-83f9-63411c7672cb] Running
	I0210 11:03:42.893831    8716 system_pods.go:126] duration metric: took 200.5191ms to wait for k8s-apps to be running ...
	I0210 11:03:42.893831    8716 system_svc.go:44] waiting for kubelet service to be running ...
	I0210 11:03:42.904073    8716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0210 11:03:42.928445    8716 system_svc.go:56] duration metric: took 34.6131ms WaitForService to wait for kubelet
	I0210 11:03:42.929240    8716 kubeadm.go:582] duration metric: took 28.2538242s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0210 11:03:42.929240    8716 node_conditions.go:102] verifying NodePressure condition ...
	I0210 11:03:43.085839    8716 request.go:661] Waited for 156.464ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.136.99:8443/api/v1/nodes
	I0210 11:03:43.085839    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes
	I0210 11:03:43.085839    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:43.085839    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:43.085839    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:43.092879    8716 round_trippers.go:581] Response Status: 200 OK in 7 milliseconds
	I0210 11:03:43.093161    8716 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0210 11:03:43.093161    8716 node_conditions.go:123] node cpu capacity is 2
	I0210 11:03:43.093161    8716 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0210 11:03:43.093161    8716 node_conditions.go:123] node cpu capacity is 2
	I0210 11:03:43.093161    8716 node_conditions.go:105] duration metric: took 163.9186ms to run NodePressure ...
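
Note: the NodePressure check reads each node's capacity from the /api/v1/nodes listing requested above; the two values printed per node (ephemeral storage and CPU count) come straight from node status. A sketch of that read, assuming a kubeconfig-based clientset (the kubeconfig path is illustrative):

    package main

    import (
        "context"
        "fmt"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "kubeconfig") // illustrative path
        if err != nil {
            panic(err)
        }
        clientset := kubernetes.NewForConfigOrDie(cfg)

        nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            // These are the two capacities the log prints per node.
            cpu := n.Status.Capacity[v1.ResourceCPU]
            eph := n.Status.Capacity[v1.ResourceEphemeralStorage]
            fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
        }
    }
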
	I0210 11:03:43.093161    8716 start.go:241] waiting for startup goroutines ...
	I0210 11:03:43.093161    8716 start.go:255] writing updated cluster config ...
	I0210 11:03:43.097753    8716 out.go:201] 
	I0210 11:03:43.119470    8716 config.go:182] Loaded profile config "ha-335100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0210 11:03:43.119701    8716 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\config.json ...
	I0210 11:03:43.127592    8716 out.go:177] * Starting "ha-335100-m03" control-plane node in "ha-335100" cluster
	I0210 11:03:43.130136    8716 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime docker
	I0210 11:03:43.130136    8716 cache.go:56] Caching tarball of preloaded images
	I0210 11:03:43.130586    8716 preload.go:172] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0210 11:03:43.130779    8716 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on docker
	I0210 11:03:43.130779    8716 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\config.json ...
	I0210 11:03:43.138939    8716 start.go:360] acquireMachinesLock for ha-335100-m03: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0210 11:03:43.140035    8716 start.go:364] duration metric: took 90.8µs to acquireMachinesLock for "ha-335100-m03"
	I0210 11:03:43.140035    8716 start.go:93] Provisioning new machine with config: &{Name:ha-335100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:ha-335100 Namespace:def
ault APIServerHAVIP:172.29.143.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.29.136.99 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.29.139.212 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false is
tio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations
:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0210 11:03:43.140035    8716 start.go:125] createHost starting for "m03" (driver="hyperv")
	I0210 11:03:43.143184    8716 out.go:235] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0210 11:03:43.143184    8716 start.go:159] libmachine.API.Create for "ha-335100" (driver="hyperv")
	I0210 11:03:43.144154    8716 client.go:168] LocalClient.Create starting
	I0210 11:03:43.144323    8716 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem
	I0210 11:03:43.144323    8716 main.go:141] libmachine: Decoding PEM data...
	I0210 11:03:43.144323    8716 main.go:141] libmachine: Parsing certificate...
	I0210 11:03:43.144849    8716 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem
	I0210 11:03:43.145035    8716 main.go:141] libmachine: Decoding PEM data...
	I0210 11:03:43.145035    8716 main.go:141] libmachine: Parsing certificate...
	I0210 11:03:43.145128    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0210 11:03:44.933418    8716 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0210 11:03:44.933418    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:03:44.933418    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0210 11:03:46.568133    8716 main.go:141] libmachine: [stdout =====>] : False
	
	I0210 11:03:46.568133    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:03:46.569180    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0210 11:03:47.936380    8716 main.go:141] libmachine: [stdout =====>] : True
	
	I0210 11:03:47.936380    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:03:47.936454    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0210 11:03:51.319811    8716 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0210 11:03:51.319916    8716 main.go:141] libmachine: [stderr =====>] : 
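
Note: every `[executing ==>]` line above is a fresh `powershell.exe -NoProfile -NonInteractive` invocation whose stdout/stderr are echoed back, which is why even a simple switch lookup costs a couple of seconds. A sketch of running the Get-VMSwitch query and decoding its JSON (the struct fields match the output above; the helper itself is illustrative, not minikube's code):

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // vmSwitch mirrors the fields selected by the Get-VMSwitch pipeline above.
    type vmSwitch struct {
        Id         string
        Name       string
        SwitchType int
    }

    func main() {
        ps := `[Console]::OutputEncoding = [Text.Encoding]::UTF8; ` +
            `ConvertTo-Json @(Hyper-V\Get-VMSwitch | Select Id, Name, SwitchType)`
        out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", ps).Output()
        if err != nil {
            panic(err)
        }
        var switches []vmSwitch
        if err := json.Unmarshal(out, &switches); err != nil {
            panic(err)
        }
        for _, s := range switches {
            fmt.Printf("switch %q (%s) type=%d\n", s.Name, s.Id, s.SwitchType)
        }
    }
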
	I0210 11:03:51.322246    8716 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube5/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0210 11:03:51.674809    8716 main.go:141] libmachine: Creating SSH key...
	I0210 11:03:51.901304    8716 main.go:141] libmachine: Creating VM...
	I0210 11:03:51.901304    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0210 11:03:54.560734    8716 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0210 11:03:54.561327    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:03:54.561327    8716 main.go:141] libmachine: Using switch "Default Switch"
	I0210 11:03:54.561415    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0210 11:03:56.194036    8716 main.go:141] libmachine: [stdout =====>] : True
	
	I0210 11:03:56.194694    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:03:56.194694    8716 main.go:141] libmachine: Creating VHD
	I0210 11:03:56.194804    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-335100-m03\fixed.vhd' -SizeBytes 10MB -Fixed
	I0210 11:03:59.795854    8716 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube5
	Path                    : C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-335100-m03\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 07AB0531-FB35-431D-AEFA-A089C6C41C27
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0210 11:03:59.795890    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:03:59.795890    8716 main.go:141] libmachine: Writing magic tar header
	I0210 11:03:59.795960    8716 main.go:141] libmachine: Writing SSH key tar header
	I0210 11:03:59.810413    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-335100-m03\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-335100-m03\disk.vhd' -VHDType Dynamic -DeleteSource
	I0210 11:04:02.827683    8716 main.go:141] libmachine: [stdout =====>] : 
	I0210 11:04:02.827863    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:04:02.828093    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-335100-m03\disk.vhd' -SizeBytes 20000MB
	I0210 11:04:05.264459    8716 main.go:141] libmachine: [stdout =====>] : 
	I0210 11:04:05.265086    8716 main.go:141] libmachine: [stderr =====>] : 
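
Note: the disk dance above is deliberate. A small *fixed* VHD is created first because its payload sits at a predictable offset; the SSH key is then written into it as a raw tar stream ("Writing magic tar header" / "Writing SSH key tar header"), and only afterwards is the image converted to a dynamic VHD and resized to the requested 20000MB, so the guest can find and extract the key from the start of the disk on first boot. A simplified sketch of the tar-writing step (paths and the exact archive layout are illustrative):

    package main

    import (
        "archive/tar"
        "os"
    )

    // writeKeyTar writes a tar stream containing the SSH key at the start of
    // the raw (fixed) disk image so the guest can extract it on boot. The
    // real layout minikube writes is more involved; this shows the idea.
    func writeKeyTar(diskPath, keyPath string) error {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return err
        }
        f, err := os.OpenFile(diskPath, os.O_WRONLY, 0)
        if err != nil {
            return err
        }
        defer f.Close()

        tw := tar.NewWriter(f) // writes at offset 0 of the fixed VHD's data area
        if err := tw.WriteHeader(&tar.Header{
            Name: ".ssh/authorized_keys",
            Mode: 0o600,
            Size: int64(len(key)),
        }); err != nil {
            return err
        }
        if _, err := tw.Write(key); err != nil {
            return err
        }
        return tw.Close()
    }

    func main() {
        if err := writeKeyTar("fixed.vhd", "id_rsa.pub"); err != nil {
            panic(err)
        }
    }
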
	I0210 11:04:05.265086    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-335100-m03 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-335100-m03' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0210 11:04:08.622338    8716 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-335100-m03 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0210 11:04:08.622597    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:04:08.622674    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-335100-m03 -DynamicMemoryEnabled $false
	I0210 11:04:10.686115    8716 main.go:141] libmachine: [stdout =====>] : 
	I0210 11:04:10.686189    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:04:10.686266    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-335100-m03 -Count 2
	I0210 11:04:12.679684    8716 main.go:141] libmachine: [stdout =====>] : 
	I0210 11:04:12.679684    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:04:12.679772    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-335100-m03 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-335100-m03\boot2docker.iso'
	I0210 11:04:15.038737    8716 main.go:141] libmachine: [stdout =====>] : 
	I0210 11:04:15.038794    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:04:15.038794    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-335100-m03 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-335100-m03\disk.vhd'
	I0210 11:04:17.434704    8716 main.go:141] libmachine: [stdout =====>] : 
	I0210 11:04:17.434704    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:04:17.434704    8716 main.go:141] libmachine: Starting VM...
	I0210 11:04:17.435597    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-335100-m03
	I0210 11:04:20.310680    8716 main.go:141] libmachine: [stdout =====>] : 
	I0210 11:04:20.310721    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:04:20.310761    8716 main.go:141] libmachine: Waiting for host to start...
	I0210 11:04:20.310761    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100-m03 ).state
	I0210 11:04:22.396988    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:04:22.397156    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:04:22.397156    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100-m03 ).networkadapters[0]).ipaddresses[0]
	I0210 11:04:24.722624    8716 main.go:141] libmachine: [stdout =====>] : 
	I0210 11:04:24.722624    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:04:25.723405    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100-m03 ).state
	I0210 11:04:27.715206    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:04:27.715452    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:04:27.715452    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100-m03 ).networkadapters[0]).ipaddresses[0]
	I0210 11:04:30.022842    8716 main.go:141] libmachine: [stdout =====>] : 
	I0210 11:04:30.022842    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:04:31.024639    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100-m03 ).state
	I0210 11:04:33.005824    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:04:33.006707    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:04:33.006776    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100-m03 ).networkadapters[0]).ipaddresses[0]
	I0210 11:04:35.319099    8716 main.go:141] libmachine: [stdout =====>] : 
	I0210 11:04:35.319099    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:04:36.320043    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100-m03 ).state
	I0210 11:04:38.318308    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:04:38.319180    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:04:38.319180    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100-m03 ).networkadapters[0]).ipaddresses[0]
	I0210 11:04:40.588367    8716 main.go:141] libmachine: [stdout =====>] : 
	I0210 11:04:40.588367    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:04:41.588928    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100-m03 ).state
	I0210 11:04:43.590375    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:04:43.590375    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:04:43.590375    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100-m03 ).networkadapters[0]).ipaddresses[0]
	I0210 11:04:46.015642    8716 main.go:141] libmachine: [stdout =====>] : 172.29.143.243
	
	I0210 11:04:46.015642    8716 main.go:141] libmachine: [stderr =====>] : 
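
Note: "Waiting for host to start..." is a poll loop: query the VM state, then the first address of the first network adapter, and sleep about a second between attempts until DHCP hands out a lease (the empty stdout lines above mean no address yet). A sketch of the loop shape (the runPS helper and timeout are illustrative):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // runPS runs a single PowerShell command and returns its stdout
    // (illustrative helper, not minikube's own).
    func runPS(cmd string) (string, error) {
        out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", cmd).Output()
        return string(out), err
    }

    // waitForIP polls the first adapter's first address until DHCP has handed
    // out a lease; empty stdout (as in the attempts above) means none yet.
    func waitForIP(vm string, timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            ip, err := runPS(fmt.Sprintf(`(( Hyper-V\Get-VM %s ).networkadapters[0]).ipaddresses[0]`, vm))
            if err == nil && strings.TrimSpace(ip) != "" {
                return strings.TrimSpace(ip), nil
            }
            time.Sleep(time.Second)
        }
        return "", fmt.Errorf("timed out waiting for %s to get an IP", vm)
    }

    func main() {
        ip, err := waitForIP("ha-335100-m03", 5*time.Minute)
        if err != nil {
            panic(err)
        }
        fmt.Println("VM reachable at", ip)
    }
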
	I0210 11:04:46.015642    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100-m03 ).state
	I0210 11:04:47.981992    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:04:47.981992    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:04:47.981992    8716 machine.go:93] provisionDockerMachine start ...
	I0210 11:04:47.981992    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100-m03 ).state
	I0210 11:04:49.947857    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:04:49.947857    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:04:49.947857    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100-m03 ).networkadapters[0]).ipaddresses[0]
	I0210 11:04:52.279064    8716 main.go:141] libmachine: [stdout =====>] : 172.29.143.243
	
	I0210 11:04:52.279064    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:04:52.282781    8716 main.go:141] libmachine: Using SSH client type: native
	I0210 11:04:52.299123    8716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xde6ba0] 0xde96e0 <nil>  [] 0s} 172.29.143.243 22 <nil> <nil>}
	I0210 11:04:52.299123    8716 main.go:141] libmachine: About to run SSH command:
	hostname
	I0210 11:04:52.431056    8716 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0210 11:04:52.431056    8716 buildroot.go:166] provisioning hostname "ha-335100-m03"
	I0210 11:04:52.431139    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100-m03 ).state
	I0210 11:04:54.438032    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:04:54.438980    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:04:54.439058    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100-m03 ).networkadapters[0]).ipaddresses[0]
	I0210 11:04:56.766560    8716 main.go:141] libmachine: [stdout =====>] : 172.29.143.243
	
	I0210 11:04:56.766560    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:04:56.770620    8716 main.go:141] libmachine: Using SSH client type: native
	I0210 11:04:56.770698    8716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xde6ba0] 0xde96e0 <nil>  [] 0s} 172.29.143.243 22 <nil> <nil>}
	I0210 11:04:56.770698    8716 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-335100-m03 && echo "ha-335100-m03" | sudo tee /etc/hostname
	I0210 11:04:56.927910    8716 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-335100-m03
	
	I0210 11:04:56.928037    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100-m03 ).state
	I0210 11:04:58.864957    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:04:58.865200    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:04:58.865200    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100-m03 ).networkadapters[0]).ipaddresses[0]
	I0210 11:05:01.203483    8716 main.go:141] libmachine: [stdout =====>] : 172.29.143.243
	
	I0210 11:05:01.203483    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:05:01.208201    8716 main.go:141] libmachine: Using SSH client type: native
	I0210 11:05:01.208867    8716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xde6ba0] 0xde96e0 <nil>  [] 0s} 172.29.143.243 22 <nil> <nil>}
	I0210 11:05:01.208867    8716 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-335100-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-335100-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-335100-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0210 11:05:01.359193    8716 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0210 11:05:01.359193    8716 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0210 11:05:01.359193    8716 buildroot.go:174] setting up certificates
	I0210 11:05:01.359193    8716 provision.go:84] configureAuth start
	I0210 11:05:01.359193    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100-m03 ).state
	I0210 11:05:03.299782    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:05:03.300847    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:05:03.300932    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100-m03 ).networkadapters[0]).ipaddresses[0]
	I0210 11:05:05.640929    8716 main.go:141] libmachine: [stdout =====>] : 172.29.143.243
	
	I0210 11:05:05.640929    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:05:05.641348    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100-m03 ).state
	I0210 11:05:07.575558    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:05:07.575658    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:05:07.575658    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100-m03 ).networkadapters[0]).ipaddresses[0]
	I0210 11:05:09.901808    8716 main.go:141] libmachine: [stdout =====>] : 172.29.143.243
	
	I0210 11:05:09.901808    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:05:09.902449    8716 provision.go:143] copyHostCerts
	I0210 11:05:09.902449    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0210 11:05:09.902449    8716 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0210 11:05:09.902449    8716 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0210 11:05:09.903111    8716 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0210 11:05:09.903709    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0210 11:05:09.903709    8716 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0210 11:05:09.903709    8716 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0210 11:05:09.904359    8716 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0210 11:05:09.904963    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0210 11:05:09.904963    8716 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0210 11:05:09.904963    8716 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0210 11:05:09.905545    8716 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1675 bytes)
	I0210 11:05:09.906611    8716 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-335100-m03 san=[127.0.0.1 172.29.143.243 ha-335100-m03 localhost minikube]
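
Note: the server certificate is minted per node and signed by the shared minikube CA, with the SAN list shown above (loopback, the new node's IP, its hostname, and the generic names). A condensed crypto/x509 sketch of that signing step; a throwaway CA is generated so the example is self-contained, and key sizes and validity are illustrative:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // In practice the CA is loaded from ca.pem / ca-key.pem; here we
        // generate one so the sketch runs on its own.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Server cert carrying the SANs from the log line above.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-335100-m03"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
            DNSNames:     []string{"ha-335100-m03", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.29.143.243")},
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
        }
        der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
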
	I0210 11:05:10.055618    8716 provision.go:177] copyRemoteCerts
	I0210 11:05:10.063620    8716 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0210 11:05:10.063620    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100-m03 ).state
	I0210 11:05:11.994877    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:05:11.994877    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:05:11.994877    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100-m03 ).networkadapters[0]).ipaddresses[0]
	I0210 11:05:14.330807    8716 main.go:141] libmachine: [stdout =====>] : 172.29.143.243
	
	I0210 11:05:14.331010    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:05:14.331339    8716 sshutil.go:53] new ssh client: &{IP:172.29.143.243 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-335100-m03\id_rsa Username:docker}
	I0210 11:05:14.442012    8716 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.3783417s)
	I0210 11:05:14.442012    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0210 11:05:14.442012    8716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0210 11:05:14.490385    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0210 11:05:14.490916    8716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0210 11:05:14.536774    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0210 11:05:14.537491    8716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0210 11:05:14.582310    8716 provision.go:87] duration metric: took 13.2229641s to configureAuth
	I0210 11:05:14.582407    8716 buildroot.go:189] setting minikube options for container-runtime
	I0210 11:05:14.582665    8716 config.go:182] Loaded profile config "ha-335100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0210 11:05:14.582665    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100-m03 ).state
	I0210 11:05:16.545154    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:05:16.545652    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:05:16.545712    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100-m03 ).networkadapters[0]).ipaddresses[0]
	I0210 11:05:18.888903    8716 main.go:141] libmachine: [stdout =====>] : 172.29.143.243
	
	I0210 11:05:18.889185    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:05:18.892663    8716 main.go:141] libmachine: Using SSH client type: native
	I0210 11:05:18.893236    8716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xde6ba0] 0xde96e0 <nil>  [] 0s} 172.29.143.243 22 <nil> <nil>}
	I0210 11:05:18.893236    8716 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0210 11:05:19.027685    8716 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0210 11:05:19.027685    8716 buildroot.go:70] root file system type: tmpfs
	I0210 11:05:19.027856    8716 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0210 11:05:19.027856    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100-m03 ).state
	I0210 11:05:20.993476    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:05:20.993476    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:05:20.994359    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100-m03 ).networkadapters[0]).ipaddresses[0]
	I0210 11:05:23.317525    8716 main.go:141] libmachine: [stdout =====>] : 172.29.143.243
	
	I0210 11:05:23.317525    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:05:23.321542    8716 main.go:141] libmachine: Using SSH client type: native
	I0210 11:05:23.321847    8716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xde6ba0] 0xde96e0 <nil>  [] 0s} 172.29.143.243 22 <nil> <nil>}
	I0210 11:05:23.321847    8716 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.29.136.99"
	Environment="NO_PROXY=172.29.136.99,172.29.139.212"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0210 11:05:23.478779    8716 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.29.136.99
	Environment=NO_PROXY=172.29.136.99,172.29.139.212
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0210 11:05:23.478802    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100-m03 ).state
	I0210 11:05:25.454405    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:05:25.454405    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:05:25.455342    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100-m03 ).networkadapters[0]).ipaddresses[0]
	I0210 11:05:27.813915    8716 main.go:141] libmachine: [stdout =====>] : 172.29.143.243
	
	I0210 11:05:27.813992    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:05:27.818580    8716 main.go:141] libmachine: Using SSH client type: native
	I0210 11:05:27.818746    8716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xde6ba0] 0xde96e0 <nil>  [] 0s} 172.29.143.243 22 <nil> <nil>}
	I0210 11:05:27.818746    8716 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0210 11:05:30.072003    8716 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0210 11:05:30.072097    8716 machine.go:96] duration metric: took 42.0896173s to provisionDockerMachine
	I0210 11:05:30.072132    8716 client.go:171] duration metric: took 1m46.9267382s to LocalClient.Create
	I0210 11:05:30.072132    8716 start.go:167] duration metric: took 1m46.9277075s to libmachine.API.Create "ha-335100"
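
Note: two details in the docker.service unit written above are easy to misread. The empty `ExecStart=` is the standard systemd idiom for clearing an inherited ExecStart before setting a new one (the in-unit comments explain why), and the two consecutive `Environment=NO_PROXY=...` lines are not additive: for a given variable systemd keeps the last assignment, so only the second line (listing both existing control-plane IPs) takes effect. The unit is rendered from a template, roughly like the trimmed illustration below (not minikube's actual asset):

    package main

    import (
        "os"
        "text/template"
    )

    // A trimmed, illustrative rendering of the docker.service drop-in; one
    // cumulative NO_PROXY line would avoid the shadowed first assignment.
    var unit = template.Must(template.New("docker.service").Parse(`[Service]
    Type=notify
    {{range .NoProxyLines}}Environment="NO_PROXY={{.}}"
    {{end}}ExecStart=
    ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock
    `))

    func main() {
        // Each entry reproduces one of the two lines seen in the log; systemd
        // only honours the last assignment of NO_PROXY.
        data := struct{ NoProxyLines []string }{
            NoProxyLines: []string{
                "172.29.136.99",
                "172.29.136.99,172.29.139.212",
            },
        }
        if err := unit.Execute(os.Stdout, data); err != nil {
            panic(err)
        }
    }
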
	I0210 11:05:30.072132    8716 start.go:293] postStartSetup for "ha-335100-m03" (driver="hyperv")
	I0210 11:05:30.072184    8716 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0210 11:05:30.079843    8716 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0210 11:05:30.080796    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100-m03 ).state
	I0210 11:05:32.071072    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:05:32.071072    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:05:32.071072    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100-m03 ).networkadapters[0]).ipaddresses[0]
	I0210 11:05:34.489245    8716 main.go:141] libmachine: [stdout =====>] : 172.29.143.243
	
	I0210 11:05:34.489245    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:05:34.489674    8716 sshutil.go:53] new ssh client: &{IP:172.29.143.243 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-335100-m03\id_rsa Username:docker}
	I0210 11:05:34.589613    8716 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.5086975s)
	I0210 11:05:34.597790    8716 ssh_runner.go:195] Run: cat /etc/os-release
	I0210 11:05:34.605803    8716 info.go:137] Remote host: Buildroot 2023.02.9
	I0210 11:05:34.605803    8716 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0210 11:05:34.606434    8716 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0210 11:05:34.606641    8716 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\117642.pem -> 117642.pem in /etc/ssl/certs
	I0210 11:05:34.606641    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\117642.pem -> /etc/ssl/certs/117642.pem
	I0210 11:05:34.615388    8716 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0210 11:05:34.634222    8716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\117642.pem --> /etc/ssl/certs/117642.pem (1708 bytes)
	I0210 11:05:34.679413    8716 start.go:296] duration metric: took 4.6071798s for postStartSetup
	I0210 11:05:34.682026    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100-m03 ).state
	I0210 11:05:36.722940    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:05:36.722940    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:05:36.723033    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100-m03 ).networkadapters[0]).ipaddresses[0]
	I0210 11:05:39.118179    8716 main.go:141] libmachine: [stdout =====>] : 172.29.143.243
	
	I0210 11:05:39.119219    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:05:39.119466    8716 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\config.json ...
	I0210 11:05:39.121334    8716 start.go:128] duration metric: took 1m55.9799541s to createHost
	I0210 11:05:39.121373    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100-m03 ).state
	I0210 11:05:41.093774    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:05:41.094210    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:05:41.094286    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100-m03 ).networkadapters[0]).ipaddresses[0]
	I0210 11:05:43.506546    8716 main.go:141] libmachine: [stdout =====>] : 172.29.143.243
	
	I0210 11:05:43.506639    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:05:43.510909    8716 main.go:141] libmachine: Using SSH client type: native
	I0210 11:05:43.511544    8716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xde6ba0] 0xde96e0 <nil>  [] 0s} 172.29.143.243 22 <nil> <nil>}
	I0210 11:05:43.511544    8716 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0210 11:05:43.650521    8716 main.go:141] libmachine: SSH cmd err, output: <nil>: 1739185543.660188214
	
	I0210 11:05:43.650627    8716 fix.go:216] guest clock: 1739185543.660188214
	I0210 11:05:43.650627    8716 fix.go:229] Guest: 2025-02-10 11:05:43.660188214 +0000 UTC Remote: 2025-02-10 11:05:39.1213738 +0000 UTC m=+529.235586001 (delta=4.538814414s)
	I0210 11:05:43.650728    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100-m03 ).state
	I0210 11:05:45.651869    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:05:45.651869    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:05:45.651967    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100-m03 ).networkadapters[0]).ipaddresses[0]
	I0210 11:05:48.044992    8716 main.go:141] libmachine: [stdout =====>] : 172.29.143.243
	
	I0210 11:05:48.044992    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:05:48.049097    8716 main.go:141] libmachine: Using SSH client type: native
	I0210 11:05:48.049206    8716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xde6ba0] 0xde96e0 <nil>  [] 0s} 172.29.143.243 22 <nil> <nil>}
	I0210 11:05:48.049206    8716 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1739185543
	I0210 11:05:48.187998    8716 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Feb 10 11:05:43 UTC 2025
	
	I0210 11:05:48.188088    8716 fix.go:236] clock set: Mon Feb 10 11:05:43 UTC 2025
	 (err=<nil>)
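
Note: the clock fix compares the guest's `date +%s.%N` against the host's wall clock at the moment createHost finished, then re-pins the guest with `sudo date -s @<epoch>` (the log shows the epoch truncated to whole seconds). Here the guest had drifted about 4.5s ahead while the VM booted. A sketch of the delta computation, using the exact values from the log:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // guest: parsed from the VM's `date +%s.%N`; remote: host time when
        // createHost finished. Both values copied from the log above.
        guest := time.Unix(1739185543, 660188214)
        remote := time.Date(2025, time.February, 10, 11, 5, 39, 121373800, time.UTC)

        delta := guest.Sub(remote)
        fmt.Printf("guest-host delta: %s\n", delta) // ≈4.538814414s, matching the log
    }
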
	I0210 11:05:48.188088    8716 start.go:83] releasing machines lock for "ha-335100-m03", held for 2m5.0466027s
	I0210 11:05:48.188307    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100-m03 ).state
	I0210 11:05:50.208144    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:05:50.208144    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:05:50.208684    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100-m03 ).networkadapters[0]).ipaddresses[0]
	I0210 11:05:52.625512    8716 main.go:141] libmachine: [stdout =====>] : 172.29.143.243
	
	I0210 11:05:52.625563    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:05:52.630916    8716 out.go:177] * Found network options:
	I0210 11:05:52.633885    8716 out.go:177]   - NO_PROXY=172.29.136.99,172.29.139.212
	W0210 11:05:52.635523    8716 proxy.go:119] fail to check proxy env: Error ip not in block
	W0210 11:05:52.635523    8716 proxy.go:119] fail to check proxy env: Error ip not in block
	I0210 11:05:52.638016    8716 out.go:177]   - NO_PROXY=172.29.136.99,172.29.139.212
	W0210 11:05:52.640562    8716 proxy.go:119] fail to check proxy env: Error ip not in block
	W0210 11:05:52.640562    8716 proxy.go:119] fail to check proxy env: Error ip not in block
	W0210 11:05:52.641902    8716 proxy.go:119] fail to check proxy env: Error ip not in block
	W0210 11:05:52.641926    8716 proxy.go:119] fail to check proxy env: Error ip not in block
	I0210 11:05:52.643744    8716 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0210 11:05:52.643902    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100-m03 ).state
	I0210 11:05:52.650137    8716 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0210 11:05:52.650137    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100-m03 ).state
	I0210 11:05:54.687880    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:05:54.688117    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:05:54.688170    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100-m03 ).networkadapters[0]).ipaddresses[0]
	I0210 11:05:54.695202    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:05:54.695202    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:05:54.695202    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100-m03 ).networkadapters[0]).ipaddresses[0]
	I0210 11:05:57.135405    8716 main.go:141] libmachine: [stdout =====>] : 172.29.143.243
	
	I0210 11:05:57.135405    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:05:57.135405    8716 sshutil.go:53] new ssh client: &{IP:172.29.143.243 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-335100-m03\id_rsa Username:docker}
	I0210 11:05:57.159899    8716 main.go:141] libmachine: [stdout =====>] : 172.29.143.243
	
	I0210 11:05:57.160900    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:05:57.161310    8716 sshutil.go:53] new ssh client: &{IP:172.29.143.243 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-335100-m03\id_rsa Username:docker}
	I0210 11:05:57.227139    8716 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.5769489s)
	W0210 11:05:57.227240    8716 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0210 11:05:57.237731    8716 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0210 11:05:57.242387    8716 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.5985167s)
	W0210 11:05:57.242387    8716 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
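
Note: this exit status 127 is the root cause of the registry warning that follows (and of the "unexpected stderr" flagged by TestErrorSpam/setup): the connectivity probe was issued as `curl.exe`, the Windows binary name, but it runs over SSH inside the Linux guest, where only `curl` exists. The probe dies before any network traffic happens, so the warning reflects the missing binary rather than an actual proxy problem. A hedged sketch of choosing the binary name by execution target (the helper is hypothetical, not minikube's code):

    package main

    import "fmt"

    // curlBinary picks the curl name for the runner that will execute it. The
    // log above shows what happens when the host-local name leaks into an SSH
    // runner targeting the Linux guest: "curl.exe: command not found".
    func curlBinary(runsOnWindowsHost bool) string {
        if runsOnWindowsHost {
            return "curl.exe" // host-local exec on a Windows machine
        }
        return "curl" // SSH into the Linux guest VM
    }

    func main() {
        fmt.Println(curlBinary(false), "-sS -m 2 https://registry.k8s.io/")
    }
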
	I0210 11:05:57.268555    8716 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0210 11:05:57.268555    8716 start.go:495] detecting cgroup driver to use...
	I0210 11:05:57.268873    8716 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0210 11:05:57.311690    8716 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	W0210 11:05:57.335273    8716 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0210 11:05:57.335273    8716 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0210 11:05:57.341607    8716 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0210 11:05:57.360753    8716 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0210 11:05:57.369626    8716 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0210 11:05:57.398873    8716 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0210 11:05:57.430454    8716 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0210 11:05:57.458700    8716 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0210 11:05:57.488502    8716 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0210 11:05:57.518244    8716 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0210 11:05:57.547695    8716 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0210 11:05:57.576022    8716 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0210 11:05:57.604557    8716 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0210 11:05:57.623307    8716 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0210 11:05:57.631729    8716 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0210 11:05:57.662800    8716 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
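The sysctl probe above fails with status 255 because the br_netfilter module is not loaded yet; the next two commands load it and switch on IPv4 forwarding. The same fallback, sketched natively in Go (assumes root inside the guest and modprobe on $PATH):

package guestnet

import (
	"os"
	"os/exec"
)

// EnableBridgeNetfilter loads br_netfilter (the sysctl key
// net.bridge.bridge-nf-call-iptables only exists once the module is
// loaded) and then enables IPv4 forwarding, the equivalent of
// sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward".
func EnableBridgeNetfilter() error {
	if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
		return err
	}
	return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644)
}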
	I0210 11:05:57.686758    8716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 11:05:57.886483    8716 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0210 11:05:57.920422    8716 start.go:495] detecting cgroup driver to use...
	I0210 11:05:57.928978    8716 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0210 11:05:57.959794    8716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0210 11:05:57.992787    8716 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0210 11:05:58.027475    8716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0210 11:05:58.060140    8716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0210 11:05:58.095142    8716 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0210 11:05:58.154828    8716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0210 11:05:58.179253    8716 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0210 11:05:58.222703    8716 ssh_runner.go:195] Run: which cri-dockerd
	I0210 11:05:58.236504    8716 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0210 11:05:58.254815    8716 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0210 11:05:58.294230    8716 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0210 11:05:58.484956    8716 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0210 11:05:58.667686    8716 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0210 11:05:58.667795    8716 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
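The 130-byte daemon.json pushed here is what pins Docker to the "cgroupfs" driver chosen above. A hedged sketch of writing such a file; the exact JSON minikube generates may differ, but exec-opts with native.cgroupdriver is the documented Docker daemon knob:

package dockercfg

import (
	"encoding/json"
	"os"
)

// WriteDaemonJSON renders a minimal /etc/docker/daemon.json that forces
// the cgroupfs cgroup driver. The field set is illustrative, not
// minikube's exact payload.
func WriteDaemonJSON(path string) error {
	cfg := map[string]any{
		"exec-opts": []string{"native.cgroupdriver=cgroupfs"},
	}
	b, err := json.MarshalIndent(cfg, "", "  ")
	if err != nil {
		return err
	}
	return os.WriteFile(path, b, 0o644)
}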
	I0210 11:05:58.707811    8716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 11:05:58.892338    8716 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0210 11:06:01.499123    8716 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.6067548s)
	I0210 11:06:01.508793    8716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0210 11:06:01.544137    8716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0210 11:06:01.579742    8716 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0210 11:06:01.770693    8716 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0210 11:06:01.954692    8716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 11:06:02.155743    8716 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0210 11:06:02.194749    8716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0210 11:06:02.231375    8716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 11:06:02.428207    8716 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0210 11:06:02.537905    8716 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0210 11:06:02.546560    8716 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0210 11:06:02.555326    8716 start.go:563] Will wait 60s for crictl version
	I0210 11:06:02.563467    8716 ssh_runner.go:195] Run: which crictl
	I0210 11:06:02.578158    8716 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0210 11:06:02.632843    8716 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.4.0
	RuntimeApiVersion:  v1
	I0210 11:06:02.640406    8716 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0210 11:06:02.682318    8716 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0210 11:06:02.721023    8716 out.go:235] * Preparing Kubernetes v1.32.1 on Docker 27.4.0 ...
	I0210 11:06:02.725667    8716 out.go:177]   - env NO_PROXY=172.29.136.99
	I0210 11:06:02.728570    8716 out.go:177]   - env NO_PROXY=172.29.136.99,172.29.139.212
	I0210 11:06:02.730515    8716 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0210 11:06:02.735111    8716 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0210 11:06:02.735111    8716 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0210 11:06:02.735111    8716 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0210 11:06:02.735111    8716 ip.go:211] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:8b:81:3d Flags:up|broadcast|multicast|running}
	I0210 11:06:02.737429    8716 ip.go:214] interface addr: fe80::b840:883f:e0df:bdfd/64
	I0210 11:06:02.737429    8716 ip.go:214] interface addr: 172.29.128.1/20
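The ip.go lines above walk the host's adapters, pick the first whose name starts with the "vEthernet (Default Switch)" prefix, and take its IPv4 address (172.29.128.1/20) as the host-side gateway for the VM. The lookup, sketched with the standard library:

package hostip

import (
	"fmt"
	"net"
	"strings"
)

// IPForInterfacePrefix returns the first IPv4 address on the first
// interface whose name has the given prefix, mirroring the
// getIPForInterface search above.
func IPForInterfacePrefix(prefix string) (net.IP, error) {
	ifaces, err := net.Interfaces()
	if err != nil {
		return nil, err
	}
	for _, ifc := range ifaces {
		if !strings.HasPrefix(ifc.Name, prefix) {
			continue // e.g. "Ethernet 2" does not match the prefix
		}
		addrs, err := ifc.Addrs()
		if err != nil {
			return nil, err
		}
		for _, a := range addrs {
			if ipnet, ok := a.(*net.IPNet); ok && ipnet.IP.To4() != nil {
				return ipnet.IP, nil // the 172.29.128.1/20 entry above
			}
		}
	}
	return nil, fmt.Errorf("no interface matches %q", prefix)
}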
	I0210 11:06:02.745849    8716 ssh_runner.go:195] Run: grep 172.29.128.1	host.minikube.internal$ /etc/hosts
	I0210 11:06:02.753148    8716 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.29.128.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0210 11:06:02.776320    8716 mustload.go:65] Loading cluster: ha-335100
	I0210 11:06:02.777162    8716 config.go:182] Loaded profile config "ha-335100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0210 11:06:02.777829    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100 ).state
	I0210 11:06:04.764023    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:06:04.765029    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:06:04.765029    8716 host.go:66] Checking if "ha-335100" exists ...
	I0210 11:06:04.765632    8716 certs.go:68] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100 for IP: 172.29.143.243
	I0210 11:06:04.765632    8716 certs.go:194] generating shared ca certs ...
	I0210 11:06:04.765707    8716 certs.go:226] acquiring lock for ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 11:06:04.765707    8716 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0210 11:06:04.766647    8716 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0210 11:06:04.766702    8716 certs.go:256] generating profile certs ...
	I0210 11:06:04.766702    8716 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\client.key
	I0210 11:06:04.767225    8716 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\apiserver.key.cdd0df9c
	I0210 11:06:04.767361    8716 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\apiserver.crt.cdd0df9c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.29.136.99 172.29.139.212 172.29.143.243 172.29.143.254]
	I0210 11:06:04.976664    8716 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\apiserver.crt.cdd0df9c ...
	I0210 11:06:04.976664    8716 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\apiserver.crt.cdd0df9c: {Name:mk9ba5b24f65192acbccdfb2285fadb10bd76c34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 11:06:04.978001    8716 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\apiserver.key.cdd0df9c ...
	I0210 11:06:04.978001    8716 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\apiserver.key.cdd0df9c: {Name:mkb5491b0832431dace075b26866783b7e681dab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 11:06:04.979517    8716 certs.go:381] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\apiserver.crt.cdd0df9c -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\apiserver.crt
	I0210 11:06:04.997446    8716 certs.go:385] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\apiserver.key.cdd0df9c -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\apiserver.key
	I0210 11:06:04.998447    8716 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\proxy-client.key
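The apiserver certificate generated above lists every address a client might dial: the in-cluster service IP (10.96.0.1), loopback, the kube-vip VIP (172.29.143.254) and each control-plane node IP, so TLS verification succeeds no matter which path reaches the apiserver. A sketch of issuing such a cert with IP SANs, assuming an already-loaded CA certificate and key (names and validity here are illustrative):

package certs

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

// IssueAPIServerCert returns a DER-encoded, CA-signed server cert whose
// IP SANs cover every address in ips, as in the crypto.go line above.
func IssueAPIServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey, ips []string) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	for _, s := range ips {
		if ip := net.ParseIP(s); ip != nil {
			tmpl.IPAddresses = append(tmpl.IPAddresses, ip)
		}
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
	return der, key, err
}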
	I0210 11:06:04.998447    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0210 11:06:04.998770    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0210 11:06:04.998770    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0210 11:06:04.998770    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0210 11:06:04.998770    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0210 11:06:04.998770    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0210 11:06:04.999880    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0210 11:06:05.000134    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0210 11:06:05.000134    8716 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\11764.pem (1338 bytes)
	W0210 11:06:05.000755    8716 certs.go:480] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\11764_empty.pem, impossibly tiny 0 bytes
	I0210 11:06:05.000866    8716 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0210 11:06:05.000920    8716 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0210 11:06:05.000920    8716 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0210 11:06:05.001544    8716 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0210 11:06:05.001544    8716 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\117642.pem (1708 bytes)
	I0210 11:06:05.002203    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0210 11:06:05.002203    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\11764.pem -> /usr/share/ca-certificates/11764.pem
	I0210 11:06:05.002203    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\117642.pem -> /usr/share/ca-certificates/117642.pem
	I0210 11:06:05.002203    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100 ).state
	I0210 11:06:07.062589    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:06:07.062873    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:06:07.062954    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100 ).networkadapters[0]).ipaddresses[0]
	I0210 11:06:09.479798    8716 main.go:141] libmachine: [stdout =====>] : 172.29.136.99
	
	I0210 11:06:09.480976    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:06:09.481217    8716 sshutil.go:53] new ssh client: &{IP:172.29.136.99 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-335100\id_rsa Username:docker}
	I0210 11:06:09.593251    8716 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0210 11:06:09.601538    8716 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0210 11:06:09.633745    8716 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0210 11:06:09.643962    8716 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0210 11:06:09.673831    8716 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0210 11:06:09.684671    8716 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0210 11:06:09.717023    8716 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0210 11:06:09.724801    8716 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0210 11:06:09.757569    8716 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0210 11:06:09.765160    8716 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0210 11:06:09.793987    8716 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0210 11:06:09.801900    8716 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0210 11:06:09.822876    8716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0210 11:06:09.869554    8716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0210 11:06:09.915239    8716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0210 11:06:09.962387    8716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0210 11:06:10.007395    8716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0210 11:06:10.057322    8716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0210 11:06:10.106649    8716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0210 11:06:10.153994    8716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0210 11:06:10.202362    8716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0210 11:06:10.250620    8716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\11764.pem --> /usr/share/ca-certificates/11764.pem (1338 bytes)
	I0210 11:06:10.295441    8716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\117642.pem --> /usr/share/ca-certificates/117642.pem (1708 bytes)
	I0210 11:06:10.344584    8716 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0210 11:06:10.377868    8716 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0210 11:06:10.408522    8716 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0210 11:06:10.439058    8716 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0210 11:06:10.471521    8716 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0210 11:06:10.502174    8716 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0210 11:06:10.534428    8716 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0210 11:06:10.578707    8716 ssh_runner.go:195] Run: openssl version
	I0210 11:06:10.596105    8716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0210 11:06:10.626948    8716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0210 11:06:10.634388    8716 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb 10 10:24 /usr/share/ca-certificates/minikubeCA.pem
	I0210 11:06:10.643570    8716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0210 11:06:10.660544    8716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0210 11:06:10.689750    8716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11764.pem && ln -fs /usr/share/ca-certificates/11764.pem /etc/ssl/certs/11764.pem"
	I0210 11:06:10.719352    8716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11764.pem
	I0210 11:06:10.727191    8716 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb 10 10:40 /usr/share/ca-certificates/11764.pem
	I0210 11:06:10.735632    8716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11764.pem
	I0210 11:06:10.757665    8716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11764.pem /etc/ssl/certs/51391683.0"
	I0210 11:06:10.786268    8716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/117642.pem && ln -fs /usr/share/ca-certificates/117642.pem /etc/ssl/certs/117642.pem"
	I0210 11:06:10.817001    8716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/117642.pem
	I0210 11:06:10.825441    8716 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb 10 10:40 /usr/share/ca-certificates/117642.pem
	I0210 11:06:10.834179    8716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/117642.pem
	I0210 11:06:10.852526    8716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/117642.pem /etc/ssl/certs/3ec20f2e.0"
	I0210 11:06:10.881393    8716 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0210 11:06:10.888118    8716 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0210 11:06:10.888884    8716 kubeadm.go:934] updating node {m03 172.29.143.243 8443 v1.32.1 docker true true} ...
	I0210 11:06:10.888884    8716 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-335100-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.29.143.243
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:ha-335100 Namespace:default APIServerHAVIP:172.29.143.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0210 11:06:10.888884    8716 kube-vip.go:115] generating kube-vip config ...
	I0210 11:06:10.896542    8716 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0210 11:06:10.925809    8716 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0210 11:06:10.925907    8716 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.29.143.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.9
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
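The manifest above is a static pod: kube-vip runs on every control-plane node with NET_ADMIN/NET_RAW, leader-elects through the plndr-cp-lock lease, and the current leader advertises the VIP 172.29.143.254 via ARP while load-balancing port 8443 across the apiservers. A sketch of rendering such a manifest from a Go template; the template text and field names are illustrative, not minikube's actual kube-vip.go template:

package main

import (
	"os"
	"text/template"
)

// A trimmed-down pod template parameterized on the pieces that vary
// per cluster: the image, the VIP address, and the apiserver port.
const podTmpl = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: {{ .Image }}
    args: ["manager"]
    env:
    - name: address
      value: {{ .VIP }}
    - name: port
      value: "{{ .Port }}"
  hostNetwork: true
`

func main() {
	t := template.Must(template.New("kube-vip").Parse(podTmpl))
	_ = t.Execute(os.Stdout, struct {
		Image string
		VIP   string
		Port  int
	}{"ghcr.io/kube-vip/kube-vip:v0.8.9", "172.29.143.254", 8443})
}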
	I0210 11:06:10.934552    8716 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0210 11:06:10.950456    8716 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.32.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.32.1': No such file or directory
	
	Initiating transfer...
	I0210 11:06:10.958757    8716 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.32.1
	I0210 11:06:10.979710    8716 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubectl.sha256
	I0210 11:06:10.979753    8716 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubeadm.sha256
	I0210 11:06:10.979753    8716 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubelet.sha256
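"checksum=file:" in the URLs above means each binary is fetched alongside its published .sha256 file and verified before installation. The verification step, sketched (URLs passed in; assumes the sum file's first whitespace-separated field is the hex digest):

package fetch

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"strings"
)

// FetchVerified downloads a binary and its .sha256 companion, then
// refuses to return the bytes unless the digests match.
func FetchVerified(binURL, sumURL string) ([]byte, error) {
	get := func(u string) ([]byte, error) {
		resp, err := http.Get(u)
		if err != nil {
			return nil, err
		}
		defer resp.Body.Close()
		return io.ReadAll(resp.Body)
	}
	bin, err := get(binURL)
	if err != nil {
		return nil, err
	}
	sum, err := get(sumURL)
	if err != nil {
		return nil, err
	}
	want := strings.Fields(string(sum))[0] // "<hex>" or "<hex>  kubelet"
	got := sha256.Sum256(bin)
	if hex.EncodeToString(got[:]) != want {
		return nil, fmt.Errorf("checksum mismatch for %s", binURL)
	}
	return bin, nil
}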
	I0210 11:06:10.979753    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.32.1/kubectl -> /var/lib/minikube/binaries/v1.32.1/kubectl
	I0210 11:06:10.979753    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.32.1/kubeadm -> /var/lib/minikube/binaries/v1.32.1/kubeadm
	I0210 11:06:10.991260    8716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0210 11:06:10.991446    8716 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.1/kubeadm
	I0210 11:06:10.991446    8716 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.1/kubectl
	I0210 11:06:11.014019    8716 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.1/kubeadm': No such file or directory
	I0210 11:06:11.014019    8716 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.1/kubectl': No such file or directory
	I0210 11:06:11.014019    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.32.1/kubelet -> /var/lib/minikube/binaries/v1.32.1/kubelet
	I0210 11:06:11.014019    8716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.32.1/kubeadm --> /var/lib/minikube/binaries/v1.32.1/kubeadm (70942872 bytes)
	I0210 11:06:11.014019    8716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.32.1/kubectl --> /var/lib/minikube/binaries/v1.32.1/kubectl (57323672 bytes)
	I0210 11:06:11.022380    8716 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.1/kubelet
	I0210 11:06:11.074693    8716 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.1/kubelet': No such file or directory
	I0210 11:06:11.075271    8716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.32.1/kubelet --> /var/lib/minikube/binaries/v1.32.1/kubelet (77398276 bytes)
	I0210 11:06:12.178493    8716 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0210 11:06:12.197347    8716 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0210 11:06:12.233885    8716 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0210 11:06:12.264780    8716 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0210 11:06:12.305502    8716 ssh_runner.go:195] Run: grep 172.29.143.254	control-plane.minikube.internal$ /etc/hosts
	I0210 11:06:12.312654    8716 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.29.143.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
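The bash one-liner above is an idempotent /etc/hosts update: strip any prior control-plane.minikube.internal line, append the fresh VIP mapping, and copy the temp file back into place. The same idea in Go (a sketch; real code must run with enough privilege to replace /etc/hosts):

package hostsfile

import (
	"os"
	"strings"
)

// EnsureHostEntry drops any existing line ending in "\t"+name and
// appends "ip\tname", mirroring the grep -v / echo / cp pipeline above.
func EnsureHostEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop the stale mapping
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}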
	I0210 11:06:12.343296    8716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 11:06:12.541037    8716 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0210 11:06:12.574076    8716 host.go:66] Checking if "ha-335100" exists ...
	I0210 11:06:12.574783    8716 start.go:317] joinCluster: &{Name:ha-335100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:ha-335100 Namespace:default APIServerHAVIP:172.29.143.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.29.136.99 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.29.139.212 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:172.29.143.243 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 11:06:12.574783    8716 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0210 11:06:12.574783    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100 ).state
	I0210 11:06:14.606327    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:06:14.606327    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:06:14.606327    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100 ).networkadapters[0]).ipaddresses[0]
	I0210 11:06:17.087838    8716 main.go:141] libmachine: [stdout =====>] : 172.29.136.99
	
	I0210 11:06:17.088198    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:06:17.088262    8716 sshutil.go:53] new ssh client: &{IP:172.29.136.99 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-335100\id_rsa Username:docker}
	I0210 11:06:17.288068    8716 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm token create --print-join-command --ttl=0": (4.7129136s)
	I0210 11:06:17.288160    8716 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:172.29.143.243 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0210 11:06:17.288252    8716 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 2z2zrn.nkcfcdek82976009 --discovery-token-ca-cert-hash sha256:1b8cb99781302ec691f777094c36ac43bdecc74e7ca118c5fbf40794f47c93c3 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-335100-m03 --control-plane --apiserver-advertise-address=172.29.143.243 --apiserver-bind-port=8443"
	I0210 11:06:58.578334    8716 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 2z2zrn.nkcfcdek82976009 --discovery-token-ca-cert-hash sha256:1b8cb99781302ec691f777094c36ac43bdecc74e7ca118c5fbf40794f47c93c3 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-335100-m03 --control-plane --apiserver-advertise-address=172.29.143.243 --apiserver-bind-port=8443": (41.2896023s)
	I0210 11:06:58.579779    8716 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0210 11:06:59.264069    8716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-335100-m03 minikube.k8s.io/updated_at=2025_02_10T11_06_59_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=a597502568cd649748018b4cfeb698a4b8b36160 minikube.k8s.io/name=ha-335100 minikube.k8s.io/primary=false
	I0210 11:06:59.416137    8716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-335100-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0210 11:06:59.597705    8716 start.go:319] duration metric: took 47.0223761s to joinCluster
	I0210 11:06:59.598136    8716 start.go:235] Will wait 6m0s for node &{Name:m03 IP:172.29.143.243 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0210 11:06:59.598687    8716 config.go:182] Loaded profile config "ha-335100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0210 11:06:59.622819    8716 out.go:177] * Verifying Kubernetes components...
	I0210 11:06:59.638372    8716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 11:06:59.964902    8716 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0210 11:06:59.993297    8716 loader.go:402] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0210 11:06:59.993879    8716 kapi.go:59] client config for ha-335100: &rest.Config{Host:"https://172.29.143.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\ha-335100\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\ha-335100\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2a2d300), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0210 11:06:59.993879    8716 kubeadm.go:483] Overriding stale ClientConfig host https://172.29.143.254:8443 with https://172.29.136.99:8443
	I0210 11:06:59.994766    8716 node_ready.go:35] waiting up to 6m0s for node "ha-335100-m03" to be "Ready" ...
	I0210 11:06:59.994991    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m03
	I0210 11:06:59.994991    8716 round_trippers.go:476] Request Headers:
	I0210 11:06:59.994991    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:06:59.995073    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:00.010316    8716 round_trippers.go:581] Response Status: 200 OK in 15 milliseconds
	I0210 11:07:00.495925    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m03
	I0210 11:07:00.495925    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:00.495925    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:00.495925    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:00.501392    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:07:00.995607    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m03
	I0210 11:07:00.995607    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:00.995607    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:00.995607    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:01.009859    8716 round_trippers.go:581] Response Status: 200 OK in 14 milliseconds
	I0210 11:07:01.495982    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m03
	I0210 11:07:01.495982    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:01.495982    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:01.495982    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:01.501850    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:07:01.995551    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m03
	I0210 11:07:01.995551    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:01.995551    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:01.995551    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:02.008887    8716 round_trippers.go:581] Response Status: 200 OK in 13 milliseconds
	I0210 11:07:02.009138    8716 node_ready.go:53] node "ha-335100-m03" has status "Ready":"False"
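The repeating GETs here are node_ready.go polling the m03 node object roughly every 500ms until its NodeReady condition flips to True, within the 6m budget announced above. The loop, sketched with client-go:

package nodewait

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// WaitNodeReady re-fetches the node every 500ms (the GET lines above)
// until its NodeReady condition is True or the timeout elapses.
func WaitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return context.DeadlineExceeded
}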
	I0210 11:07:02.495647    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m03
	I0210 11:07:02.495647    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:02.495647    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:02.495647    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:02.500655    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:07:02.995202    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m03
	I0210 11:07:02.995202    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:02.995202    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:02.995202    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:03.026092    8716 round_trippers.go:581] Response Status: 200 OK in 30 milliseconds
	I0210 11:07:03.496347    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m03
	I0210 11:07:03.496347    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:03.496347    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:03.496347    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:03.501974    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:07:03.995145    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m03
	I0210 11:07:03.995145    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:03.995145    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:03.995145    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:04.005153    8716 round_trippers.go:581] Response Status: 200 OK in 10 milliseconds
	I0210 11:07:04.495809    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m03
	I0210 11:07:04.495809    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:04.495877    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:04.495877    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:04.501532    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:07:04.501532    8716 node_ready.go:53] node "ha-335100-m03" has status "Ready":"False"
	I0210 11:07:04.996633    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m03
	I0210 11:07:04.996751    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:04.996751    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:04.996751    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:05.002269    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:07:05.495489    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m03
	I0210 11:07:05.495489    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:05.495489    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:05.495489    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:05.610530    8716 round_trippers.go:581] Response Status: 200 OK in 115 milliseconds
	I0210 11:07:05.995197    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m03
	I0210 11:07:05.995197    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:05.995197    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:05.995197    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:06.000705    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:07:06.495338    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m03
	I0210 11:07:06.495338    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:06.495338    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:06.495338    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:06.500646    8716 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 11:07:06.995985    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m03
	I0210 11:07:06.996047    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:06.996107    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:06.996107    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:07.002234    8716 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0210 11:07:07.002596    8716 node_ready.go:53] node "ha-335100-m03" has status "Ready":"False"
	I0210 11:07:07.496304    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m03
	I0210 11:07:07.496304    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:07.496304    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:07.496304    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:07.500878    8716 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 11:07:07.996077    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m03
	I0210 11:07:07.996157    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:07.996157    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:07.996157    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:08.001017    8716 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 11:07:08.495926    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m03
	I0210 11:07:08.495926    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:08.495926    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:08.495926    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:08.501081    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:07:08.995324    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m03
	I0210 11:07:08.995324    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:08.995324    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:08.995324    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:09.000292    8716 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 11:07:09.495596    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m03
	I0210 11:07:09.496014    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:09.496014    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:09.496014    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:09.501585    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:07:09.501981    8716 node_ready.go:53] node "ha-335100-m03" has status "Ready":"False"
	I0210 11:07:09.996018    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m03
	I0210 11:07:09.996018    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:09.996018    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:09.996018    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:10.001430    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:07:10.495766    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m03
	I0210 11:07:10.495766    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:10.495766    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:10.495766    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:10.501091    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:07:10.995022    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m03
	I0210 11:07:10.995022    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:10.995022    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:10.995022    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:11.000277    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:07:11.495511    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m03
	I0210 11:07:11.495511    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:11.495511    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:11.495511    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:11.501446    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:07:11.997309    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m03
	I0210 11:07:11.997309    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:11.997309    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:11.997309    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:12.003132    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:07:12.003433    8716 node_ready.go:53] node "ha-335100-m03" has status "Ready":"False"
	I0210 11:07:12.496399    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m03
	I0210 11:07:12.496399    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:12.496470    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:12.496470    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:12.501424    8716 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 11:07:12.995715    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m03
	I0210 11:07:12.995715    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:12.995715    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:12.995715    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:13.001276    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:07:13.495516    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m03
	I0210 11:07:13.495516    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:13.495586    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:13.495586    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:13.506956    8716 round_trippers.go:581] Response Status: 200 OK in 11 milliseconds
	I0210 11:07:13.995458    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m03
	I0210 11:07:13.995458    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:13.995458    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:13.995458    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:14.001513    8716 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0210 11:07:14.496067    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m03
	I0210 11:07:14.496067    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:14.496067    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:14.496067    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:14.501677    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:07:14.501677    8716 node_ready.go:53] node "ha-335100-m03" has status "Ready":"False"
	I0210 11:07:14.996471    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m03
	I0210 11:07:14.996471    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:14.996471    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:14.996471    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:15.001790    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:07:15.496341    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m03
	I0210 11:07:15.496341    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:15.496416    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:15.496416    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:15.501201    8716 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 11:07:15.996278    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m03
	I0210 11:07:15.996354    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:15.996354    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:15.996354    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:16.002237    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:07:16.496696    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m03
	I0210 11:07:16.496696    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:16.496696    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:16.496696    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:16.505315    8716 round_trippers.go:581] Response Status: 200 OK in 8 milliseconds
	I0210 11:07:16.505315    8716 node_ready.go:53] node "ha-335100-m03" has status "Ready":"False"
	I0210 11:07:16.996001    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m03
	I0210 11:07:16.996001    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:16.996001    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:16.996001    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:17.001502    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:07:17.496005    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m03
	I0210 11:07:17.496005    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:17.496005    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:17.496005    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:17.501103    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:07:17.995645    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m03
	I0210 11:07:17.995645    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:17.995645    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:17.995645    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:18.000757    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:07:18.496080    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m03
	I0210 11:07:18.496482    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:18.496482    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:18.496553    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:18.501160    8716 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 11:07:18.996617    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m03
	I0210 11:07:18.996617    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:18.996617    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:18.996617    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:19.000935    8716 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 11:07:19.000935    8716 node_ready.go:53] node "ha-335100-m03" has status "Ready":"False"
	I0210 11:07:19.497592    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m03
	I0210 11:07:19.497592    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:19.497592    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:19.497592    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:19.503678    8716 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0210 11:07:19.996355    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m03
	I0210 11:07:19.996355    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:19.996424    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:19.996424    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:20.003488    8716 round_trippers.go:581] Response Status: 200 OK in 7 milliseconds
	I0210 11:07:20.496007    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m03
	I0210 11:07:20.496007    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:20.496007    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:20.496007    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:20.501352    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:07:20.995396    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m03
	I0210 11:07:20.995826    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:20.995826    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:20.995826    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:21.001056    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:07:21.006143    8716 node_ready.go:53] node "ha-335100-m03" has status "Ready":"False"
	I0210 11:07:21.496840    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m03
	I0210 11:07:21.496840    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:21.496924    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:21.496924    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:21.503078    8716 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0210 11:07:21.996375    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m03
	I0210 11:07:21.996375    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:21.996375    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:21.996375    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:22.002139    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:07:22.496215    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m03
	I0210 11:07:22.496302    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:22.496302    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:22.496302    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:22.501730    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:07:22.995803    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m03
	I0210 11:07:22.995803    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:22.995803    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:22.995803    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:23.000911    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:07:23.001794    8716 node_ready.go:49] node "ha-335100-m03" has status "Ready":"True"
	I0210 11:07:23.001861    8716 node_ready.go:38] duration metric: took 23.0067619s for node "ha-335100-m03" to be "Ready" ...
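
The 500ms poll above reduces to a simple client-go loop: fetch the Node object and test its Ready condition. A minimal sketch of that shape (not minikube's exact code; the kubeconfig path is a placeholder):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the Node's Ready condition is True,
// i.e. the check behind the `has status "Ready":"False"` lines above.
func nodeReady(node *corev1.Node) bool {
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Placeholder kubeconfig path; an assumption for this sketch.
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Poll every 500ms, matching the cadence visible in the timestamps above.
	for {
		node, err := clientset.CoreV1().Nodes().Get(context.TODO(), "ha-335100-m03", metav1.GetOptions{})
		if err == nil && nodeReady(node) {
			fmt.Println("node Ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
}
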
	I0210 11:07:23.001861    8716 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0210 11:07:23.001944    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods
	I0210 11:07:23.001944    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:23.001944    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:23.001944    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:23.008059    8716 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0210 11:07:23.010098    8716 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-gc5gf" in "kube-system" namespace to be "Ready" ...
	I0210 11:07:23.010274    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-gc5gf
	I0210 11:07:23.010328    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:23.010328    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:23.010328    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:23.015070    8716 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 11:07:23.015691    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100
	I0210 11:07:23.015721    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:23.015721    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:23.015721    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:23.019447    8716 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 11:07:23.020298    8716 pod_ready.go:93] pod "coredns-668d6bf9bc-gc5gf" in "kube-system" namespace has status "Ready":"True"
	I0210 11:07:23.020325    8716 pod_ready.go:82] duration metric: took 10.1675ms for pod "coredns-668d6bf9bc-gc5gf" in "kube-system" namespace to be "Ready" ...
	I0210 11:07:23.020325    8716 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-s44gp" in "kube-system" namespace to be "Ready" ...
	I0210 11:07:23.020325    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-s44gp
	I0210 11:07:23.020325    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:23.020325    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:23.020325    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:23.023967    8716 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 11:07:23.024790    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100
	I0210 11:07:23.024834    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:23.024834    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:23.024864    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:23.028619    8716 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 11:07:23.028619    8716 pod_ready.go:93] pod "coredns-668d6bf9bc-s44gp" in "kube-system" namespace has status "Ready":"True"
	I0210 11:07:23.028619    8716 pod_ready.go:82] duration metric: took 8.2944ms for pod "coredns-668d6bf9bc-s44gp" in "kube-system" namespace to be "Ready" ...
	I0210 11:07:23.028619    8716 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-335100" in "kube-system" namespace to be "Ready" ...
	I0210 11:07:23.028619    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods/etcd-ha-335100
	I0210 11:07:23.028619    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:23.028619    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:23.028619    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:23.032713    8716 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 11:07:23.034447    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100
	I0210 11:07:23.034541    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:23.034541    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:23.034541    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:23.039204    8716 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 11:07:23.039827    8716 pod_ready.go:93] pod "etcd-ha-335100" in "kube-system" namespace has status "Ready":"True"
	I0210 11:07:23.039827    8716 pod_ready.go:82] duration metric: took 11.2076ms for pod "etcd-ha-335100" in "kube-system" namespace to be "Ready" ...
	I0210 11:07:23.039827    8716 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-335100-m02" in "kube-system" namespace to be "Ready" ...
	I0210 11:07:23.039827    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods/etcd-ha-335100-m02
	I0210 11:07:23.039827    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:23.039827    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:23.039827    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:23.065385    8716 round_trippers.go:581] Response Status: 200 OK in 25 milliseconds
	I0210 11:07:23.065547    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:07:23.065547    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:23.065547    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:23.065547    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:23.071463    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:07:23.072305    8716 pod_ready.go:93] pod "etcd-ha-335100-m02" in "kube-system" namespace has status "Ready":"True"
	I0210 11:07:23.072378    8716 pod_ready.go:82] duration metric: took 32.5504ms for pod "etcd-ha-335100-m02" in "kube-system" namespace to be "Ready" ...
	I0210 11:07:23.072378    8716 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-335100-m03" in "kube-system" namespace to be "Ready" ...
	I0210 11:07:23.196129    8716 request.go:661] Waited for 123.6666ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods/etcd-ha-335100-m03
	I0210 11:07:23.196129    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods/etcd-ha-335100-m03
	I0210 11:07:23.196129    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:23.196129    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:23.196129    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:23.201858    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:07:23.396650    8716 request.go:661] Waited for 193.7727ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.136.99:8443/api/v1/nodes/ha-335100-m03
	I0210 11:07:23.396650    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m03
	I0210 11:07:23.396650    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:23.396650    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:23.396650    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:23.402347    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:07:23.402911    8716 pod_ready.go:93] pod "etcd-ha-335100-m03" in "kube-system" namespace has status "Ready":"True"
	I0210 11:07:23.402982    8716 pod_ready.go:82] duration metric: took 330.6006ms for pod "etcd-ha-335100-m03" in "kube-system" namespace to be "Ready" ...
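
The "Waited for ... due to client-side throttling" entries above come from client-go's token-bucket rate limiter, which delays requests once the configured burst is spent. A minimal sketch of how QPS and Burst are set on a rest.Config (values here are illustrative only; minikube's own client construction may differ):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path; an assumption for this sketch.
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	// Low QPS/Burst values make the client-side throttling visible:
	// requests beyond the burst are delayed before being sent, which is
	// what request.go logs as "Waited for ... due to client-side throttling".
	config.QPS = 5
	config.Burst = 10

	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("nodes:", len(nodes.Items))
}
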
	I0210 11:07:23.402982    8716 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-335100" in "kube-system" namespace to be "Ready" ...
	I0210 11:07:23.596531    8716 request.go:661] Waited for 193.4118ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-335100
	I0210 11:07:23.596981    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-335100
	I0210 11:07:23.597052    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:23.597052    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:23.597052    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:23.602918    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:07:23.796109    8716 request.go:661] Waited for 192.27ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.136.99:8443/api/v1/nodes/ha-335100
	I0210 11:07:23.796575    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100
	I0210 11:07:23.796654    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:23.796654    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:23.796654    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:23.802257    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:07:23.803008    8716 pod_ready.go:93] pod "kube-apiserver-ha-335100" in "kube-system" namespace has status "Ready":"True"
	I0210 11:07:23.803008    8716 pod_ready.go:82] duration metric: took 400.0211ms for pod "kube-apiserver-ha-335100" in "kube-system" namespace to be "Ready" ...
	I0210 11:07:23.803008    8716 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-335100-m02" in "kube-system" namespace to be "Ready" ...
	I0210 11:07:23.995899    8716 request.go:661] Waited for 192.7914ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-335100-m02
	I0210 11:07:23.995899    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-335100-m02
	I0210 11:07:23.995899    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:23.995899    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:23.995899    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:24.004105    8716 round_trippers.go:581] Response Status: 200 OK in 8 milliseconds
	I0210 11:07:24.196286    8716 request.go:661] Waited for 191.756ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:07:24.196286    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:07:24.196286    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:24.196286    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:24.196286    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:24.202219    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:07:24.202522    8716 pod_ready.go:93] pod "kube-apiserver-ha-335100-m02" in "kube-system" namespace has status "Ready":"True"
	I0210 11:07:24.202522    8716 pod_ready.go:82] duration metric: took 399.4114ms for pod "kube-apiserver-ha-335100-m02" in "kube-system" namespace to be "Ready" ...
	I0210 11:07:24.202522    8716 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-335100-m03" in "kube-system" namespace to be "Ready" ...
	I0210 11:07:24.396676    8716 request.go:661] Waited for 194.05ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-335100-m03
	I0210 11:07:24.397153    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-335100-m03
	I0210 11:07:24.397153    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:24.397153    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:24.397153    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:24.414380    8716 round_trippers.go:581] Response Status: 200 OK in 16 milliseconds
	I0210 11:07:24.596715    8716 request.go:661] Waited for 182.1205ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.136.99:8443/api/v1/nodes/ha-335100-m03
	I0210 11:07:24.596715    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m03
	I0210 11:07:24.596715    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:24.596715    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:24.596715    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:24.602619    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:07:24.602938    8716 pod_ready.go:93] pod "kube-apiserver-ha-335100-m03" in "kube-system" namespace has status "Ready":"True"
	I0210 11:07:24.603052    8716 pod_ready.go:82] duration metric: took 400.5261ms for pod "kube-apiserver-ha-335100-m03" in "kube-system" namespace to be "Ready" ...
	I0210 11:07:24.603052    8716 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-335100" in "kube-system" namespace to be "Ready" ...
	I0210 11:07:24.796423    8716 request.go:661] Waited for 193.2651ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-335100
	I0210 11:07:24.796423    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-335100
	I0210 11:07:24.796423    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:24.796423    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:24.796423    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:24.802443    8716 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0210 11:07:24.996691    8716 request.go:661] Waited for 193.2589ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.136.99:8443/api/v1/nodes/ha-335100
	I0210 11:07:24.996691    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100
	I0210 11:07:24.996691    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:24.996691    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:24.996691    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:25.002489    8716 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 11:07:25.002489    8716 pod_ready.go:93] pod "kube-controller-manager-ha-335100" in "kube-system" namespace has status "Ready":"True"
	I0210 11:07:25.002489    8716 pod_ready.go:82] duration metric: took 399.432ms for pod "kube-controller-manager-ha-335100" in "kube-system" namespace to be "Ready" ...
	I0210 11:07:25.002489    8716 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-335100-m02" in "kube-system" namespace to be "Ready" ...
	I0210 11:07:25.195910    8716 request.go:661] Waited for 193.4189ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-335100-m02
	I0210 11:07:25.196250    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-335100-m02
	I0210 11:07:25.196393    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:25.196420    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:25.196420    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:25.202596    8716 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0210 11:07:25.396083    8716 request.go:661] Waited for 192.3614ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:07:25.396404    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:07:25.396404    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:25.396404    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:25.396404    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:25.402058    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:07:25.402458    8716 pod_ready.go:93] pod "kube-controller-manager-ha-335100-m02" in "kube-system" namespace has status "Ready":"True"
	I0210 11:07:25.402522    8716 pod_ready.go:82] duration metric: took 399.9644ms for pod "kube-controller-manager-ha-335100-m02" in "kube-system" namespace to be "Ready" ...
	I0210 11:07:25.402522    8716 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-335100-m03" in "kube-system" namespace to be "Ready" ...
	I0210 11:07:25.595984    8716 request.go:661] Waited for 193.3413ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-335100-m03
	I0210 11:07:25.596427    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-335100-m03
	I0210 11:07:25.596493    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:25.596493    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:25.596493    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:25.601751    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:07:25.796976    8716 request.go:661] Waited for 195.2228ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.136.99:8443/api/v1/nodes/ha-335100-m03
	I0210 11:07:25.796976    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m03
	I0210 11:07:25.796976    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:25.796976    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:25.796976    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:25.804074    8716 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0210 11:07:25.804877    8716 pod_ready.go:93] pod "kube-controller-manager-ha-335100-m03" in "kube-system" namespace has status "Ready":"True"
	I0210 11:07:25.804979    8716 pod_ready.go:82] duration metric: took 402.4533ms for pod "kube-controller-manager-ha-335100-m03" in "kube-system" namespace to be "Ready" ...
	I0210 11:07:25.804979    8716 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-b5xnq" in "kube-system" namespace to be "Ready" ...
	I0210 11:07:25.996125    8716 request.go:661] Waited for 191.0305ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods/kube-proxy-b5xnq
	I0210 11:07:25.996125    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods/kube-proxy-b5xnq
	I0210 11:07:25.996125    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:25.996125    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:25.996125    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:26.001647    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:07:26.195951    8716 request.go:661] Waited for 193.482ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:07:26.196400    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:07:26.196400    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:26.196400    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:26.196400    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:26.201307    8716 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 11:07:26.201612    8716 pod_ready.go:93] pod "kube-proxy-b5xnq" in "kube-system" namespace has status "Ready":"True"
	I0210 11:07:26.201770    8716 pod_ready.go:82] duration metric: took 396.7155ms for pod "kube-proxy-b5xnq" in "kube-system" namespace to be "Ready" ...
	I0210 11:07:26.201770    8716 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-b9g27" in "kube-system" namespace to be "Ready" ...
	I0210 11:07:26.396766    8716 request.go:661] Waited for 194.7896ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods/kube-proxy-b9g27
	I0210 11:07:26.396766    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods/kube-proxy-b9g27
	I0210 11:07:26.396766    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:26.396766    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:26.396766    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:26.403095    8716 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0210 11:07:26.597069    8716 request.go:661] Waited for 193.8082ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.136.99:8443/api/v1/nodes/ha-335100-m03
	I0210 11:07:26.597349    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m03
	I0210 11:07:26.597349    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:26.597349    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:26.597349    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:26.605574    8716 round_trippers.go:581] Response Status: 200 OK in 8 milliseconds
	I0210 11:07:26.605911    8716 pod_ready.go:93] pod "kube-proxy-b9g27" in "kube-system" namespace has status "Ready":"True"
	I0210 11:07:26.606064    8716 pod_ready.go:82] duration metric: took 404.2891ms for pod "kube-proxy-b9g27" in "kube-system" namespace to be "Ready" ...
	I0210 11:07:26.606064    8716 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-xzs7w" in "kube-system" namespace to be "Ready" ...
	I0210 11:07:26.795958    8716 request.go:661] Waited for 189.676ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xzs7w
	I0210 11:07:26.796192    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xzs7w
	I0210 11:07:26.796192    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:26.796192    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:26.796192    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:26.804013    8716 round_trippers.go:581] Response Status: 200 OK in 7 milliseconds
	I0210 11:07:26.996512    8716 request.go:661] Waited for 191.5085ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.136.99:8443/api/v1/nodes/ha-335100
	I0210 11:07:26.996859    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100
	I0210 11:07:26.996859    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:26.996859    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:26.996859    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:27.002467    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:07:27.003128    8716 pod_ready.go:93] pod "kube-proxy-xzs7w" in "kube-system" namespace has status "Ready":"True"
	I0210 11:07:27.003128    8716 pod_ready.go:82] duration metric: took 397.0596ms for pod "kube-proxy-xzs7w" in "kube-system" namespace to be "Ready" ...
	I0210 11:07:27.003128    8716 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-335100" in "kube-system" namespace to be "Ready" ...
	I0210 11:07:27.196642    8716 request.go:661] Waited for 193.3443ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-335100
	I0210 11:07:27.196642    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-335100
	I0210 11:07:27.196642    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:27.196642    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:27.196642    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:27.203093    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:07:27.397011    8716 request.go:661] Waited for 193.5149ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.136.99:8443/api/v1/nodes/ha-335100
	I0210 11:07:27.397236    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100
	I0210 11:07:27.397236    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:27.397236    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:27.397236    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:27.402240    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:07:27.402240    8716 pod_ready.go:93] pod "kube-scheduler-ha-335100" in "kube-system" namespace has status "Ready":"True"
	I0210 11:07:27.402240    8716 pod_ready.go:82] duration metric: took 399.0114ms for pod "kube-scheduler-ha-335100" in "kube-system" namespace to be "Ready" ...
	I0210 11:07:27.402240    8716 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-335100-m02" in "kube-system" namespace to be "Ready" ...
	I0210 11:07:27.596401    8716 request.go:661] Waited for 193.6151ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-335100-m02
	I0210 11:07:27.596401    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-335100-m02
	I0210 11:07:27.596401    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:27.596401    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:27.596401    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:27.601772    8716 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 11:07:27.797172    8716 request.go:661] Waited for 195.334ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:07:27.797172    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:07:27.797172    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:27.797172    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:27.797172    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:27.803289    8716 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0210 11:07:27.803612    8716 pod_ready.go:93] pod "kube-scheduler-ha-335100-m02" in "kube-system" namespace has status "Ready":"True"
	I0210 11:07:27.803612    8716 pod_ready.go:82] duration metric: took 401.3673ms for pod "kube-scheduler-ha-335100-m02" in "kube-system" namespace to be "Ready" ...
	I0210 11:07:27.803762    8716 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-335100-m03" in "kube-system" namespace to be "Ready" ...
	I0210 11:07:27.996417    8716 request.go:661] Waited for 192.653ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-335100-m03
	I0210 11:07:27.996417    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-335100-m03
	I0210 11:07:27.996417    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:27.996417    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:27.996417    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:28.002489    8716 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0210 11:07:28.195920    8716 request.go:661] Waited for 192.5439ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.136.99:8443/api/v1/nodes/ha-335100-m03
	I0210 11:07:28.195920    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m03
	I0210 11:07:28.195920    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:28.196264    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:28.196264    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:28.201136    8716 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 11:07:28.202162    8716 pod_ready.go:93] pod "kube-scheduler-ha-335100-m03" in "kube-system" namespace has status "Ready":"True"
	I0210 11:07:28.202245    8716 pod_ready.go:82] duration metric: took 398.4788ms for pod "kube-scheduler-ha-335100-m03" in "kube-system" namespace to be "Ready" ...
	I0210 11:07:28.202245    8716 pod_ready.go:39] duration metric: took 5.2003237s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
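
Each pod_ready check in the phase above reduces to inspecting the Pod's PodReady condition. A sketch under that assumption (kubeconfig path is a placeholder), listing kube-system pods and reporting their readiness:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the Pod's PodReady condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Placeholder kubeconfig path; an assumption for this sketch.
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s ready=%v\n", p.Name, podReady(&p))
	}
}
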
	I0210 11:07:28.202319    8716 api_server.go:52] waiting for apiserver process to appear ...
	I0210 11:07:28.210177    8716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:07:28.235447    8716 api_server.go:72] duration metric: took 28.6369309s to wait for apiserver process to appear ...
	I0210 11:07:28.235447    8716 api_server.go:88] waiting for apiserver healthz status ...
	I0210 11:07:28.235566    8716 api_server.go:253] Checking apiserver healthz at https://172.29.136.99:8443/healthz ...
	I0210 11:07:28.246305    8716 api_server.go:279] https://172.29.136.99:8443/healthz returned 200:
	ok
	I0210 11:07:28.246305    8716 round_trippers.go:470] GET https://172.29.136.99:8443/version
	I0210 11:07:28.246305    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:28.246305    8716 round_trippers.go:480]     Accept: application/json, */*
	I0210 11:07:28.246305    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:28.247969    8716 round_trippers.go:581] Response Status: 200 OK in 1 milliseconds
	I0210 11:07:28.247969    8716 api_server.go:141] control plane version: v1.32.1
	I0210 11:07:28.247969    8716 api_server.go:131] duration metric: took 12.4024ms to wait for apiserver health ...
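
The healthz probe logged above is a plain HTTPS GET that expects a 200 response with body "ok". A minimal sketch of the same request; skipping TLS verification here is a simplification for illustration, where a real client would trust the cluster CA instead:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// InsecureSkipVerify is for the sketch only; production clients
	// should verify the apiserver certificate against the cluster CA.
	client := &http.Client{
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://172.29.136.99:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// Expected output for a healthy apiserver, as in the log: "200 ok".
	fmt.Printf("healthz: %d %s\n", resp.StatusCode, body)
}
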
	I0210 11:07:28.247969    8716 system_pods.go:43] waiting for kube-system pods to appear ...
	I0210 11:07:28.396724    8716 request.go:661] Waited for 148.7531ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods
	I0210 11:07:28.397026    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods
	I0210 11:07:28.397026    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:28.397026    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:28.397026    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:28.403068    8716 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0210 11:07:28.405177    8716 system_pods.go:59] 24 kube-system pods found
	I0210 11:07:28.405177    8716 system_pods.go:61] "coredns-668d6bf9bc-gc5gf" [1a78ccff-3a66-49ce-9c52-79a7799a56e2] Running
	I0210 11:07:28.405398    8716 system_pods.go:61] "coredns-668d6bf9bc-s44gp" [1a2504f9-357f-418a-968c-274b6ab1bbe3] Running
	I0210 11:07:28.405398    8716 system_pods.go:61] "etcd-ha-335100" [da325f74-680c-43cc-9b62-9c660938b7f5] Running
	I0210 11:07:28.405398    8716 system_pods.go:61] "etcd-ha-335100-m02" [96814d40-8c96-4117-b028-2417473031c2] Running
	I0210 11:07:28.405398    8716 system_pods.go:61] "etcd-ha-335100-m03" [86de14e3-89f9-4408-94b1-3881bddea6d4] Running
	I0210 11:07:28.405398    8716 system_pods.go:61] "kindnet-hpmm5" [74943229-cd2a-4365-9da3-f2be8a2e2663] Running
	I0210 11:07:28.405398    8716 system_pods.go:61] "kindnet-lc7hv" [499e3fe2-6d2a-4e55-bc84-153216c1896b] Running
	I0210 11:07:28.405398    8716 system_pods.go:61] "kindnet-slpqn" [6d170b01-e6f2-451d-8427-2b4ef3923739] Running
	I0210 11:07:28.405398    8716 system_pods.go:61] "kube-apiserver-ha-335100" [f37d02ff-6c3f-47d5-b98e-73ec57c5b54d] Running
	I0210 11:07:28.405398    8716 system_pods.go:61] "kube-apiserver-ha-335100-m02" [eac020f2-732f-4fba-bde6-266a2729c5b9] Running
	I0210 11:07:28.405398    8716 system_pods.go:61] "kube-apiserver-ha-335100-m03" [61432db2-f474-42cb-b1a2-fd460d25d68d] Running
	I0210 11:07:28.405398    8716 system_pods.go:61] "kube-controller-manager-ha-335100" [3e128f0d-49f1-42ca-b8da-5ff50b9b35b6] Running
	I0210 11:07:28.405482    8716 system_pods.go:61] "kube-controller-manager-ha-335100-m02" [b5bf933c-7312-433b-aeb7-b299753ff1a6] Running
	I0210 11:07:28.405482    8716 system_pods.go:61] "kube-controller-manager-ha-335100-m03" [7d4b5c47-5a71-44e3-9c45-aec1c1884fd3] Running
	I0210 11:07:28.405482    8716 system_pods.go:61] "kube-proxy-b5xnq" [27ffc58d-1979-40e7-977e-1c5dc46f735a] Running
	I0210 11:07:28.405482    8716 system_pods.go:61] "kube-proxy-b9g27" [b7e5d47d-6677-4d8c-ae0c-b1659c589609] Running
	I0210 11:07:28.405519    8716 system_pods.go:61] "kube-proxy-xzs7w" [5c66a9a8-138b-4b0f-a112-05946040c18b] Running
	I0210 11:07:28.405519    8716 system_pods.go:61] "kube-scheduler-ha-335100" [8daa77b3-ed4d-4905-b83f-d7db23bef734] Running
	I0210 11:07:28.405519    8716 system_pods.go:61] "kube-scheduler-ha-335100-m02" [cb414823-0dd1-4f0f-8b0a-9314acb1324d] Running
	I0210 11:07:28.405519    8716 system_pods.go:61] "kube-scheduler-ha-335100-m03" [92efc5a4-0a3e-48db-95fb-ec22c16729f3] Running
	I0210 11:07:28.405519    8716 system_pods.go:61] "kube-vip-ha-335100" [89e60425-f699-41e3-8034-d871650ab57c] Running
	I0210 11:07:28.405519    8716 system_pods.go:61] "kube-vip-ha-335100-m02" [e21ce143-8caa-4155-ade4-b9b428e82d2b] Running
	I0210 11:07:28.405519    8716 system_pods.go:61] "kube-vip-ha-335100-m03" [0bf9308c-c321-45b3-930b-0129922cc7a5] Running
	I0210 11:07:28.405519    8716 system_pods.go:61] "storage-provisioner" [91b7ca8c-ef3a-4328-83f9-63411c7672cb] Running
	I0210 11:07:28.405519    8716 system_pods.go:74] duration metric: took 157.5481ms to wait for pod list to return data ...
	I0210 11:07:28.405584    8716 default_sa.go:34] waiting for default service account to be created ...
	I0210 11:07:28.595940    8716 request.go:661] Waited for 190.2805ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.136.99:8443/api/v1/namespaces/default/serviceaccounts
	I0210 11:07:28.596274    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/namespaces/default/serviceaccounts
	I0210 11:07:28.596274    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:28.596274    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:28.596274    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:28.602598    8716 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0210 11:07:28.602757    8716 default_sa.go:45] found service account: "default"
	I0210 11:07:28.602757    8716 default_sa.go:55] duration metric: took 197.171ms for default service account to be created ...
	I0210 11:07:28.602825    8716 system_pods.go:116] waiting for k8s-apps to be running ...
	I0210 11:07:28.796839    8716 request.go:661] Waited for 194.0112ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods
	I0210 11:07:28.797151    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods
	I0210 11:07:28.797151    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:28.797151    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:28.797151    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:28.803742    8716 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0210 11:07:28.807080    8716 system_pods.go:86] 24 kube-system pods found
	I0210 11:07:28.807080    8716 system_pods.go:89] "coredns-668d6bf9bc-gc5gf" [1a78ccff-3a66-49ce-9c52-79a7799a56e2] Running
	I0210 11:07:28.807080    8716 system_pods.go:89] "coredns-668d6bf9bc-s44gp" [1a2504f9-357f-418a-968c-274b6ab1bbe3] Running
	I0210 11:07:28.807080    8716 system_pods.go:89] "etcd-ha-335100" [da325f74-680c-43cc-9b62-9c660938b7f5] Running
	I0210 11:07:28.807080    8716 system_pods.go:89] "etcd-ha-335100-m02" [96814d40-8c96-4117-b028-2417473031c2] Running
	I0210 11:07:28.807080    8716 system_pods.go:89] "etcd-ha-335100-m03" [86de14e3-89f9-4408-94b1-3881bddea6d4] Running
	I0210 11:07:28.807183    8716 system_pods.go:89] "kindnet-hpmm5" [74943229-cd2a-4365-9da3-f2be8a2e2663] Running
	I0210 11:07:28.807183    8716 system_pods.go:89] "kindnet-lc7hv" [499e3fe2-6d2a-4e55-bc84-153216c1896b] Running
	I0210 11:07:28.807183    8716 system_pods.go:89] "kindnet-slpqn" [6d170b01-e6f2-451d-8427-2b4ef3923739] Running
	I0210 11:07:28.807183    8716 system_pods.go:89] "kube-apiserver-ha-335100" [f37d02ff-6c3f-47d5-b98e-73ec57c5b54d] Running
	I0210 11:07:28.807183    8716 system_pods.go:89] "kube-apiserver-ha-335100-m02" [eac020f2-732f-4fba-bde6-266a2729c5b9] Running
	I0210 11:07:28.807237    8716 system_pods.go:89] "kube-apiserver-ha-335100-m03" [61432db2-f474-42cb-b1a2-fd460d25d68d] Running
	I0210 11:07:28.807237    8716 system_pods.go:89] "kube-controller-manager-ha-335100" [3e128f0d-49f1-42ca-b8da-5ff50b9b35b6] Running
	I0210 11:07:28.807237    8716 system_pods.go:89] "kube-controller-manager-ha-335100-m02" [b5bf933c-7312-433b-aeb7-b299753ff1a6] Running
	I0210 11:07:28.807237    8716 system_pods.go:89] "kube-controller-manager-ha-335100-m03" [7d4b5c47-5a71-44e3-9c45-aec1c1884fd3] Running
	I0210 11:07:28.807237    8716 system_pods.go:89] "kube-proxy-b5xnq" [27ffc58d-1979-40e7-977e-1c5dc46f735a] Running
	I0210 11:07:28.807237    8716 system_pods.go:89] "kube-proxy-b9g27" [b7e5d47d-6677-4d8c-ae0c-b1659c589609] Running
	I0210 11:07:28.807237    8716 system_pods.go:89] "kube-proxy-xzs7w" [5c66a9a8-138b-4b0f-a112-05946040c18b] Running
	I0210 11:07:28.807237    8716 system_pods.go:89] "kube-scheduler-ha-335100" [8daa77b3-ed4d-4905-b83f-d7db23bef734] Running
	I0210 11:07:28.807237    8716 system_pods.go:89] "kube-scheduler-ha-335100-m02" [cb414823-0dd1-4f0f-8b0a-9314acb1324d] Running
	I0210 11:07:28.807237    8716 system_pods.go:89] "kube-scheduler-ha-335100-m03" [92efc5a4-0a3e-48db-95fb-ec22c16729f3] Running
	I0210 11:07:28.807237    8716 system_pods.go:89] "kube-vip-ha-335100" [89e60425-f699-41e3-8034-d871650ab57c] Running
	I0210 11:07:28.807237    8716 system_pods.go:89] "kube-vip-ha-335100-m02" [e21ce143-8caa-4155-ade4-b9b428e82d2b] Running
	I0210 11:07:28.807317    8716 system_pods.go:89] "kube-vip-ha-335100-m03" [0bf9308c-c321-45b3-930b-0129922cc7a5] Running
	I0210 11:07:28.807317    8716 system_pods.go:89] "storage-provisioner" [91b7ca8c-ef3a-4328-83f9-63411c7672cb] Running
	I0210 11:07:28.807317    8716 system_pods.go:126] duration metric: took 204.4894ms to wait for k8s-apps to be running ...
	I0210 11:07:28.807317    8716 system_svc.go:44] waiting for kubelet service to be running ....
	I0210 11:07:28.814985    8716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0210 11:07:28.841445    8716 system_svc.go:56] duration metric: took 34.1276ms WaitForService to wait for kubelet
	I0210 11:07:28.841445    8716 kubeadm.go:582] duration metric: took 29.2429222s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0210 11:07:28.841445    8716 node_conditions.go:102] verifying NodePressure condition ...
	I0210 11:07:28.996882    8716 request.go:661] Waited for 155.4357ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.136.99:8443/api/v1/nodes
	I0210 11:07:28.996882    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes
	I0210 11:07:28.996882    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:28.996882    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:28.996882    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:29.003974    8716 round_trippers.go:581] Response Status: 200 OK in 7 milliseconds
	I0210 11:07:29.004474    8716 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0210 11:07:29.004533    8716 node_conditions.go:123] node cpu capacity is 2
	I0210 11:07:29.004533    8716 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0210 11:07:29.004533    8716 node_conditions.go:123] node cpu capacity is 2
	I0210 11:07:29.004533    8716 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0210 11:07:29.004597    8716 node_conditions.go:123] node cpu capacity is 2
	I0210 11:07:29.004597    8716 node_conditions.go:105] duration metric: took 163.1497ms to run NodePressure ...
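
The per-node capacity figures above (ephemeral storage 17734596Ki, cpu 2, repeated once per cluster node) are read from each Node's status. A sketch of that read in its assumed minimal form (kubeconfig path is a placeholder):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path; an assumption for this sketch.
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	// Node.Status.Capacity is a ResourceList (map of resource name to quantity).
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
	}
}
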
	I0210 11:07:29.004597    8716 start.go:241] waiting for startup goroutines ...
	I0210 11:07:29.004663    8716 start.go:255] writing updated cluster config ...
	I0210 11:07:29.013641    8716 ssh_runner.go:195] Run: rm -f paused
	I0210 11:07:29.150620    8716 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
	I0210 11:07:29.157621    8716 out.go:177] * Done! kubectl is now configured to use "ha-335100" cluster and "default" namespace by default
	
	
	==> Docker <==
	Feb 10 11:00:16 ha-335100 dockerd[1449]: time="2025-02-10T11:00:16.070149416Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 10 11:00:16 ha-335100 dockerd[1449]: time="2025-02-10T11:00:16.085305335Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 10 11:00:16 ha-335100 dockerd[1449]: time="2025-02-10T11:00:16.085452736Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 10 11:00:16 ha-335100 dockerd[1449]: time="2025-02-10T11:00:16.085540336Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 10 11:00:16 ha-335100 dockerd[1449]: time="2025-02-10T11:00:16.085792138Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 10 11:00:16 ha-335100 cri-dockerd[1342]: time="2025-02-10T11:00:16Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/175d7cc8c7a0e449c14470ef9a388181ea6c727bed7e20ace4912a27116b52cc/resolv.conf as [nameserver 172.29.128.1]"
	Feb 10 11:00:16 ha-335100 cri-dockerd[1342]: time="2025-02-10T11:00:16Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/df6e532050a6d1c5739b6c55f1b50e2f63e5c94856b3b467b7e2c0ee609a672f/resolv.conf as [nameserver 172.29.128.1]"
	Feb 10 11:00:16 ha-335100 dockerd[1449]: time="2025-02-10T11:00:16.720755784Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 10 11:00:16 ha-335100 dockerd[1449]: time="2025-02-10T11:00:16.721004486Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 10 11:00:16 ha-335100 dockerd[1449]: time="2025-02-10T11:00:16.721083387Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 10 11:00:16 ha-335100 dockerd[1449]: time="2025-02-10T11:00:16.721468791Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 10 11:00:16 ha-335100 dockerd[1449]: time="2025-02-10T11:00:16.730166580Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 10 11:00:16 ha-335100 dockerd[1449]: time="2025-02-10T11:00:16.730603384Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 10 11:00:16 ha-335100 dockerd[1449]: time="2025-02-10T11:00:16.730644485Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 10 11:00:16 ha-335100 dockerd[1449]: time="2025-02-10T11:00:16.731087789Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 10 11:08:04 ha-335100 dockerd[1449]: time="2025-02-10T11:08:04.859662395Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 10 11:08:04 ha-335100 dockerd[1449]: time="2025-02-10T11:08:04.859744397Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 10 11:08:04 ha-335100 dockerd[1449]: time="2025-02-10T11:08:04.859757797Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 10 11:08:04 ha-335100 dockerd[1449]: time="2025-02-10T11:08:04.859908100Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 10 11:08:05 ha-335100 cri-dockerd[1342]: time="2025-02-10T11:08:05Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f44cef7fce909445d3740b68b1d8a594c199ae7ab48880497e2640bc09f9ede6/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Feb 10 11:08:06 ha-335100 cri-dockerd[1342]: time="2025-02-10T11:08:06Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Feb 10 11:08:06 ha-335100 dockerd[1449]: time="2025-02-10T11:08:06.735999365Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 10 11:08:06 ha-335100 dockerd[1449]: time="2025-02-10T11:08:06.736067565Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 10 11:08:06 ha-335100 dockerd[1449]: time="2025-02-10T11:08:06.736081465Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 10 11:08:06 ha-335100 dockerd[1449]: time="2025-02-10T11:08:06.736183966Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	dd08a9f3cc944       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   About a minute ago   Running             busybox                   0                   f44cef7fce909       busybox-58667487b6-5px7z
	e7ca26bd041b3       c69fa2e9cbf5f                                                                                         8 minutes ago        Running             coredns                   0                   df6e532050a6d       coredns-668d6bf9bc-s44gp
	4fd9a115fcdaa       c69fa2e9cbf5f                                                                                         8 minutes ago        Running             coredns                   0                   175d7cc8c7a0e       coredns-668d6bf9bc-gc5gf
	0932284881cdb       6e38f40d628db                                                                                         8 minutes ago        Running             storage-provisioner       0                   5f14e7cec489a       storage-provisioner
	22d0df1da0c61       kindest/kindnetd@sha256:56ea59f77258052c4506076525318ffa66817500f68e94a50fdf7d600a280d26              9 minutes ago        Running             kindnet-cni               0                   c77ff26256f03       kindnet-hpmm5
	f1c5561320957       e29f9c7391fd9                                                                                         9 minutes ago        Running             kube-proxy                0                   32145bbdfaf77       kube-proxy-xzs7w
	826b316789d5d       ghcr.io/kube-vip/kube-vip@sha256:717b8bef2758c10042d64ae7949201ef7f243d928fce265b04e488e844bf9528     9 minutes ago        Running             kube-vip                  0                   2767bce183d0e       kube-vip-ha-335100
	5228f69640c2d       019ee182b58e2                                                                                         9 minutes ago        Running             kube-controller-manager   0                   517f5b55c25ac       kube-controller-manager-ha-335100
	25b39e8ce1a49       2b0d6572d062c                                                                                         9 minutes ago        Running             kube-scheduler            0                   b0b115a752128       kube-scheduler-ha-335100
	256becfc62338       95c0bda56fc4d                                                                                         9 minutes ago        Running             kube-apiserver            0                   c99f6d2953c5b       kube-apiserver-ha-335100
	22c1f77dda7a3       a9e7e6b294baf                                                                                         9 minutes ago        Running             etcd                      0                   dcea40235e346       etcd-ha-335100
	
	
	==> coredns [4fd9a115fcda] <==
	[INFO] 10.244.1.2:42435 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000330903s
	[INFO] 10.244.1.2:42112 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000180901s
	[INFO] 10.244.1.2:35959 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000287602s
	[INFO] 10.244.2.2:36308 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000181401s
	[INFO] 10.244.2.2:49545 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000150601s
	[INFO] 10.244.2.2:37477 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000119001s
	[INFO] 10.244.2.2:49576 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000161102s
	[INFO] 10.244.2.2:53438 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000148201s
	[INFO] 10.244.0.4:39119 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000233802s
	[INFO] 10.244.0.4:33066 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000291503s
	[INFO] 10.244.0.4:42248 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000195401s
	[INFO] 10.244.1.2:42786 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000441003s
	[INFO] 10.244.1.2:52892 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000269702s
	[INFO] 10.244.1.2:36279 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000129801s
	[INFO] 10.244.1.2:37975 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000147601s
	[INFO] 10.244.0.4:42121 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000218902s
	[INFO] 10.244.0.4:55956 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000315903s
	[INFO] 10.244.0.4:39006 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000132201s
	[INFO] 10.244.0.4:53772 - 5 "PTR IN 1.128.29.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000151802s
	[INFO] 10.244.1.2:60679 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000204102s
	[INFO] 10.244.1.2:45025 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000127801s
	[INFO] 10.244.1.2:42238 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000158902s
	[INFO] 10.244.1.2:53719 - 5 "PTR IN 1.128.29.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000171801s
	[INFO] 10.244.2.2:55195 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000357103s
	[INFO] 10.244.2.2:40415 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000187202s
	
	
	==> coredns [e7ca26bd041b] <==
	[INFO] 10.244.0.4:59785 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000444503s
	[INFO] 10.244.0.4:56785 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.322386012s
	[INFO] 10.244.0.4:49790 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.060760012s
	[INFO] 10.244.0.4:37219 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.054123172s
	[INFO] 10.244.1.2:44423 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000299503s
	[INFO] 10.244.1.2:58000 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.000173802s
	[INFO] 10.244.2.2:46499 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000212201s
	[INFO] 10.244.2.2:46942 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.000139401s
	[INFO] 10.244.0.4:51006 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000189902s
	[INFO] 10.244.0.4:53888 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000134701s
	[INFO] 10.244.0.4:60890 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000251802s
	[INFO] 10.244.1.2:52016 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.017914651s
	[INFO] 10.244.1.2:42358 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000244602s
	[INFO] 10.244.1.2:35935 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000190502s
	[INFO] 10.244.1.2:60194 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000148801s
	[INFO] 10.244.2.2:36357 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000167801s
	[INFO] 10.244.2.2:56554 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.110643534s
	[INFO] 10.244.2.2:53676 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000097501s
	[INFO] 10.244.0.4:54207 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000265102s
	[INFO] 10.244.2.2:52348 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132401s
	[INFO] 10.244.2.2:55759 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000255302s
	[INFO] 10.244.2.2:33661 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000446004s
	[INFO] 10.244.2.2:58546 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000851s
	[INFO] 10.244.2.2:41756 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000147702s
	[INFO] 10.244.2.2:57463 - 5 "PTR IN 1.128.29.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000312502s
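
	The query lines above follow CoreDNS's log-plugin layout: client address, query id, "TYPE IN name. proto size do bufsize", rcode, flags, response size, and latency. A minimal, illustrative Go sketch (not part of this test suite; the field layout is inferred from the sample lines) that extracts the fields worth scanning for, such as NXDOMAIN answers or slow lookups:

	package main

	import (
		"fmt"
		"regexp"
	)

	// Example line copied from the dump above.
	const line = `[INFO] 10.244.1.2:52016 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.017914651s`

	// Captures: client, qtype, name, rcode, duration.
	var re = regexp.MustCompile(`^\[INFO\] (\S+) - \d+ "(\S+) IN (\S+) \S+ \d+ \S+ \d+" (\S+) \S+ \d+ (\S+)$`)

	func main() {
		m := re.FindStringSubmatch(line)
		if m == nil {
			fmt.Println("unrecognized line")
			return
		}
		fmt.Printf("client=%s qtype=%s name=%s rcode=%s took=%s\n", m[1], m[2], m[3], m[4], m[5])
	}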
	
	
	==> describe nodes <==
	Name:               ha-335100
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-335100
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a597502568cd649748018b4cfeb698a4b8b36160
	                    minikube.k8s.io/name=ha-335100
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_02_10T10_59_46_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 10 Feb 2025 10:59:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-335100
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 10 Feb 2025 11:09:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 10 Feb 2025 11:08:16 +0000   Mon, 10 Feb 2025 10:59:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 10 Feb 2025 11:08:16 +0000   Mon, 10 Feb 2025 10:59:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 10 Feb 2025 11:08:16 +0000   Mon, 10 Feb 2025 10:59:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 10 Feb 2025 11:08:16 +0000   Mon, 10 Feb 2025 11:00:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.29.136.99
	  Hostname:    ha-335100
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 b6b0e4336344490cabbe3838ec21fcfa
	  System UUID:                880d7589-4827-264e-a5a8-fd64393ef394
	  Boot ID:                    4de3dd87-b349-4fcc-a75e-64fd8a7b6e07
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.4.0
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-58667487b6-5px7z             0 (0%)        0 (0%)      0 (0%)           0 (0%)         66s
	  kube-system                 coredns-668d6bf9bc-gc5gf             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m19s
	  kube-system                 coredns-668d6bf9bc-s44gp             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m19s
	  kube-system                 etcd-ha-335100                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m24s
	  kube-system                 kindnet-hpmm5                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m19s
	  kube-system                 kube-apiserver-ha-335100             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m24s
	  kube-system                 kube-controller-manager-ha-335100    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m24s
	  kube-system                 kube-proxy-xzs7w                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m19s
	  kube-system                 kube-scheduler-ha-335100             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m24s
	  kube-system                 kube-vip-ha-335100                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m24s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m11s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m16s  kube-proxy       
	  Normal  Starting                 9m24s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m24s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m24s  kubelet          Node ha-335100 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m24s  kubelet          Node ha-335100 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m24s  kubelet          Node ha-335100 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m20s  node-controller  Node ha-335100 event: Registered Node ha-335100 in Controller
	  Normal  NodeReady                8m55s  kubelet          Node ha-335100 status is now: NodeReady
	  Normal  RegisteredNode           5m49s  node-controller  Node ha-335100 event: Registered Node ha-335100 in Controller
	  Normal  RegisteredNode           2m6s   node-controller  Node ha-335100 event: Registered Node ha-335100 in Controller
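
	The "Allocated resources" figures for ha-335100 are reported against the node's allocatable capacity shown above, so they are easy to sanity-check: 950m CPU requested on 2 allocatable CPUs is 950/2000, printed truncated as 47%, and 290Mi requested of 2164264Ki memory is about 13%. A quick, illustrative check (values copied from this node's tables):

	package main

	import "fmt"

	func main() {
		cpuReqMilli, cpuAllocMilli := 950, 2000   // CPU requests vs allocatable, in millicores
		memReqKi, memAllocKi := 290*1024, 2164264 // memory requests vs allocatable, in Ki
		fmt.Printf("cpu: %d%%\n", cpuReqMilli*100/cpuAllocMilli) // cpu: 47%
		fmt.Printf("mem: %d%%\n", memReqKi*100/memAllocKi)       // mem: 13%
	}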
	
	
	Name:               ha-335100-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-335100-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a597502568cd649748018b4cfeb698a4b8b36160
	                    minikube.k8s.io/name=ha-335100
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_02_10T11_03_13_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 10 Feb 2025 11:03:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-335100-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 10 Feb 2025 11:09:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 10 Feb 2025 11:08:15 +0000   Mon, 10 Feb 2025 11:03:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 10 Feb 2025 11:08:15 +0000   Mon, 10 Feb 2025 11:03:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 10 Feb 2025 11:08:15 +0000   Mon, 10 Feb 2025 11:03:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 10 Feb 2025 11:08:15 +0000   Mon, 10 Feb 2025 11:03:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.29.139.212
	  Hostname:    ha-335100-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 360d1205d30e4b4489e68c8ddd033d40
	  System UUID:                021fcea2-6be6-324c-9cb2-94399cbeee0d
	  Boot ID:                    d76bcd42-d2fc-4cdc-92b9-b38a76650906
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.4.0
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-58667487b6-r8blr                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         66s
	  kube-system                 etcd-ha-335100-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m59s
	  kube-system                 kindnet-slpqn                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m
	  kube-system                 kube-apiserver-ha-335100-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m
	  kube-system                 kube-controller-manager-ha-335100-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m59s
	  kube-system                 kube-proxy-b5xnq                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m
	  kube-system                 kube-scheduler-ha-335100-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m59s
	  kube-system                 kube-vip-ha-335100-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m59s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age              From             Message
	  ----    ------                   ----             ----             -------
	  Normal  Starting                 5m55s            kube-proxy       
	  Normal  RegisteredNode           6m               node-controller  Node ha-335100-m02 event: Registered Node ha-335100-m02 in Controller
	  Normal  NodeHasSufficientMemory  6m (x8 over 6m)  kubelet          Node ha-335100-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m (x8 over 6m)  kubelet          Node ha-335100-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m (x7 over 6m)  kubelet          Node ha-335100-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m49s            node-controller  Node ha-335100-m02 event: Registered Node ha-335100-m02 in Controller
	  Normal  RegisteredNode           2m6s             node-controller  Node ha-335100-m02 event: Registered Node ha-335100-m02 in Controller
	
	
	Name:               ha-335100-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-335100-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a597502568cd649748018b4cfeb698a4b8b36160
	                    minikube.k8s.io/name=ha-335100
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_02_10T11_06_59_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 10 Feb 2025 11:06:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-335100-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 10 Feb 2025 11:09:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 10 Feb 2025 11:08:23 +0000   Mon, 10 Feb 2025 11:06:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 10 Feb 2025 11:08:23 +0000   Mon, 10 Feb 2025 11:06:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 10 Feb 2025 11:08:23 +0000   Mon, 10 Feb 2025 11:06:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 10 Feb 2025 11:08:23 +0000   Mon, 10 Feb 2025 11:07:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.29.143.243
	  Hostname:    ha-335100-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 3932c0d931034d08a153f36a4dde5a97
	  System UUID:                9c6bee06-3b2b-5b49-bb4d-c446daaf4d5e
	  Boot ID:                    bbfe75b5-4310-4a0c-8e67-52ceff178ebb
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.4.0
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-58667487b6-vq9s4                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         66s
	  kube-system                 etcd-ha-335100-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         2m16s
	  kube-system                 kindnet-lc7hv                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m17s
	  kube-system                 kube-apiserver-ha-335100-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m16s
	  kube-system                 kube-controller-manager-ha-335100-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m16s
	  kube-system                 kube-proxy-b9g27                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m17s
	  kube-system                 kube-scheduler-ha-335100-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m16s
	  kube-system                 kube-vip-ha-335100-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m12s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m17s (x8 over 2m17s)  kubelet          Node ha-335100-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m17s (x8 over 2m17s)  kubelet          Node ha-335100-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m17s (x7 over 2m17s)  kubelet          Node ha-335100-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m17s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m15s                  node-controller  Node ha-335100-m03 event: Registered Node ha-335100-m03 in Controller
	  Normal  RegisteredNode           2m14s                  node-controller  Node ha-335100-m03 event: Registered Node ha-335100-m03 in Controller
	  Normal  RegisteredNode           2m6s                   node-controller  Node ha-335100-m03 event: Registered Node ha-335100-m03 in Controller
	
	
	==> dmesg <==
	[  +7.376728] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Feb10 10:58] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.170548] systemd-fstab-generator[650]: Ignoring "noauto" option for root device
	[Feb10 10:59] systemd-fstab-generator[1006]: Ignoring "noauto" option for root device
	[  +0.106671] kauditd_printk_skb: 69 callbacks suppressed
	[  +0.492197] systemd-fstab-generator[1046]: Ignoring "noauto" option for root device
	[  +0.194850] systemd-fstab-generator[1058]: Ignoring "noauto" option for root device
	[  +0.214117] systemd-fstab-generator[1072]: Ignoring "noauto" option for root device
	[  +2.870895] systemd-fstab-generator[1295]: Ignoring "noauto" option for root device
	[  +0.192284] systemd-fstab-generator[1307]: Ignoring "noauto" option for root device
	[  +0.221120] systemd-fstab-generator[1319]: Ignoring "noauto" option for root device
	[  +0.252867] systemd-fstab-generator[1334]: Ignoring "noauto" option for root device
	[ +10.665269] systemd-fstab-generator[1434]: Ignoring "noauto" option for root device
	[  +0.104168] kauditd_printk_skb: 206 callbacks suppressed
	[  +3.748464] systemd-fstab-generator[1699]: Ignoring "noauto" option for root device
	[  +7.461490] systemd-fstab-generator[1857]: Ignoring "noauto" option for root device
	[  +0.103821] kauditd_printk_skb: 74 callbacks suppressed
	[  +5.762085] kauditd_printk_skb: 67 callbacks suppressed
	[  +2.810542] systemd-fstab-generator[2368]: Ignoring "noauto" option for root device
	[  +7.325683] kauditd_printk_skb: 17 callbacks suppressed
	[Feb10 11:00] kauditd_printk_skb: 29 callbacks suppressed
	[Feb10 11:02] hrtimer: interrupt took 8772094 ns
	[Feb10 11:03] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [22c1f77dda7a] <==
	{"level":"warn","ts":"2025-02-10T11:06:57.889915Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"241.370936ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15420207405568972272 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:3660-second id:55ff94ef83c955ef>","response":"size:41"}
	{"level":"info","ts":"2025-02-10T11:06:57.890618Z","caller":"traceutil/trace.go:171","msg":"trace[1557185917] transaction","detail":"{read_only:false; response_revision:1427; number_of_response:1; }","duration":"250.839326ms","start":"2025-02-10T11:06:57.639763Z","end":"2025-02-10T11:06:57.890602Z","steps":["trace[1557185917] 'process raft request'  (duration: 250.430118ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-10T11:06:57.890987Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-02-10T11:06:57.552738Z","time spent":"338.245377ms","remote":"127.0.0.1:58782","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
	{"level":"info","ts":"2025-02-10T11:06:57.891433Z","caller":"traceutil/trace.go:171","msg":"trace[1710075117] transaction","detail":"{read_only:false; response_revision:1426; number_of_response:1; }","duration":"298.94409ms","start":"2025-02-10T11:06:57.592476Z","end":"2025-02-10T11:06:57.891420Z","steps":["trace[1710075117] 'process raft request'  (duration: 297.593763ms)"],"step_count":1}
	{"level":"info","ts":"2025-02-10T11:06:57.942579Z","caller":"traceutil/trace.go:171","msg":"trace[1309646826] linearizableReadLoop","detail":"{readStateIndex:1589; appliedIndex:1592; }","duration":"236.434737ms","start":"2025-02-10T11:06:57.706061Z","end":"2025-02-10T11:06:57.942496Z","steps":["trace[1309646826] 'read index received'  (duration: 236.431437ms)","trace[1309646826] 'applied index is now lower than readState.Index'  (duration: 2.8µs)"],"step_count":2}
	{"level":"warn","ts":"2025-02-10T11:06:57.942839Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"236.756543ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-02-10T11:06:57.942997Z","caller":"traceutil/trace.go:171","msg":"trace[73883024] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1427; }","duration":"236.956248ms","start":"2025-02-10T11:06:57.706029Z","end":"2025-02-10T11:06:57.942986Z","steps":["trace[73883024] 'agreement among raft nodes before linearized reading'  (duration: 236.686243ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-10T11:06:58.005626Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"115.81312ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-02-10T11:06:58.005977Z","caller":"traceutil/trace.go:171","msg":"trace[2012989823] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1427; }","duration":"116.178327ms","start":"2025-02-10T11:06:57.889783Z","end":"2025-02-10T11:06:58.005961Z","steps":["trace[2012989823] 'agreement among raft nodes before linearized reading'  (duration: 113.780679ms)"],"step_count":1}
	{"level":"info","ts":"2025-02-10T11:06:58.006287Z","caller":"traceutil/trace.go:171","msg":"trace[2040053028] transaction","detail":"{read_only:false; response_revision:1428; number_of_response:1; }","duration":"112.662357ms","start":"2025-02-10T11:06:57.893611Z","end":"2025-02-10T11:06:58.006273Z","steps":["trace[2040053028] 'process raft request'  (duration: 109.795799ms)"],"step_count":1}
	{"level":"info","ts":"2025-02-10T11:06:58.433460Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"98d9005d4e04d5ff switched to configuration voters=(4692479509866941232 4803948389299335758 11013834764452156927)"}
	{"level":"info","ts":"2025-02-10T11:06:58.433802Z","caller":"membership/cluster.go:535","msg":"promote member","cluster-id":"447d2b71a2119e86","local-member-id":"98d9005d4e04d5ff"}
	{"level":"info","ts":"2025-02-10T11:06:58.434017Z","caller":"etcdserver/server.go:2018","msg":"applied a configuration change through raft","local-member-id":"98d9005d4e04d5ff","raft-conf-change":"ConfChangeAddNode","raft-conf-change-node-id":"42ab0d9b8f7f324e"}
	{"level":"warn","ts":"2025-02-10T11:07:05.585321Z","caller":"etcdserver/raft.go:426","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"42ab0d9b8f7f324e","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"33.426969ms"}
	{"level":"warn","ts":"2025-02-10T11:07:05.585394Z","caller":"etcdserver/raft.go:426","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"411f09409d6b0b30","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"33.50457ms"}
	{"level":"info","ts":"2025-02-10T11:07:05.628259Z","caller":"traceutil/trace.go:171","msg":"trace[1033008385] linearizableReadLoop","detail":"{readStateIndex:1661; appliedIndex:1661; }","duration":"111.289678ms","start":"2025-02-10T11:07:05.516903Z","end":"2025-02-10T11:07:05.628193Z","steps":["trace[1033008385] 'read index received'  (duration: 111.284078ms)","trace[1033008385] 'applied index is now lower than readState.Index'  (duration: 4µs)"],"step_count":2}
	{"level":"warn","ts":"2025-02-10T11:07:05.628453Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"111.544584ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/ha-335100-m03\" limit:1 ","response":"range_response_count:1 size:4376"}
	{"level":"info","ts":"2025-02-10T11:07:05.628480Z","caller":"traceutil/trace.go:171","msg":"trace[1052223829] range","detail":"{range_begin:/registry/minions/ha-335100-m03; range_end:; response_count:1; response_revision:1489; }","duration":"111.620184ms","start":"2025-02-10T11:07:05.516852Z","end":"2025-02-10T11:07:05.628472Z","steps":["trace[1052223829] 'agreement among raft nodes before linearized reading'  (duration: 111.472782ms)"],"step_count":1}
	{"level":"info","ts":"2025-02-10T11:07:05.628523Z","caller":"traceutil/trace.go:171","msg":"trace[520189642] transaction","detail":"{read_only:false; response_revision:1490; number_of_response:1; }","duration":"255.664304ms","start":"2025-02-10T11:07:05.372845Z","end":"2025-02-10T11:07:05.628509Z","steps":["trace[520189642] 'process raft request'  (duration: 255.553102ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-10T11:07:06.278311Z","caller":"etcdserver/raft.go:426","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"42ab0d9b8f7f324e","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"26.047912ms"}
	{"level":"warn","ts":"2025-02-10T11:07:06.278489Z","caller":"etcdserver/raft.go:426","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"411f09409d6b0b30","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"26.232816ms"}
	{"level":"info","ts":"2025-02-10T11:07:06.279548Z","caller":"traceutil/trace.go:171","msg":"trace[2107327032] linearizableReadLoop","detail":"{readStateIndex:1663; appliedIndex:1663; }","duration":"146.146552ms","start":"2025-02-10T11:07:06.133382Z","end":"2025-02-10T11:07:06.279528Z","steps":["trace[2107327032] 'read index received'  (duration: 146.141952ms)","trace[2107327032] 'applied index is now lower than readState.Index'  (duration: 3.2µs)"],"step_count":2}
	{"level":"warn","ts":"2025-02-10T11:07:06.342588Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"209.179182ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiextensions.k8s.io/customresourcedefinitions/\" range_end:\"/registry/apiextensions.k8s.io/customresourcedefinitions0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-02-10T11:07:06.342670Z","caller":"traceutil/trace.go:171","msg":"trace[246915509] range","detail":"{range_begin:/registry/apiextensions.k8s.io/customresourcedefinitions/; range_end:/registry/apiextensions.k8s.io/customresourcedefinitions0; response_count:0; response_revision:1491; }","duration":"209.302285ms","start":"2025-02-10T11:07:06.133350Z","end":"2025-02-10T11:07:06.342652Z","steps":["trace[246915509] 'agreement among raft nodes before linearized reading'  (duration: 146.294355ms)","trace[246915509] 'count revisions from in-memory index tree'  (duration: 62.861427ms)"],"step_count":2}
	{"level":"info","ts":"2025-02-10T11:07:06.343262Z","caller":"traceutil/trace.go:171","msg":"trace[706225323] transaction","detail":"{read_only:false; response_revision:1492; number_of_response:1; }","duration":"264.025453ms","start":"2025-02-10T11:07:06.079183Z","end":"2025-02-10T11:07:06.343208Z","steps":["trace[706225323] 'process raft request'  (duration: 199.250489ms)","trace[706225323] 'compare'  (duration: 64.54886ms)"],"step_count":2}
	
	
	==> kernel <==
	 11:09:09 up 11 min,  0 users,  load average: 0.79, 0.76, 0.44
	Linux ha-335100 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [22d0df1da0c6] <==
	I0210 11:08:21.713058       1 main.go:324] Node ha-335100-m02 has CIDR [10.244.1.0/24] 
	I0210 11:08:31.715167       1 main.go:297] Handling node with IPs: map[172.29.136.99:{}]
	I0210 11:08:31.715236       1 main.go:301] handling current node
	I0210 11:08:31.715257       1 main.go:297] Handling node with IPs: map[172.29.139.212:{}]
	I0210 11:08:31.715264       1 main.go:324] Node ha-335100-m02 has CIDR [10.244.1.0/24] 
	I0210 11:08:31.715819       1 main.go:297] Handling node with IPs: map[172.29.143.243:{}]
	I0210 11:08:31.715908       1 main.go:324] Node ha-335100-m03 has CIDR [10.244.2.0/24] 
	I0210 11:08:41.716513       1 main.go:297] Handling node with IPs: map[172.29.136.99:{}]
	I0210 11:08:41.716575       1 main.go:301] handling current node
	I0210 11:08:41.716905       1 main.go:297] Handling node with IPs: map[172.29.139.212:{}]
	I0210 11:08:41.716997       1 main.go:324] Node ha-335100-m02 has CIDR [10.244.1.0/24] 
	I0210 11:08:41.717325       1 main.go:297] Handling node with IPs: map[172.29.143.243:{}]
	I0210 11:08:41.717426       1 main.go:324] Node ha-335100-m03 has CIDR [10.244.2.0/24] 
	I0210 11:08:51.716925       1 main.go:297] Handling node with IPs: map[172.29.136.99:{}]
	I0210 11:08:51.717027       1 main.go:301] handling current node
	I0210 11:08:51.717060       1 main.go:297] Handling node with IPs: map[172.29.139.212:{}]
	I0210 11:08:51.717080       1 main.go:324] Node ha-335100-m02 has CIDR [10.244.1.0/24] 
	I0210 11:08:51.717314       1 main.go:297] Handling node with IPs: map[172.29.143.243:{}]
	I0210 11:08:51.717326       1 main.go:324] Node ha-335100-m03 has CIDR [10.244.2.0/24] 
	I0210 11:09:01.709405       1 main.go:297] Handling node with IPs: map[172.29.136.99:{}]
	I0210 11:09:01.709518       1 main.go:301] handling current node
	I0210 11:09:01.709540       1 main.go:297] Handling node with IPs: map[172.29.139.212:{}]
	I0210 11:09:01.709548       1 main.go:324] Node ha-335100-m02 has CIDR [10.244.1.0/24] 
	I0210 11:09:01.710211       1 main.go:297] Handling node with IPs: map[172.29.143.243:{}]
	I0210 11:09:01.710326       1 main.go:324] Node ha-335100-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [256becfc6233] <==
	E0210 11:03:10.009150       1 writers.go:123] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E0210 11:03:10.011523       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E0210 11:03:10.012842       1 writers.go:136] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E0210 11:03:10.016987       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 9.601µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E0210 11:03:10.187074       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="91.71043ms" method="POST" path="/api/v1/namespaces/kube-system/pods" result=null
	E0210 11:06:53.245605       1 writers.go:123] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E0210 11:06:53.245670       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 8.601µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E0210 11:06:53.247107       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E0210 11:06:53.248311       1 writers.go:136] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E0210 11:06:53.250712       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="5.296108ms" method="PATCH" path="/api/v1/namespaces/default/events/ha-335100-m03.1822d4198e00ff44" result=null
	E0210 11:08:11.617837       1 conn.go:339] Error on socket receive: read tcp 172.29.143.254:8443->172.29.128.1:56657: use of closed network connection
	E0210 11:08:12.157552       1 conn.go:339] Error on socket receive: read tcp 172.29.143.254:8443->172.29.128.1:56659: use of closed network connection
	E0210 11:08:13.832598       1 conn.go:339] Error on socket receive: read tcp 172.29.143.254:8443->172.29.128.1:56661: use of closed network connection
	E0210 11:08:14.825804       1 conn.go:339] Error on socket receive: read tcp 172.29.143.254:8443->172.29.128.1:56663: use of closed network connection
	E0210 11:08:15.314083       1 conn.go:339] Error on socket receive: read tcp 172.29.143.254:8443->172.29.128.1:56666: use of closed network connection
	E0210 11:08:15.921583       1 conn.go:339] Error on socket receive: read tcp 172.29.143.254:8443->172.29.128.1:56668: use of closed network connection
	E0210 11:08:16.415611       1 conn.go:339] Error on socket receive: read tcp 172.29.143.254:8443->172.29.128.1:56670: use of closed network connection
	E0210 11:08:16.890132       1 conn.go:339] Error on socket receive: read tcp 172.29.143.254:8443->172.29.128.1:56672: use of closed network connection
	E0210 11:08:17.347465       1 conn.go:339] Error on socket receive: read tcp 172.29.143.254:8443->172.29.128.1:56674: use of closed network connection
	E0210 11:08:18.182983       1 conn.go:339] Error on socket receive: read tcp 172.29.143.254:8443->172.29.128.1:56677: use of closed network connection
	E0210 11:08:28.641180       1 conn.go:339] Error on socket receive: read tcp 172.29.143.254:8443->172.29.128.1:56679: use of closed network connection
	E0210 11:08:29.104109       1 conn.go:339] Error on socket receive: read tcp 172.29.143.254:8443->172.29.128.1:56681: use of closed network connection
	E0210 11:08:39.574461       1 conn.go:339] Error on socket receive: read tcp 172.29.143.254:8443->172.29.128.1:56683: use of closed network connection
	E0210 11:08:40.038689       1 conn.go:339] Error on socket receive: read tcp 172.29.143.254:8443->172.29.128.1:56686: use of closed network connection
	E0210 11:08:50.510550       1 conn.go:339] Error on socket receive: read tcp 172.29.143.254:8443->172.29.128.1:56688: use of closed network connection
	
	
	==> kube-controller-manager [5228f69640c2] <==
	I0210 11:06:59.421840       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-335100-m03"
	I0210 11:06:59.623454       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-335100-m03"
	I0210 11:07:02.683872       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-335100-m03"
	I0210 11:07:03.934211       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-335100-m03"
	I0210 11:07:04.004019       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-335100-m03"
	I0210 11:07:22.675743       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-335100-m03"
	I0210 11:07:22.710346       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-335100-m03"
	I0210 11:07:23.991173       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-335100-m03"
	I0210 11:08:04.050456       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="206.942908ms"
	I0210 11:08:04.228167       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="177.638906ms"
	I0210 11:08:04.661398       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="431.129797ms"
	I0210 11:08:04.713577       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="52.120182ms"
	I0210 11:08:04.714413       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="58.901µs"
	I0210 11:08:04.969385       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="141.573696ms"
	I0210 11:08:04.969579       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="72.302µs"
	I0210 11:08:05.608014       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="85.501µs"
	I0210 11:08:07.193036       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="56.605275ms"
	I0210 11:08:07.193727       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="154.202µs"
	I0210 11:08:07.342076       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="72.243407ms"
	I0210 11:08:07.342410       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="72.101µs"
	I0210 11:08:08.848448       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="50.173622ms"
	I0210 11:08:08.849321       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="35.5µs"
	I0210 11:08:15.754596       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-335100-m02"
	I0210 11:08:16.770092       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-335100"
	I0210 11:08:23.686665       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-335100-m03"
	
	
	==> kube-proxy [f1c556132095] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0210 10:59:52.961920       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0210 10:59:52.974540       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["172.29.136.99"]
	E0210 10:59:52.974683       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0210 10:59:53.036518       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0210 10:59:53.036649       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0210 10:59:53.036679       1 server_linux.go:170] "Using iptables Proxier"
	I0210 10:59:53.040803       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0210 10:59:53.041779       1 server.go:497] "Version info" version="v1.32.1"
	I0210 10:59:53.041810       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0210 10:59:53.043672       1 config.go:199] "Starting service config controller"
	I0210 10:59:53.043823       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0210 10:59:53.043872       1 config.go:105] "Starting endpoint slice config controller"
	I0210 10:59:53.043878       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0210 10:59:53.044673       1 config.go:329] "Starting node config controller"
	I0210 10:59:53.044706       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0210 10:59:53.145005       1 shared_informer.go:320] Caches are synced for node config
	I0210 10:59:53.145058       1 shared_informer.go:320] Caches are synced for service config
	I0210 10:59:53.145075       1 shared_informer.go:320] Caches are synced for endpoint slice config
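
	The truncated "Error cleaning up nftables rules ... Operation not supported" messages at the top of this section indicate the guest kernel has no nf_tables support; kube-proxy only attempts a best-effort nftables cleanup before settling on the backend it actually reports ("Using iptables Proxier"), so these errors appear cosmetic here. A small probe in the same spirit (illustrative only; requires the nft binary and is not minikube code):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// "nft list tables" fails on kernels without nf_tables, much like
		// the cleanup attempt in the log above.
		if out, err := exec.Command("nft", "list", "tables").CombinedOutput(); err != nil {
			fmt.Printf("nftables unavailable (%v): %s", err, out)
			return
		}
		fmt.Println("nftables available")
	}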
	
	
	==> kube-scheduler [25b39e8ce1a4] <==
	W0210 10:59:43.160081       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0210 10:59:43.161546       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0210 10:59:43.163719       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0210 10:59:43.164004       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0210 10:59:43.166439       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	E0210 10:59:43.166532       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0210 10:59:43.185428       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0210 10:59:43.185484       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0210 10:59:43.306774       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0210 10:59:43.306820       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0210 10:59:43.395654       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0210 10:59:43.396103       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0210 10:59:43.425536       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0210 10:59:43.426275       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0210 10:59:43.464601       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0210 10:59:43.464900       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0210 10:59:43.546333       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0210 10:59:43.546379       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0210 10:59:43.571428       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0210 10:59:43.571487       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0210 10:59:43.633107       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0210 10:59:43.633211       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0210 10:59:43.661207       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0210 10:59:43.661369       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0210 10:59:45.218303       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Feb 10 11:04:45 ha-335100 kubelet[2375]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 10 11:04:45 ha-335100 kubelet[2375]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 10 11:05:45 ha-335100 kubelet[2375]: E0210 11:05:45.621374    2375 iptables.go:577] "Could not set up iptables canary" err=<
	Feb 10 11:05:45 ha-335100 kubelet[2375]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Feb 10 11:05:45 ha-335100 kubelet[2375]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 10 11:05:45 ha-335100 kubelet[2375]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 10 11:05:45 ha-335100 kubelet[2375]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 10 11:06:45 ha-335100 kubelet[2375]: E0210 11:06:45.621894    2375 iptables.go:577] "Could not set up iptables canary" err=<
	Feb 10 11:06:45 ha-335100 kubelet[2375]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Feb 10 11:06:45 ha-335100 kubelet[2375]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 10 11:06:45 ha-335100 kubelet[2375]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 10 11:06:45 ha-335100 kubelet[2375]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 10 11:07:45 ha-335100 kubelet[2375]: E0210 11:07:45.623370    2375 iptables.go:577] "Could not set up iptables canary" err=<
	Feb 10 11:07:45 ha-335100 kubelet[2375]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Feb 10 11:07:45 ha-335100 kubelet[2375]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 10 11:07:45 ha-335100 kubelet[2375]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 10 11:07:45 ha-335100 kubelet[2375]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 10 11:08:04 ha-335100 kubelet[2375]: I0210 11:08:04.015974    2375 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-gc5gf" podStartSLOduration=494.011465241 podStartE2EDuration="8m14.011465241s" podCreationTimestamp="2025-02-10 10:59:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-10 11:00:17.471964173 +0000 UTC m=+32.181623717" watchObservedRunningTime="2025-02-10 11:08:04.011465241 +0000 UTC m=+498.721124885"
	Feb 10 11:08:04 ha-335100 kubelet[2375]: I0210 11:08:04.120475    2375 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7p2v\" (UniqueName: \"kubernetes.io/projected/062879fb-cc1a-4d5f-af24-b4bab150a326-kube-api-access-x7p2v\") pod \"busybox-58667487b6-5px7z\" (UID: \"062879fb-cc1a-4d5f-af24-b4bab150a326\") " pod="default/busybox-58667487b6-5px7z"
	Feb 10 11:08:05 ha-335100 kubelet[2375]: I0210 11:08:05.057380    2375 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f44cef7fce909445d3740b68b1d8a594c199ae7ab48880497e2640bc09f9ede6"
	Feb 10 11:08:45 ha-335100 kubelet[2375]: E0210 11:08:45.622273    2375 iptables.go:577] "Could not set up iptables canary" err=<
	Feb 10 11:08:45 ha-335100 kubelet[2375]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Feb 10 11:08:45 ha-335100 kubelet[2375]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 10 11:08:45 ha-335100 kubelet[2375]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 10 11:08:45 ha-335100 kubelet[2375]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
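The post-mortem log above shows two recurring patterns. The kube-scheduler "is forbidden" reflector warnings are a startup race: the scheduler begins its list/watch calls before its RBAC grants are visible, and they stop once "Caches are synced" is logged at 10:59:45. The kubelet's hourly iptables-canary errors mean the guest kernel has no ip6tables nat table loaded ("Table does not exist (do you need to insmod?)"). Both can be checked by hand; the commands below are a diagnostic sketch against this run's profile, not part of the test, and whether the guest kernel ships the ip6table_nat module is an assumption:

    kubectl --context ha-335100 auth can-i list nodes --as=system:kube-scheduler
    out/minikube-windows-amd64.exe -p ha-335100 ssh sudo modprobe ip6table_nat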
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-335100 -n ha-335100
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-335100 -n ha-335100: (11.2552518s)
helpers_test.go:261: (dbg) Run:  kubectl --context ha-335100 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/PingHostFromPods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (65.54s)

TestMultiControlPlane/serial/RestartSecondaryNode (153.05s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-335100 node start m02 -v=7 --alsologtostderr
ha_test.go:422: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-335100 node start m02 -v=7 --alsologtostderr: exit status 1 (1m9.8165456s)

-- stdout --
	* Starting "ha-335100-m02" control-plane node in "ha-335100" cluster
	* Restarting existing hyperv VM for "ha-335100-m02" ...

-- /stdout --
** stderr ** 
	I0210 11:25:40.188644    6468 out.go:345] Setting OutFile to fd 1756 ...
	I0210 11:25:40.277735    6468 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 11:25:40.277735    6468 out.go:358] Setting ErrFile to fd 1484...
	I0210 11:25:40.277808    6468 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 11:25:40.290245    6468 mustload.go:65] Loading cluster: ha-335100
	I0210 11:25:40.291805    6468 config.go:182] Loaded profile config "ha-335100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0210 11:25:40.292071    6468 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100-m02 ).state
	I0210 11:25:42.240329    6468 main.go:141] libmachine: [stdout =====>] : Off
	
	I0210 11:25:42.240329    6468 main.go:141] libmachine: [stderr =====>] : 
	W0210 11:25:42.240329    6468 host.go:58] "ha-335100-m02" host status: Stopped
	I0210 11:25:42.244267    6468 out.go:177] * Starting "ha-335100-m02" control-plane node in "ha-335100" cluster
	I0210 11:25:42.246131    6468 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime docker
	I0210 11:25:42.246131    6468 preload.go:146] Found local preload: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4
	I0210 11:25:42.246131    6468 cache.go:56] Caching tarball of preloaded images
	I0210 11:25:42.246667    6468 preload.go:172] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0210 11:25:42.246905    6468 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on docker
	I0210 11:25:42.246905    6468 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\config.json ...
	I0210 11:25:42.249035    6468 start.go:360] acquireMachinesLock for ha-335100-m02: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0210 11:25:42.249035    6468 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-335100-m02"
	I0210 11:25:42.249035    6468 start.go:96] Skipping create...Using existing machine configuration
	I0210 11:25:42.249035    6468 fix.go:54] fixHost starting: m02
	I0210 11:25:42.249744    6468 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100-m02 ).state
	I0210 11:25:44.221162    6468 main.go:141] libmachine: [stdout =====>] : Off
	
	I0210 11:25:44.221162    6468 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:25:44.221162    6468 fix.go:112] recreateIfNeeded on ha-335100-m02: state=Stopped err=<nil>
	W0210 11:25:44.221162    6468 fix.go:138] unexpected machine state, will restart: <nil>
	I0210 11:25:44.224288    6468 out.go:177] * Restarting existing hyperv VM for "ha-335100-m02" ...
	I0210 11:25:44.226462    6468 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-335100-m02
	I0210 11:25:47.031439    6468 main.go:141] libmachine: [stdout =====>] : 
	I0210 11:25:47.031439    6468 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:25:47.031439    6468 main.go:141] libmachine: Waiting for host to start...
	I0210 11:25:47.031525    6468 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100-m02 ).state
	I0210 11:25:49.128916    6468 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:25:49.128916    6468 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:25:49.128916    6468 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 11:25:51.473444    6468 main.go:141] libmachine: [stdout =====>] : 
	I0210 11:25:51.473444    6468 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:25:52.474745    6468 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100-m02 ).state
	I0210 11:25:54.570393    6468 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:25:54.570393    6468 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:25:54.570493    6468 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 11:25:56.836682    6468 main.go:141] libmachine: [stdout =====>] : 
	I0210 11:25:56.837213    6468 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:25:57.838329    6468 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100-m02 ).state
	I0210 11:25:59.841812    6468 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:25:59.841812    6468 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:25:59.841812    6468 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 11:26:02.195380    6468 main.go:141] libmachine: [stdout =====>] : 
	I0210 11:26:02.195380    6468 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:26:03.195987    6468 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100-m02 ).state
	I0210 11:26:05.291201    6468 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:26:05.291916    6468 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:26:05.292019    6468 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 11:26:07.641175    6468 main.go:141] libmachine: [stdout =====>] : 
	I0210 11:26:07.641175    6468 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:26:08.641712    6468 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100-m02 ).state
	I0210 11:26:10.716609    6468 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:26:10.716609    6468 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:26:10.716876    6468 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 11:26:13.197175    6468 main.go:141] libmachine: [stdout =====>] : 172.29.143.36
	
	I0210 11:26:13.197175    6468 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:26:13.199763    6468 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100-m02 ).state
	I0210 11:26:15.180604    6468 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:26:15.181317    6468 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:26:15.181398    6468 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 11:26:17.553650    6468 main.go:141] libmachine: [stdout =====>] : 172.29.143.36
	
	I0210 11:26:17.553650    6468 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:26:17.553650    6468 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\config.json ...
	I0210 11:26:17.556462    6468 machine.go:93] provisionDockerMachine start ...
	I0210 11:26:17.556535    6468 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100-m02 ).state
	I0210 11:26:19.583093    6468 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:26:19.583093    6468 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:26:19.583093    6468 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 11:26:21.961424    6468 main.go:141] libmachine: [stdout =====>] : 172.29.143.36
	
	I0210 11:26:21.961502    6468 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:26:21.966064    6468 main.go:141] libmachine: Using SSH client type: native
	I0210 11:26:21.966275    6468 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xde6ba0] 0xde96e0 <nil>  [] 0s} 172.29.143.36 22 <nil> <nil>}
	I0210 11:26:21.966275    6468 main.go:141] libmachine: About to run SSH command:
	hostname
	I0210 11:26:22.102029    6468 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0210 11:26:22.102102    6468 buildroot.go:166] provisioning hostname "ha-335100-m02"
	I0210 11:26:22.102248    6468 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100-m02 ).state
	I0210 11:26:24.099637    6468 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:26:24.099637    6468 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:26:24.099637    6468 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 11:26:26.465326    6468 main.go:141] libmachine: [stdout =====>] : 172.29.143.36
	
	I0210 11:26:26.466326    6468 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:26:26.470138    6468 main.go:141] libmachine: Using SSH client type: native
	I0210 11:26:26.470628    6468 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xde6ba0] 0xde96e0 <nil>  [] 0s} 172.29.143.36 22 <nil> <nil>}
	I0210 11:26:26.470628    6468 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-335100-m02 && echo "ha-335100-m02" | sudo tee /etc/hostname
	I0210 11:26:26.637750    6468 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-335100-m02
	
	I0210 11:26:26.637852    6468 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100-m02 ).state
	I0210 11:26:28.621381    6468 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:26:28.621598    6468 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:26:28.621598    6468 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 11:26:30.974556    6468 main.go:141] libmachine: [stdout =====>] : 172.29.143.36
	
	I0210 11:26:30.974556    6468 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:26:30.980811    6468 main.go:141] libmachine: Using SSH client type: native
	I0210 11:26:30.981349    6468 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xde6ba0] 0xde96e0 <nil>  [] 0s} 172.29.143.36 22 <nil> <nil>}
	I0210 11:26:30.981349    6468 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-335100-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-335100-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-335100-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0210 11:26:31.135425    6468 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0210 11:26:31.135425    6468 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0210 11:26:31.135425    6468 buildroot.go:174] setting up certificates
	I0210 11:26:31.135425    6468 provision.go:84] configureAuth start
	I0210 11:26:31.135425    6468 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100-m02 ).state
	I0210 11:26:33.132570    6468 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:26:33.132570    6468 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:26:33.133204    6468 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 11:26:35.504650    6468 main.go:141] libmachine: [stdout =====>] : 172.29.143.36
	
	I0210 11:26:35.504650    6468 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:26:35.504730    6468 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100-m02 ).state
	I0210 11:26:37.443349    6468 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:26:37.443423    6468 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:26:37.443503    6468 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 11:26:39.804594    6468 main.go:141] libmachine: [stdout =====>] : 172.29.143.36
	
	I0210 11:26:39.805625    6468 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:26:39.805748    6468 provision.go:143] copyHostCerts
	I0210 11:26:39.805862    6468 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0210 11:26:39.805862    6468 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0210 11:26:39.805862    6468 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0210 11:26:39.806398    6468 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0210 11:26:39.807091    6468 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0210 11:26:39.807091    6468 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0210 11:26:39.807091    6468 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0210 11:26:39.807739    6468 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0210 11:26:39.808612    6468 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0210 11:26:39.808756    6468 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0210 11:26:39.808819    6468 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0210 11:26:39.809136    6468 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1675 bytes)
	I0210 11:26:39.809849    6468 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-335100-m02 san=[127.0.0.1 172.29.143.36 ha-335100-m02 localhost minikube]
	I0210 11:26:39.922637    6468 provision.go:177] copyRemoteCerts
	I0210 11:26:39.932426    6468 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0210 11:26:39.932426    6468 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100-m02 ).state
	I0210 11:26:41.882860    6468 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:26:41.883009    6468 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:26:41.883009    6468 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 11:26:44.220373    6468 main.go:141] libmachine: [stdout =====>] : 172.29.143.36
	
	I0210 11:26:44.220373    6468 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:26:44.221506    6468 sshutil.go:53] new ssh client: &{IP:172.29.143.36 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-335100-m02\id_rsa Username:docker}
	I0210 11:26:44.335745    6468 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.4032687s)
	I0210 11:26:44.335745    6468 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0210 11:26:44.336371    6468 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0210 11:26:44.384509    6468 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0210 11:26:44.384873    6468 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0210 11:26:44.429793    6468 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0210 11:26:44.430126    6468 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0210 11:26:44.475802    6468 provision.go:87] duration metric: took 13.340226s to configureAuth
	I0210 11:26:44.475802    6468 buildroot.go:189] setting minikube options for container-runtime
	I0210 11:26:44.476736    6468 config.go:182] Loaded profile config "ha-335100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0210 11:26:44.476825    6468 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100-m02 ).state
	I0210 11:26:46.421358    6468 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:26:46.421358    6468 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:26:46.421438    6468 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 11:26:48.765686    6468 main.go:141] libmachine: [stdout =====>] : 172.29.143.36
	
	I0210 11:26:48.765686    6468 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:26:48.770000    6468 main.go:141] libmachine: Using SSH client type: native
	I0210 11:26:48.770460    6468 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xde6ba0] 0xde96e0 <nil>  [] 0s} 172.29.143.36 22 <nil> <nil>}
	I0210 11:26:48.770460    6468 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0210 11:26:48.903151    6468 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0210 11:26:48.903151    6468 buildroot.go:70] root file system type: tmpfs
	I0210 11:26:48.903151    6468 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0210 11:26:48.903151    6468 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100-m02 ).state

** /stderr **
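The stderr above shows where the restart spends its time: libmachine polls Hyper-V for the VM state and adapter IP (empty until 11:26:13, roughly half a minute after Start-VM), then re-provisions the host (hostname, /etc/hosts, certificates, docker unit) until the command exits non-zero at the 1m09s mark, mid-provision; the last logged step is the docker unit update. The same probes can be replayed from an elevated PowerShell to watch the VM's address by hand; these are the exact commands the log shows being executed:

    C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100-m02 ).state
    C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100-m02 ).networkadapters[0]).ipaddresses[0]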
ha_test.go:425: secondary control-plane node start returned an error. args "out/minikube-windows-amd64.exe -p ha-335100 node start m02 -v=7 --alsologtostderr": exit status 1
ha_test.go:430: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-335100 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-335100 status -v=7 --alsologtostderr: context deadline exceeded (0s)
I0210 11:26:49.906615   11764 retry.go:31] will retry after 685.730595ms: context deadline exceeded
ha_test.go:430: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-335100 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-335100 status -v=7 --alsologtostderr: context deadline exceeded (0s)
I0210 11:26:50.593049   11764 retry.go:31] will retry after 1.198566818s: context deadline exceeded
ha_test.go:430: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-335100 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-335100 status -v=7 --alsologtostderr: context deadline exceeded (0s)
I0210 11:26:51.792700   11764 retry.go:31] will retry after 2.043278621s: context deadline exceeded
ha_test.go:430: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-335100 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-335100 status -v=7 --alsologtostderr: context deadline exceeded (451.8µs)
I0210 11:26:53.837075   11764 retry.go:31] will retry after 4.968323253s: context deadline exceeded
ha_test.go:430: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-335100 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-335100 status -v=7 --alsologtostderr: context deadline exceeded (0s)
I0210 11:26:58.806451   11764 retry.go:31] will retry after 7.361233687s: context deadline exceeded
ha_test.go:430: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-335100 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-335100 status -v=7 --alsologtostderr: context deadline exceeded (0s)
I0210 11:27:06.169524   11764 retry.go:31] will retry after 10.577490345s: context deadline exceeded
ha_test.go:430: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-335100 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-335100 status -v=7 --alsologtostderr: context deadline exceeded (0s)
I0210 11:27:16.748330   11764 retry.go:31] will retry after 10.04569811s: context deadline exceeded
ha_test.go:430: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-335100 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-335100 status -v=7 --alsologtostderr: context deadline exceeded (0s)
I0210 11:27:26.794932   11764 retry.go:31] will retry after 14.363274228s: context deadline exceeded
ha_test.go:430: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-335100 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-335100 status -v=7 --alsologtostderr: context deadline exceeded (415µs)
ha_test.go:434: failed to run minikube status. args "out/minikube-windows-amd64.exe -p ha-335100 status -v=7 --alsologtostderr" : context deadline exceeded
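The run of "will retry after …" lines above (0.69s → 1.2s → 2s → 5s → 7.4s → 10.6s → 14.4s) is a jittered exponential backoff: each delay roughly doubles, with randomization so concurrent pollers don't synchronize. A minimal Go sketch of that pattern, with hypothetical names rather than minikube's actual retry.go:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithBackoff retries fn with exponentially growing, jittered delays,
	// giving up once the total elapsed time exceeds maxElapsed.
	func retryWithBackoff(fn func() error, initial, maxElapsed time.Duration) error {
		start := time.Now()
		delay := initial
		for {
			err := fn()
			if err == nil {
				return nil
			}
			if time.Since(start) > maxElapsed {
				return fmt.Errorf("giving up after %s: %w", time.Since(start), err)
			}
			// Jitter into [delay/2, 3*delay/2) so repeated retries spread out.
			jittered := delay/2 + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %s: %v\n", jittered, err)
			time.Sleep(jittered)
			delay *= 2
		}
	}

	func main() {
		attempts := 0
		err := retryWithBackoff(func() error {
			attempts++
			if attempts < 4 {
				return errors.New("context deadline exceeded")
			}
			return nil
		}, 500*time.Millisecond, 30*time.Second)
		fmt.Println("result:", err)
	}

Note that in the failure above the retries cannot help: the surrounding context's deadline has already expired, so every attempt fails immediately with "context deadline exceeded (0s)".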
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-335100 -n ha-335100
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-335100 -n ha-335100: (11.1747825s)
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-335100 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p ha-335100 logs -n 25: (8.0089845s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	| Command |                                                           Args                                                            |  Profile  |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	| ssh     | ha-335100 ssh -n                                                                                                          | ha-335100 | minikube5\jenkins | v1.35.0 | 10 Feb 25 11:19 UTC | 10 Feb 25 11:19 UTC |
	|         | ha-335100-m03 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| cp      | ha-335100 cp ha-335100-m03:/home/docker/cp-test.txt                                                                       | ha-335100 | minikube5\jenkins | v1.35.0 | 10 Feb 25 11:19 UTC | 10 Feb 25 11:20 UTC |
	|         | ha-335100:/home/docker/cp-test_ha-335100-m03_ha-335100.txt                                                                |           |                   |         |                     |                     |
	| ssh     | ha-335100 ssh -n                                                                                                          | ha-335100 | minikube5\jenkins | v1.35.0 | 10 Feb 25 11:20 UTC | 10 Feb 25 11:20 UTC |
	|         | ha-335100-m03 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-335100 ssh -n ha-335100 sudo cat                                                                                       | ha-335100 | minikube5\jenkins | v1.35.0 | 10 Feb 25 11:20 UTC | 10 Feb 25 11:20 UTC |
	|         | /home/docker/cp-test_ha-335100-m03_ha-335100.txt                                                                          |           |                   |         |                     |                     |
	| cp      | ha-335100 cp ha-335100-m03:/home/docker/cp-test.txt                                                                       | ha-335100 | minikube5\jenkins | v1.35.0 | 10 Feb 25 11:20 UTC | 10 Feb 25 11:20 UTC |
	|         | ha-335100-m02:/home/docker/cp-test_ha-335100-m03_ha-335100-m02.txt                                                        |           |                   |         |                     |                     |
	| ssh     | ha-335100 ssh -n                                                                                                          | ha-335100 | minikube5\jenkins | v1.35.0 | 10 Feb 25 11:20 UTC | 10 Feb 25 11:20 UTC |
	|         | ha-335100-m03 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-335100 ssh -n ha-335100-m02 sudo cat                                                                                   | ha-335100 | minikube5\jenkins | v1.35.0 | 10 Feb 25 11:20 UTC | 10 Feb 25 11:21 UTC |
	|         | /home/docker/cp-test_ha-335100-m03_ha-335100-m02.txt                                                                      |           |                   |         |                     |                     |
	| cp      | ha-335100 cp ha-335100-m03:/home/docker/cp-test.txt                                                                       | ha-335100 | minikube5\jenkins | v1.35.0 | 10 Feb 25 11:21 UTC | 10 Feb 25 11:21 UTC |
	|         | ha-335100-m04:/home/docker/cp-test_ha-335100-m03_ha-335100-m04.txt                                                        |           |                   |         |                     |                     |
	| ssh     | ha-335100 ssh -n                                                                                                          | ha-335100 | minikube5\jenkins | v1.35.0 | 10 Feb 25 11:21 UTC | 10 Feb 25 11:21 UTC |
	|         | ha-335100-m03 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-335100 ssh -n ha-335100-m04 sudo cat                                                                                   | ha-335100 | minikube5\jenkins | v1.35.0 | 10 Feb 25 11:21 UTC | 10 Feb 25 11:21 UTC |
	|         | /home/docker/cp-test_ha-335100-m03_ha-335100-m04.txt                                                                      |           |                   |         |                     |                     |
	| cp      | ha-335100 cp testdata\cp-test.txt                                                                                         | ha-335100 | minikube5\jenkins | v1.35.0 | 10 Feb 25 11:21 UTC | 10 Feb 25 11:21 UTC |
	|         | ha-335100-m04:/home/docker/cp-test.txt                                                                                    |           |                   |         |                     |                     |
	| ssh     | ha-335100 ssh -n                                                                                                          | ha-335100 | minikube5\jenkins | v1.35.0 | 10 Feb 25 11:21 UTC | 10 Feb 25 11:21 UTC |
	|         | ha-335100-m04 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| cp      | ha-335100 cp ha-335100-m04:/home/docker/cp-test.txt                                                                       | ha-335100 | minikube5\jenkins | v1.35.0 | 10 Feb 25 11:21 UTC | 10 Feb 25 11:22 UTC |
	|         | C:\Users\jenkins.minikube5\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile3993587717\001\cp-test_ha-335100-m04.txt |           |                   |         |                     |                     |
	| ssh     | ha-335100 ssh -n                                                                                                          | ha-335100 | minikube5\jenkins | v1.35.0 | 10 Feb 25 11:22 UTC | 10 Feb 25 11:22 UTC |
	|         | ha-335100-m04 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| cp      | ha-335100 cp ha-335100-m04:/home/docker/cp-test.txt                                                                       | ha-335100 | minikube5\jenkins | v1.35.0 | 10 Feb 25 11:22 UTC | 10 Feb 25 11:22 UTC |
	|         | ha-335100:/home/docker/cp-test_ha-335100-m04_ha-335100.txt                                                                |           |                   |         |                     |                     |
	| ssh     | ha-335100 ssh -n                                                                                                          | ha-335100 | minikube5\jenkins | v1.35.0 | 10 Feb 25 11:22 UTC | 10 Feb 25 11:22 UTC |
	|         | ha-335100-m04 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-335100 ssh -n ha-335100 sudo cat                                                                                       | ha-335100 | minikube5\jenkins | v1.35.0 | 10 Feb 25 11:22 UTC | 10 Feb 25 11:22 UTC |
	|         | /home/docker/cp-test_ha-335100-m04_ha-335100.txt                                                                          |           |                   |         |                     |                     |
	| cp      | ha-335100 cp ha-335100-m04:/home/docker/cp-test.txt                                                                       | ha-335100 | minikube5\jenkins | v1.35.0 | 10 Feb 25 11:22 UTC | 10 Feb 25 11:23 UTC |
	|         | ha-335100-m02:/home/docker/cp-test_ha-335100-m04_ha-335100-m02.txt                                                        |           |                   |         |                     |                     |
	| ssh     | ha-335100 ssh -n                                                                                                          | ha-335100 | minikube5\jenkins | v1.35.0 | 10 Feb 25 11:23 UTC | 10 Feb 25 11:23 UTC |
	|         | ha-335100-m04 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-335100 ssh -n ha-335100-m02 sudo cat                                                                                   | ha-335100 | minikube5\jenkins | v1.35.0 | 10 Feb 25 11:23 UTC | 10 Feb 25 11:23 UTC |
	|         | /home/docker/cp-test_ha-335100-m04_ha-335100-m02.txt                                                                      |           |                   |         |                     |                     |
	| cp      | ha-335100 cp ha-335100-m04:/home/docker/cp-test.txt                                                                       | ha-335100 | minikube5\jenkins | v1.35.0 | 10 Feb 25 11:23 UTC | 10 Feb 25 11:23 UTC |
	|         | ha-335100-m03:/home/docker/cp-test_ha-335100-m04_ha-335100-m03.txt                                                        |           |                   |         |                     |                     |
	| ssh     | ha-335100 ssh -n                                                                                                          | ha-335100 | minikube5\jenkins | v1.35.0 | 10 Feb 25 11:23 UTC | 10 Feb 25 11:23 UTC |
	|         | ha-335100-m04 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-335100 ssh -n ha-335100-m03 sudo cat                                                                                   | ha-335100 | minikube5\jenkins | v1.35.0 | 10 Feb 25 11:23 UTC | 10 Feb 25 11:23 UTC |
	|         | /home/docker/cp-test_ha-335100-m04_ha-335100-m03.txt                                                                      |           |                   |         |                     |                     |
	| node    | ha-335100 node stop m02 -v=7                                                                                              | ha-335100 | minikube5\jenkins | v1.35.0 | 10 Feb 25 11:23 UTC | 10 Feb 25 11:24 UTC |
	|         | --alsologtostderr                                                                                                         |           |                   |         |                     |                     |
	| node    | ha-335100 node start m02 -v=7                                                                                             | ha-335100 | minikube5\jenkins | v1.35.0 | 10 Feb 25 11:25 UTC |                     |
	|         | --alsologtostderr                                                                                                         |           |                   |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/10 10:56:50
	Running on machine: minikube5
	Binary: Built with gc go1.23.4 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0210 10:56:49.955540    8716 out.go:345] Setting OutFile to fd 1996 ...
	I0210 10:56:50.006508    8716 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 10:56:50.006508    8716 out.go:358] Setting ErrFile to fd 1984...
	I0210 10:56:50.006508    8716 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 10:56:50.024054    8716 out.go:352] Setting JSON to false
	I0210 10:56:50.027021    8716 start.go:129] hostinfo: {"hostname":"minikube5","uptime":186349,"bootTime":1738998660,"procs":187,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.5440 Build 19045.5440","kernelVersion":"10.0.19045.5440 Build 19045.5440","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0210 10:56:50.027478    8716 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0210 10:56:50.035049    8716 out.go:177] * [ha-335100] minikube v1.35.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5440 Build 19045.5440
	I0210 10:56:50.039906    8716 notify.go:220] Checking for updates...
	I0210 10:56:50.039906    8716 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0210 10:56:50.041984    8716 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0210 10:56:50.044508    8716 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0210 10:56:50.046572    8716 out.go:177]   - MINIKUBE_LOCATION=20385
	I0210 10:56:50.047895    8716 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0210 10:56:50.050873    8716 driver.go:394] Setting default libvirt URI to qemu:///system
	I0210 10:56:55.110640    8716 out.go:177] * Using the hyperv driver based on user configuration
	I0210 10:56:55.115381    8716 start.go:297] selected driver: hyperv
	I0210 10:56:55.115381    8716 start.go:901] validating driver "hyperv" against <nil>
	I0210 10:56:55.115381    8716 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0210 10:56:55.158791    8716 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0210 10:56:55.160028    8716 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0210 10:56:55.160028    8716 cni.go:84] Creating CNI manager for ""
	I0210 10:56:55.160028    8716 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0210 10:56:55.160028    8716 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0210 10:56:55.160674    8716 start.go:340] cluster config:
	{Name:ha-335100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:ha-335100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin
:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I0210 10:56:55.160674    8716 iso.go:125] acquiring lock: {Name:mkae681ee414e9275e9685c6bbf5080b17ead976 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0210 10:56:55.167125    8716 out.go:177] * Starting "ha-335100" primary control-plane node in "ha-335100" cluster
	I0210 10:56:55.169736    8716 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime docker
	I0210 10:56:55.169736    8716 preload.go:146] Found local preload: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4
	I0210 10:56:55.169736    8716 cache.go:56] Caching tarball of preloaded images
	I0210 10:56:55.169736    8716 preload.go:172] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0210 10:56:55.170719    8716 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on docker
	I0210 10:56:55.170888    8716 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\config.json ...
	I0210 10:56:55.171297    8716 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\config.json: {Name:mk7fd8b1cba562e1df25fb8b2e8a3cb78306b0bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 10:56:55.171861    8716 start.go:360] acquireMachinesLock for ha-335100: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0210 10:56:55.171861    8716 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-335100"
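	Both the WriteFile acquisition and acquireMachinesLock above take a named lock with a 500ms retry delay and a hard timeout before touching shared state. A rough Go sketch of that acquire-with-retry idea, using a sidecar lockfile as a stand-in (an assumption; the real lock package may work differently):
	
	package main
	
	import (
		"encoding/json"
		"fmt"
		"os"
		"time"
	)
	
	// writeFileLocked guards a config write with a sidecar lockfile, retrying
	// every 500ms (the Delay shown in the log) until the timeout expires.
	func writeFileLocked(path string, v any, timeout time.Duration) error {
		lock := path + ".lock"
		deadline := time.Now().Add(timeout)
		for {
			// O_EXCL makes creation atomic: whoever creates the lockfile owns it.
			f, err := os.OpenFile(lock, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
			if err == nil {
				f.Close()
				break
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out acquiring %s", lock)
			}
			time.Sleep(500 * time.Millisecond)
		}
		defer os.Remove(lock)
		data, err := json.MarshalIndent(v, "", "  ")
		if err != nil {
			return err
		}
		return os.WriteFile(path, data, 0o644)
	}
	
	func main() {
		cfg := map[string]any{"Name": "ha-335100", "Driver": "hyperv"}
		fmt.Println(writeFileLocked("config.json", cfg, time.Minute))
	}
	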
	I0210 10:56:55.172521    8716 start.go:93] Provisioning new machine with config: &{Name:ha-335100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:ha-335100 Namespace:def
ault APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Di
sableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0210 10:56:55.172521    8716 start.go:125] createHost starting for "" (driver="hyperv")
	I0210 10:56:55.175257    8716 out.go:235] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0210 10:56:55.175590    8716 start.go:159] libmachine.API.Create for "ha-335100" (driver="hyperv")
	I0210 10:56:55.175662    8716 client.go:168] LocalClient.Create starting
	I0210 10:56:55.176122    8716 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem
	I0210 10:56:55.176349    8716 main.go:141] libmachine: Decoding PEM data...
	I0210 10:56:55.176373    8716 main.go:141] libmachine: Parsing certificate...
	I0210 10:56:55.176373    8716 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem
	I0210 10:56:55.176373    8716 main.go:141] libmachine: Decoding PEM data...
	I0210 10:56:55.176373    8716 main.go:141] libmachine: Parsing certificate...
	I0210 10:56:55.176373    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0210 10:56:57.116565    8716 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0210 10:56:57.117582    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:56:57.117942    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0210 10:56:58.697687    8716 main.go:141] libmachine: [stdout =====>] : False
	
	I0210 10:56:58.697687    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:56:58.698651    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0210 10:57:00.114245    8716 main.go:141] libmachine: [stdout =====>] : True
	
	I0210 10:57:00.114854    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:57:00.114854    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0210 10:57:03.469545    8716 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0210 10:57:03.469545    8716 main.go:141] libmachine: [stderr =====>] : 
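	Every [executing ==>] / [stdout =====>] pair in this log is libmachine shelling out to powershell.exe with -NoProfile -NonInteractive and, for queries like the switch enumeration above, parsing ConvertTo-Json output. A minimal Go sketch of that pattern (Windows-with-Hyper-V only; the struct fields simply mirror the Select clause and are otherwise assumptions):
	
	package main
	
	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)
	
	// vmSwitch mirrors the fields selected by the Get-VMSwitch query above.
	type vmSwitch struct {
		Id         string
		Name       string
		SwitchType int
	}
	
	func main() {
		cmd := exec.Command(
			`C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
			"-NoProfile", "-NonInteractive",
			`[Console]::OutputEncoding = [Text.Encoding]::UTF8; `+
				`ConvertTo-Json @(Hyper-V\Get-VMSwitch | Select Id, Name, SwitchType)`)
		out, err := cmd.Output()
		if err != nil {
			fmt.Println("powershell failed:", err)
			return
		}
		// @() forces an array even for a single switch, so this always parses as a slice.
		var switches []vmSwitch
		if err := json.Unmarshal(out, &switches); err != nil {
			fmt.Println("bad JSON:", err)
			return
		}
		for _, s := range switches {
			fmt.Printf("switch %q (%s) type=%d\n", s.Name, s.Id, s.SwitchType)
		}
	}
	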
	I0210 10:57:03.471404    8716 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube5/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0210 10:57:03.881780    8716 main.go:141] libmachine: Creating SSH key...
	I0210 10:57:04.097693    8716 main.go:141] libmachine: Creating VM...
	I0210 10:57:04.097693    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0210 10:57:06.679314    8716 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0210 10:57:06.679314    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:57:06.679892    8716 main.go:141] libmachine: Using switch "Default Switch"
	I0210 10:57:06.680149    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0210 10:57:08.327377    8716 main.go:141] libmachine: [stdout =====>] : True
	
	I0210 10:57:08.327377    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:57:08.328509    8716 main.go:141] libmachine: Creating VHD
	I0210 10:57:08.328606    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-335100\fixed.vhd' -SizeBytes 10MB -Fixed
	I0210 10:57:11.873284    8716 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube5
	Path                    : C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-335100\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : A514F38A-CFB8-4A84-B862-9E0C60ED9E44
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0210 10:57:11.873521    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:57:11.873521    8716 main.go:141] libmachine: Writing magic tar header
	I0210 10:57:11.873521    8716 main.go:141] libmachine: Writing SSH key tar header
	I0210 10:57:11.886984    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-335100\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-335100\disk.vhd' -VHDType Dynamic -DeleteSource
	I0210 10:57:14.914284    8716 main.go:141] libmachine: [stdout =====>] : 
	I0210 10:57:14.914284    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:57:14.914689    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-335100\disk.vhd' -SizeBytes 20000MB
	I0210 10:57:17.312226    8716 main.go:141] libmachine: [stdout =====>] : 
	I0210 10:57:17.312226    8716 main.go:141] libmachine: [stderr =====>] : 
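	The three cmdlets above are the driver's disk trick: create a tiny 10MB *fixed* VHD so raw bytes (the "magic tar header" and SSH key written in between) land at a predictable offset, convert it to a dynamic VHD, then resize it to the requested 20000MB, which boot2docker unpacks and grows into on first boot. A hedged Go sketch of driving that sequence (runPS and the target directory are assumptions):
	
	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	// runPS is an assumed helper matching the powershell invocations in the log.
	func runPS(command string) error {
		out, err := exec.Command(
			`C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
			"-NoProfile", "-NonInteractive", command).CombinedOutput()
		if err != nil {
			return fmt.Errorf("powershell: %v: %s", err, out)
		}
		return nil
	}
	
	// createDisk mirrors the fixed->dynamic sequence above. Between steps 1 and 2
	// the real driver writes a tar header plus the SSH key straight into the
	// fixed VHD's data area ("Writing magic tar header"), which only works
	// because a fixed VHD stores its payload at a known offset.
	func createDisk(dir string, sizeMB int) error {
		fixed, disk := dir+`\fixed.vhd`, dir+`\disk.vhd`
		steps := []string{
			fmt.Sprintf(`Hyper-V\New-VHD -Path '%s' -SizeBytes 10MB -Fixed`, fixed),
			fmt.Sprintf(`Hyper-V\Convert-VHD -Path '%s' -DestinationPath '%s' -VHDType Dynamic -DeleteSource`, fixed, disk),
			fmt.Sprintf(`Hyper-V\Resize-VHD -Path '%s' -SizeBytes %dMB`, disk, sizeMB),
		}
		for _, s := range steps {
			if err := runPS(s); err != nil {
				return err
			}
		}
		return nil
	}
	
	func main() {
		// Hypothetical machine directory; the log uses the profile's machines dir.
		fmt.Println(createDisk(`C:\temp\ha-335100`, 20000))
	}
	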
	I0210 10:57:17.312918    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-335100 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-335100' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0210 10:57:20.689908    8716 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-335100 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0210 10:57:20.689908    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:57:20.690640    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-335100 -DynamicMemoryEnabled $false
	I0210 10:57:22.733029    8716 main.go:141] libmachine: [stdout =====>] : 
	I0210 10:57:22.733346    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:57:22.733462    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-335100 -Count 2
	I0210 10:57:24.715450    8716 main.go:141] libmachine: [stdout =====>] : 
	I0210 10:57:24.715450    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:57:24.716089    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-335100 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-335100\boot2docker.iso'
	I0210 10:57:26.981675    8716 main.go:141] libmachine: [stdout =====>] : 
	I0210 10:57:26.981747    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:57:26.981747    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-335100 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-335100\disk.vhd'
	I0210 10:57:29.347156    8716 main.go:141] libmachine: [stdout =====>] : 
	I0210 10:57:29.347389    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:57:29.347389    8716 main.go:141] libmachine: Starting VM...
	I0210 10:57:29.347517    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-335100
	I0210 10:57:32.230804    8716 main.go:141] libmachine: [stdout =====>] : 
	I0210 10:57:32.231308    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:57:32.231361    8716 main.go:141] libmachine: Waiting for host to start...
	I0210 10:57:32.231361    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100 ).state
	I0210 10:57:34.318051    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 10:57:34.318096    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:57:34.318149    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100 ).networkadapters[0]).ipaddresses[0]
	I0210 10:57:36.595079    8716 main.go:141] libmachine: [stdout =====>] : 
	I0210 10:57:36.595079    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:57:37.595923    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100 ).state
	I0210 10:57:39.570719    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 10:57:39.570803    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:57:39.570803    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100 ).networkadapters[0]).ipaddresses[0]
	I0210 10:57:41.849171    8716 main.go:141] libmachine: [stdout =====>] : 
	I0210 10:57:41.849388    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:57:42.849933    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100 ).state
	I0210 10:57:44.889336    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 10:57:44.889336    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:57:44.889336    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100 ).networkadapters[0]).ipaddresses[0]
	I0210 10:57:47.116622    8716 main.go:141] libmachine: [stdout =====>] : 
	I0210 10:57:47.116622    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:57:48.117607    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100 ).state
	I0210 10:57:50.090626    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 10:57:50.090626    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:57:50.091211    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100 ).networkadapters[0]).ipaddresses[0]
	I0210 10:57:52.361772    8716 main.go:141] libmachine: [stdout =====>] : 
	I0210 10:57:52.361772    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:57:53.362207    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100 ).state
	I0210 10:57:55.360465    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 10:57:55.360465    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:57:55.360465    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100 ).networkadapters[0]).ipaddresses[0]
	I0210 10:57:57.763439    8716 main.go:141] libmachine: [stdout =====>] : 172.29.136.99
	
	I0210 10:57:57.763439    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:57:57.763439    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100 ).state
	I0210 10:57:59.763320    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 10:57:59.763320    8716 main.go:141] libmachine: [stderr =====>] : 
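	"Waiting for host to start..." above is a plain poll: confirm the VM state is Running, ask the first network adapter for its first IP address, sleep, and repeat until Hyper-V reports one (172.29.136.99 here, about 25 seconds after Start-VM). A sketch of the same loop (psOut is an assumed helper):
	
	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)
	
	// psOut runs one PowerShell command and returns its trimmed stdout.
	func psOut(command string) (string, error) {
		out, err := exec.Command(
			`C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
			"-NoProfile", "-NonInteractive", command).Output()
		return strings.TrimSpace(string(out)), err
	}
	
	// waitForIP mirrors the loop above: check the VM state, then keep asking
	// its first network adapter for an address until one shows up.
	func waitForIP(vm string, timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			state, _ := psOut(fmt.Sprintf(`( Hyper-V\Get-VM %s ).state`, vm))
			if state == "Running" {
				ip, err := psOut(fmt.Sprintf(`(( Hyper-V\Get-VM %s ).networkadapters[0]).ipaddresses[0]`, vm))
				if err == nil && ip != "" {
					return ip, nil
				}
			}
			time.Sleep(time.Second) // the log shows roughly one probe cycle per idle second
		}
		return "", fmt.Errorf("timed out waiting for %s to get an IP", vm)
	}
	
	func main() {
		fmt.Println(waitForIP("ha-335100", 5*time.Minute))
	}
	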
	I0210 10:57:59.763593    8716 machine.go:93] provisionDockerMachine start ...
	I0210 10:57:59.763696    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100 ).state
	I0210 10:58:01.754336    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 10:58:01.755200    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:58:01.755282    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100 ).networkadapters[0]).ipaddresses[0]
	I0210 10:58:04.135002    8716 main.go:141] libmachine: [stdout =====>] : 172.29.136.99
	
	I0210 10:58:04.135987    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:58:04.142090    8716 main.go:141] libmachine: Using SSH client type: native
	I0210 10:58:04.159249    8716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xde6ba0] 0xde96e0 <nil>  [] 0s} 172.29.136.99 22 <nil> <nil>}
	I0210 10:58:04.159249    8716 main.go:141] libmachine: About to run SSH command:
	hostname
	I0210 10:58:04.305556    8716 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0210 10:58:04.305556    8716 buildroot.go:166] provisioning hostname "ha-335100"
	I0210 10:58:04.305556    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100 ).state
	I0210 10:58:06.268174    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 10:58:06.268174    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:58:06.268921    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100 ).networkadapters[0]).ipaddresses[0]
	I0210 10:58:08.612126    8716 main.go:141] libmachine: [stdout =====>] : 172.29.136.99
	
	I0210 10:58:08.612370    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:58:08.619254    8716 main.go:141] libmachine: Using SSH client type: native
	I0210 10:58:08.619980    8716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xde6ba0] 0xde96e0 <nil>  [] 0s} 172.29.136.99 22 <nil> <nil>}
	I0210 10:58:08.619980    8716 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-335100 && echo "ha-335100" | sudo tee /etc/hostname
	I0210 10:58:08.782655    8716 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-335100
	
	I0210 10:58:08.782750    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100 ).state
	I0210 10:58:10.722065    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 10:58:10.722065    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:58:10.722065    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100 ).networkadapters[0]).ipaddresses[0]
	I0210 10:58:13.065985    8716 main.go:141] libmachine: [stdout =====>] : 172.29.136.99
	
	I0210 10:58:13.066493    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:58:13.069979    8716 main.go:141] libmachine: Using SSH client type: native
	I0210 10:58:13.069979    8716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xde6ba0] 0xde96e0 <nil>  [] 0s} 172.29.136.99 22 <nil> <nil>}
	I0210 10:58:13.069979    8716 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-335100' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-335100/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-335100' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0210 10:58:13.229041    8716 main.go:141] libmachine: SSH cmd err, output: <nil>: 
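	Hostname provisioning is two idempotent shell commands over SSH: write the name to /etc/hostname, then make sure /etc/hosts carries a 127.0.1.1 entry for it. A simplified Go sketch using golang.org/x/crypto/ssh (the key path and the collapsed /etc/hosts one-liner are assumptions; the real script above also rewrites an existing 127.0.1.1 line with sed):
	
	package main
	
	import (
		"fmt"
		"os"
	
		"golang.org/x/crypto/ssh"
	)
	
	// runSSH executes one command on the VM the way the "About to run SSH
	// command" lines above do; host-key checking is skipped for the fresh VM.
	func runSSH(addr, keyPath, cmd string) (string, error) {
		key, err := os.ReadFile(keyPath)
		if err != nil {
			return "", err
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			return "", err
		}
		client, err := ssh.Dial("tcp", addr, &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(),
		})
		if err != nil {
			return "", err
		}
		defer client.Close()
		session, err := client.NewSession()
		if err != nil {
			return "", err
		}
		defer session.Close()
		out, err := session.CombinedOutput(cmd)
		return string(out), err
	}
	
	func main() {
		host := "ha-335100"
		cmds := []string{
			fmt.Sprintf(`sudo hostname %[1]s && echo "%[1]s" | sudo tee /etc/hostname`, host),
			// Simplified guard; the real script also rewrites an existing 127.0.1.1 line.
			fmt.Sprintf(`grep -q '%[1]s' /etc/hosts || echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts`, host),
		}
		for _, c := range cmds {
			out, err := runSSH("172.29.136.99:22", `C:\hypothetical\id_rsa`, c)
			fmt.Println(out, err)
		}
	}
	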
	I0210 10:58:13.229103    8716 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0210 10:58:13.229143    8716 buildroot.go:174] setting up certificates
	I0210 10:58:13.229143    8716 provision.go:84] configureAuth start
	I0210 10:58:13.229219    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100 ).state
	I0210 10:58:15.201499    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 10:58:15.201499    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:58:15.202522    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100 ).networkadapters[0]).ipaddresses[0]
	I0210 10:58:17.525224    8716 main.go:141] libmachine: [stdout =====>] : 172.29.136.99
	
	I0210 10:58:17.525224    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:58:17.525224    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100 ).state
	I0210 10:58:19.535277    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 10:58:19.535677    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:58:19.535731    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100 ).networkadapters[0]).ipaddresses[0]
	I0210 10:58:21.955772    8716 main.go:141] libmachine: [stdout =====>] : 172.29.136.99
	
	I0210 10:58:21.956091    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:58:21.956091    8716 provision.go:143] copyHostCerts
	I0210 10:58:21.956091    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0210 10:58:21.956091    8716 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0210 10:58:21.956091    8716 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0210 10:58:21.956744    8716 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0210 10:58:21.957356    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0210 10:58:21.957912    8716 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0210 10:58:21.957912    8716 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0210 10:58:21.957912    8716 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0210 10:58:21.959192    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0210 10:58:21.959192    8716 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0210 10:58:21.959192    8716 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0210 10:58:21.959192    8716 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1675 bytes)
	I0210 10:58:21.960468    8716 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-335100 san=[127.0.0.1 172.29.136.99 ha-335100 localhost minikube]
	I0210 10:58:22.168319    8716 provision.go:177] copyRemoteCerts
	I0210 10:58:22.176962    8716 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0210 10:58:22.177039    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100 ).state
	I0210 10:58:24.217624    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 10:58:24.218257    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:58:24.218291    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100 ).networkadapters[0]).ipaddresses[0]
	I0210 10:58:26.545431    8716 main.go:141] libmachine: [stdout =====>] : 172.29.136.99
	
	I0210 10:58:26.545431    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:58:26.546605    8716 sshutil.go:53] new ssh client: &{IP:172.29.136.99 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-335100\id_rsa Username:docker}
	I0210 10:58:26.655565    8716 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.4784748s)
	I0210 10:58:26.655565    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0210 10:58:26.655565    8716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0210 10:58:26.708753    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0210 10:58:26.709156    8716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1200 bytes)
	I0210 10:58:26.762611    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0210 10:58:26.762783    8716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0210 10:58:26.806257    8716 provision.go:87] duration metric: took 13.576959s to configureAuth
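	The "generating server cert" step inside configureAuth produces a CA-signed server certificate whose SANs cover every name and address the Docker daemon will be reached on (127.0.0.1, 172.29.136.99, ha-335100, localhost, minikube). A sketch with crypto/x509, generating a throwaway CA in place of ca.pem/ca-key.pem and eliding error handling:
	
	package main
	
	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)
	
	func main() {
		// Throwaway CA standing in for ca.pem/ca-key.pem.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now().Add(-time.Hour),
			NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)
	
		// Server cert with the same SAN set the log reports for ha-335100.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-335100"}},
			NotBefore:    time.Now().Add(-time.Hour),
			NotAfter:     time.Now().Add(26280 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"ha-335100", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.29.136.99")},
		}
		der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
	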
	I0210 10:58:26.806257    8716 buildroot.go:189] setting minikube options for container-runtime
	I0210 10:58:26.806257    8716 config.go:182] Loaded profile config "ha-335100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0210 10:58:26.806257    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100 ).state
	I0210 10:58:28.790480    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 10:58:28.791202    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:58:28.791202    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100 ).networkadapters[0]).ipaddresses[0]
	I0210 10:58:31.141626    8716 main.go:141] libmachine: [stdout =====>] : 172.29.136.99
	
	I0210 10:58:31.141626    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:58:31.146135    8716 main.go:141] libmachine: Using SSH client type: native
	I0210 10:58:31.146593    8716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xde6ba0] 0xde96e0 <nil>  [] 0s} 172.29.136.99 22 <nil> <nil>}
	I0210 10:58:31.146593    8716 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0210 10:58:31.289137    8716 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0210 10:58:31.289137    8716 buildroot.go:70] root file system type: tmpfs
	I0210 10:58:31.289675    8716 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0210 10:58:31.289765    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100 ).state
	I0210 10:58:33.272853    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 10:58:33.272853    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:58:33.273190    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100 ).networkadapters[0]).ipaddresses[0]
	I0210 10:58:35.641959    8716 main.go:141] libmachine: [stdout =====>] : 172.29.136.99
	
	I0210 10:58:35.641959    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:58:35.646663    8716 main.go:141] libmachine: Using SSH client type: native
	I0210 10:58:35.647281    8716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xde6ba0] 0xde96e0 <nil>  [] 0s} 172.29.136.99 22 <nil> <nil>}
	I0210 10:58:35.647281    8716 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0210 10:58:35.808375    8716 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0210 10:58:35.808375    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100 ).state
	I0210 10:58:37.757178    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 10:58:37.757178    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:58:37.757178    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100 ).networkadapters[0]).ipaddresses[0]
	I0210 10:58:40.099084    8716 main.go:141] libmachine: [stdout =====>] : 172.29.136.99
	
	I0210 10:58:40.099084    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:58:40.105034    8716 main.go:141] libmachine: Using SSH client type: native
	I0210 10:58:40.105645    8716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xde6ba0] 0xde96e0 <nil>  [] 0s} 172.29.136.99 22 <nil> <nil>}
	I0210 10:58:40.105645    8716 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0210 10:58:42.309359    8716 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
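The one-liner above installs the unit idempotently: `diff` exits non-zero both when the rendered file differs and, as here on first boot, when the old unit is missing entirely (hence the "can't stat" message), and only then is the new file moved into place and docker re-enabled and restarted. A hedged Go sketch of composing that command, where the `unit` constant is the only assumption:

```go
package main

import "fmt"

func main() {
	const unit = "/lib/systemd/system/docker.service"
	// diff fails when files differ or the old unit is absent; either way
	// the new unit is swapped in and the daemon restarted.
	cmd := fmt.Sprintf(
		"sudo diff -u %[1]s %[1]s.new || { sudo mv %[1]s.new %[1]s; "+
			"sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && "+
			"sudo systemctl -f restart docker; }", unit)
	fmt.Println(cmd)
}
```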
	I0210 10:58:42.309359    8716 machine.go:96] duration metric: took 42.5452806s to provisionDockerMachine
	I0210 10:58:42.309359    8716 client.go:171] duration metric: took 1m47.1324811s to LocalClient.Create
	I0210 10:58:42.309359    8716 start.go:167] duration metric: took 1m47.1325532s to libmachine.API.Create "ha-335100"
	I0210 10:58:42.309879    8716 start.go:293] postStartSetup for "ha-335100" (driver="hyperv")
	I0210 10:58:42.309879    8716 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0210 10:58:42.318476    8716 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0210 10:58:42.318476    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100 ).state
	I0210 10:58:44.285506    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 10:58:44.285506    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:58:44.286286    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100 ).networkadapters[0]).ipaddresses[0]
	I0210 10:58:46.621176    8716 main.go:141] libmachine: [stdout =====>] : 172.29.136.99
	
	I0210 10:58:46.621176    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:58:46.622299    8716 sshutil.go:53] new ssh client: &{IP:172.29.136.99 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-335100\id_rsa Username:docker}
	I0210 10:58:46.736316    8716 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.4177896s)
	I0210 10:58:46.745414    8716 ssh_runner.go:195] Run: cat /etc/os-release
	I0210 10:58:46.752589    8716 info.go:137] Remote host: Buildroot 2023.02.9
	I0210 10:58:46.752589    8716 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0210 10:58:46.753118    8716 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0210 10:58:46.753271    8716 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\117642.pem -> 117642.pem in /etc/ssl/certs
	I0210 10:58:46.753271    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\117642.pem -> /etc/ssl/certs/117642.pem
	I0210 10:58:46.761722    8716 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0210 10:58:46.779300    8716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\117642.pem --> /etc/ssl/certs/117642.pem (1708 bytes)
	I0210 10:58:46.823526    8716 start.go:296] duration metric: took 4.5135954s for postStartSetup
	I0210 10:58:46.825929    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100 ).state
	I0210 10:58:48.795567    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 10:58:48.795567    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:58:48.796625    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100 ).networkadapters[0]).ipaddresses[0]
	I0210 10:58:51.153437    8716 main.go:141] libmachine: [stdout =====>] : 172.29.136.99
	
	I0210 10:58:51.153437    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:58:51.153437    8716 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\config.json ...
	I0210 10:58:51.155836    8716 start.go:128] duration metric: took 1m55.9819985s to createHost
	I0210 10:58:51.156415    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100 ).state
	I0210 10:58:53.144379    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 10:58:53.144379    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:58:53.144461    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100 ).networkadapters[0]).ipaddresses[0]
	I0210 10:58:55.503343    8716 main.go:141] libmachine: [stdout =====>] : 172.29.136.99
	
	I0210 10:58:55.504211    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:58:55.509406    8716 main.go:141] libmachine: Using SSH client type: native
	I0210 10:58:55.510018    8716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xde6ba0] 0xde96e0 <nil>  [] 0s} 172.29.136.99 22 <nil> <nil>}
	I0210 10:58:55.510018    8716 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0210 10:58:55.644975    8716 main.go:141] libmachine: SSH cmd err, output: <nil>: 1739185135.659116284
	
	I0210 10:58:55.645087    8716 fix.go:216] guest clock: 1739185135.659116284
	I0210 10:58:55.645087    8716 fix.go:229] Guest: 2025-02-10 10:58:55.659116284 +0000 UTC Remote: 2025-02-10 10:58:51.1563566 +0000 UTC m=+121.275254101 (delta=4.502759684s)
	I0210 10:58:55.645195    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100 ).state
	I0210 10:58:57.621790    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 10:58:57.621790    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:58:57.621790    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100 ).networkadapters[0]).ipaddresses[0]
	I0210 10:59:00.038209    8716 main.go:141] libmachine: [stdout =====>] : 172.29.136.99
	
	I0210 10:59:00.038209    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:59:00.043112    8716 main.go:141] libmachine: Using SSH client type: native
	I0210 10:59:00.043526    8716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xde6ba0] 0xde96e0 <nil>  [] 0s} 172.29.136.99 22 <nil> <nil>}
	I0210 10:59:00.043526    8716 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1739185135
	I0210 10:59:00.194828    8716 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Feb 10 10:58:55 UTC 2025
	
	I0210 10:59:00.194943    8716 fix.go:236] clock set: Mon Feb 10 10:58:55 UTC 2025 (err=<nil>)
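Clock sync works by reading the guest's `date +%s.%N`, comparing it against the host-side timestamp (a 4.5s delta here), and resetting the guest with `sudo date -s @<epoch>`, truncated to whole seconds. A small Go sketch using the exact value from the log; `parseGuestClock` is a hypothetical helper, not a minikube function:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock parses `date +%s.%N` output into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1739185135.659116284") // value from the log
	if err != nil {
		panic(err)
	}
	fmt.Println("guest clock:", guest.UTC())
	// The guest is reset to whole seconds, matching the log's command.
	fmt.Printf("sudo date -s @%d\n", guest.Unix())
}
```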
	I0210 10:59:00.194943    8716 start.go:83] releasing machines lock for "ha-335100", held for 2m5.0211267s
	I0210 10:59:00.195132    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100 ).state
	I0210 10:59:02.189954    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 10:59:02.190221    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:59:02.190292    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100 ).networkadapters[0]).ipaddresses[0]
	I0210 10:59:04.625803    8716 main.go:141] libmachine: [stdout =====>] : 172.29.136.99
	
	I0210 10:59:04.626817    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:59:04.630378    8716 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0210 10:59:04.630570    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100 ).state
	I0210 10:59:04.641782    8716 ssh_runner.go:195] Run: cat /version.json
	I0210 10:59:04.641842    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100 ).state
	I0210 10:59:06.689031    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 10:59:06.689860    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:59:06.689922    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100 ).networkadapters[0]).ipaddresses[0]
	I0210 10:59:06.690696    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 10:59:06.690696    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:59:06.690696    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100 ).networkadapters[0]).ipaddresses[0]
	I0210 10:59:09.162719    8716 main.go:141] libmachine: [stdout =====>] : 172.29.136.99
	
	I0210 10:59:09.162719    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:59:09.163624    8716 sshutil.go:53] new ssh client: &{IP:172.29.136.99 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-335100\id_rsa Username:docker}
	I0210 10:59:09.182845    8716 main.go:141] libmachine: [stdout =====>] : 172.29.136.99
	
	I0210 10:59:09.182845    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:59:09.183060    8716 sshutil.go:53] new ssh client: &{IP:172.29.136.99 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-335100\id_rsa Username:docker}
	I0210 10:59:09.256683    8716 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.6261945s)
	W0210 10:59:09.256683    8716 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
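This exit status 127 is the root cause of the registry-connectivity warning emitted a few lines below: the host-side binary name `curl.exe` is passed into the Linux guest over SSH, where no such command exists. A sketch of the distinction (the `curlName` helper is hypothetical, used only to illustrate the host/guest mismatch):

```go
package main

import (
	"fmt"
	"runtime"
)

// curlName returns the curl binary name for the OS the command will actually
// run on. Using the host's notion (curl.exe on Windows) inside the Buildroot
// guest yields the "command not found" / exit 127 seen above.
func curlName(goos string) string {
	if goos == "windows" {
		return "curl.exe"
	}
	return "curl"
}

func main() {
	fmt.Println(curlName(runtime.GOOS)) // what the host would run locally
	fmt.Println(curlName("linux"))      // what the guest actually needs
}
```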
	I0210 10:59:09.289896    8716 ssh_runner.go:235] Completed: cat /version.json: (4.6479418s)
	I0210 10:59:09.298633    8716 ssh_runner.go:195] Run: systemctl --version
	I0210 10:59:09.316283    8716 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0210 10:59:09.326007    8716 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0210 10:59:09.333609    8716 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0210 10:59:09.364481    8716 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0210 10:59:09.364481    8716 start.go:495] detecting cgroup driver to use...
	I0210 10:59:09.364481    8716 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W0210 10:59:09.379663    8716 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0210 10:59:09.379663    8716 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0210 10:59:09.410962    8716 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0210 10:59:09.438456    8716 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0210 10:59:09.456894    8716 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0210 10:59:09.467067    8716 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0210 10:59:09.494400    8716 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0210 10:59:09.522660    8716 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0210 10:59:09.555431    8716 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0210 10:59:09.591737    8716 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0210 10:59:09.626140    8716 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0210 10:59:09.652135    8716 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0210 10:59:09.680515    8716 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
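The containerd config is rewritten in place with a series of `sed` substitutions rather than templating the whole file; each expression preserves the file's original indentation via a capture group. The same edit expressed in Go (a sketch, not minikube's implementation):

```go
package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
`
	// Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	// The (?m) flag makes ^ and $ match per line; ${1} re-emits the indent.
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	fmt.Print(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
}
```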
	I0210 10:59:09.709577    8716 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0210 10:59:09.727534    8716 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0210 10:59:09.736588    8716 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0210 10:59:09.766002    8716 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0210 10:59:09.790776    8716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 10:59:09.990519    8716 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0210 10:59:10.023672    8716 start.go:495] detecting cgroup driver to use...
	I0210 10:59:10.033966    8716 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0210 10:59:10.067941    8716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0210 10:59:10.097652    8716 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0210 10:59:10.130951    8716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0210 10:59:10.163679    8716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0210 10:59:10.195416    8716 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0210 10:59:10.257619    8716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0210 10:59:10.281530    8716 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0210 10:59:10.325653    8716 ssh_runner.go:195] Run: which cri-dockerd
	I0210 10:59:10.340042    8716 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0210 10:59:10.357964    8716 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0210 10:59:10.397324    8716 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0210 10:59:10.581403    8716 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0210 10:59:10.766445    8716 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0210 10:59:10.766445    8716 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0210 10:59:10.808095    8716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 10:59:10.991998    8716 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0210 10:59:13.584046    8716 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5920187s)
	I0210 10:59:13.594010    8716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0210 10:59:13.628419    8716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0210 10:59:13.663030    8716 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0210 10:59:13.866975    8716 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0210 10:59:14.076058    8716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 10:59:14.274741    8716 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0210 10:59:14.312741    8716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0210 10:59:14.345477    8716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 10:59:14.533023    8716 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0210 10:59:14.639593    8716 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0210 10:59:14.652087    8716 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0210 10:59:14.660866    8716 start.go:563] Will wait 60s for crictl version
	I0210 10:59:14.669372    8716 ssh_runner.go:195] Run: which crictl
	I0210 10:59:14.683464    8716 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0210 10:59:14.733295    8716 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.4.0
	RuntimeApiVersion:  v1
	I0210 10:59:14.741350    8716 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0210 10:59:14.788195    8716 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0210 10:59:14.825672    8716 out.go:235] * Preparing Kubernetes v1.32.1 on Docker 27.4.0 ...
	I0210 10:59:14.825827    8716 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0210 10:59:14.830176    8716 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0210 10:59:14.830176    8716 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0210 10:59:14.830176    8716 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0210 10:59:14.830176    8716 ip.go:211] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:8b:81:3d Flags:up|broadcast|multicast|running}
	I0210 10:59:14.833774    8716 ip.go:214] interface addr: fe80::b840:883f:e0df:bdfd/64
	I0210 10:59:14.833774    8716 ip.go:214] interface addr: 172.29.128.1/20
	I0210 10:59:14.841599    8716 ssh_runner.go:195] Run: grep 172.29.128.1	host.minikube.internal$ /etc/hosts
	I0210 10:59:14.849025    8716 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.29.128.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
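The `/etc/hosts` update is an upsert: any existing line ending in the tab-separated host name is filtered out, a fresh entry is appended, and the result is copied back over the file. The same pattern is reused later to map control-plane.minikube.internal to the HA VIP. A sketch with `upsertHost` as a hypothetical helper:

```go
package main

import (
	"fmt"
	"strings"
)

// upsertHost drops any line ending in "\t<name>" and appends a fresh entry,
// mirroring the bash pipeline in the log above.
func upsertHost(hosts, ip, name string) string {
	var out []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // stale entry: filtered out, like grep -v
		}
		out = append(out, line)
	}
	out = append(out, ip+"\t"+name)
	return strings.Join(out, "\n") + "\n"
}

func main() {
	hosts := "127.0.0.1\tlocalhost\n172.29.128.0\thost.minikube.internal\n"
	fmt.Print(upsertHost(hosts, "172.29.128.1", "host.minikube.internal"))
}
```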
	I0210 10:59:14.881251    8716 kubeadm.go:883] updating cluster {Name:ha-335100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:ha-335100 Namespace:default APIServerHAVIP:172.29.143.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.29.136.99 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0210 10:59:14.881251    8716 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime docker
	I0210 10:59:14.888252    8716 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0210 10:59:14.912763    8716 docker.go:689] Got preloaded images: 
	I0210 10:59:14.912848    8716 docker.go:695] registry.k8s.io/kube-apiserver:v1.32.1 wasn't preloaded
	I0210 10:59:14.922086    8716 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0210 10:59:14.948910    8716 ssh_runner.go:195] Run: which lz4
	I0210 10:59:14.954626    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0210 10:59:14.962808    8716 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0210 10:59:14.969016    8716 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0210 10:59:14.969016    8716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (349810983 bytes)
	I0210 10:59:16.584956    8716 docker.go:653] duration metric: took 1.630006s to copy over tarball
	I0210 10:59:16.594103    8716 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0210 10:59:24.865471    8716 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.2712735s)
	I0210 10:59:24.865471    8716 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0210 10:59:24.927579    8716 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0210 10:59:24.946616    8716 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2631 bytes)
	I0210 10:59:24.986964    8716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 10:59:25.198136    8716 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0210 10:59:28.508921    8716 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.3107475s)
	I0210 10:59:28.517317    8716 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0210 10:59:28.543793    8716 docker.go:689] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.32.1
	registry.k8s.io/kube-controller-manager:v1.32.1
	registry.k8s.io/kube-scheduler:v1.32.1
	registry.k8s.io/kube-proxy:v1.32.1
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0210 10:59:28.543793    8716 cache_images.go:84] Images are preloaded, skipping loading
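Preloading works by checking `docker images` output for the expected kube-apiserver tag; when it is absent, the lz4 tarball is scp'd into the VM and unpacked over /var, after which the check passes, as it does above. A sketch of the check (the `needsPreload` helper is hypothetical):

```go
package main

import (
	"fmt"
	"strings"
)

// needsPreload reports whether want is missing from the output of
// `docker images --format {{.Repository}}:{{.Tag}}`.
func needsPreload(images, want string) bool {
	for _, img := range strings.Split(images, "\n") {
		if img == want {
			return false
		}
	}
	return true
}

func main() {
	const want = "registry.k8s.io/kube-apiserver:v1.32.1"
	fmt.Println(needsPreload("", want)) // true: first boot, nothing loaded yet
	loaded := "registry.k8s.io/kube-apiserver:v1.32.1\nregistry.k8s.io/pause:3.10"
	fmt.Println(needsPreload(loaded, want)) // false: preload already extracted
	// Extraction command used once the tarball is on the VM:
	fmt.Println("sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4")
}
```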
	I0210 10:59:28.543793    8716 kubeadm.go:934] updating node { 172.29.136.99 8443 v1.32.1 docker true true} ...
	I0210 10:59:28.543793    8716 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-335100 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.29.136.99
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:ha-335100 Namespace:default APIServerHAVIP:172.29.143.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0210 10:59:28.550794    8716 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0210 10:59:28.614933    8716 cni.go:84] Creating CNI manager for ""
	I0210 10:59:28.614933    8716 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0210 10:59:28.614933    8716 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0210 10:59:28.614933    8716 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.29.136.99 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-335100 NodeName:ha-335100 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.29.136.99"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.29.136.99 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0210 10:59:28.614933    8716 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.29.136.99
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-335100"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "172.29.136.99"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.29.136.99"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
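A quick sanity check one can run against the generated config above: the pod subnet (10.244.0.0/16), the service subnet (10.96.0.0/12), and the node IP must not collide, or routing inside the cluster breaks. A small sketch using the exact values from the config:

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	nodeIP := net.ParseIP("172.29.136.99") // node IP from the config above
	for _, c := range []string{"10.244.0.0/16", "10.96.0.0/12"} {
		_, n, err := net.ParseCIDR(c)
		if err != nil {
			panic(err)
		}
		// Pod and service subnets must not capture the node address.
		fmt.Printf("%s contains %s: %v\n", n, nodeIP, n.Contains(nodeIP))
	}
}
```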
	I0210 10:59:28.615466    8716 kube-vip.go:115] generating kube-vip config ...
	I0210 10:59:28.623495    8716 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0210 10:59:28.650192    8716 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0210 10:59:28.650192    8716 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.29.143.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.9
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
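The env block configures lease-based leader election among control-plane nodes: one kube-vip instance holds the plndr-cp-lock Lease and answers ARP for the VIP 172.29.143.254, with load-balancing enabled on port 8443. The timing values bound how long failover takes; the estimate below is a back-of-the-envelope approximation, not a documented guarantee:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Timings from the kube-vip env block above (values are in seconds).
	lease := 5 * time.Second // vip_leaseduration
	renew := 3 * time.Second // vip_renewdeadline
	retry := 1 * time.Second // vip_retryperiod
	fmt.Printf("renew every %v; lease expires after %v; retry every %v\n", renew, lease, retry)
	// With lease-based election the VIP should move to another control plane
	// within roughly lease+retry after the current holder dies.
	fmt.Printf("approximate worst-case failover: %v\n", lease+retry)
}
```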
	I0210 10:59:28.658132    8716 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0210 10:59:28.681036    8716 binaries.go:44] Found k8s binaries, skipping transfer
	I0210 10:59:28.689002    8716 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0210 10:59:28.706362    8716 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0210 10:59:28.735662    8716 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0210 10:59:28.765214    8716 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2290 bytes)
	I0210 10:59:28.793232    8716 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0210 10:59:28.835901    8716 ssh_runner.go:195] Run: grep 172.29.143.254	control-plane.minikube.internal$ /etc/hosts
	I0210 10:59:28.842208    8716 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.29.143.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0210 10:59:28.870270    8716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 10:59:29.058434    8716 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0210 10:59:29.085471    8716 certs.go:68] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100 for IP: 172.29.136.99
	I0210 10:59:29.085471    8716 certs.go:194] generating shared ca certs ...
	I0210 10:59:29.085471    8716 certs.go:226] acquiring lock for ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 10:59:29.086955    8716 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0210 10:59:29.087327    8716 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0210 10:59:29.087492    8716 certs.go:256] generating profile certs ...
	I0210 10:59:29.088023    8716 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\client.key
	I0210 10:59:29.088099    8716 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\client.crt with IP's: []
	I0210 10:59:29.271791    8716 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\client.crt ...
	I0210 10:59:29.271791    8716 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\client.crt: {Name:mk5216f38f20912ed6052b5430faea59399f3f31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 10:59:29.272789    8716 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\client.key ...
	I0210 10:59:29.272789    8716 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\client.key: {Name:mkd7b13c25fea812fc08569e68f3133c2241e105 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 10:59:29.273735    8716 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\apiserver.key.5b72dc9d
	I0210 10:59:29.273735    8716 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\apiserver.crt.5b72dc9d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.29.136.99 172.29.143.254]
	I0210 10:59:29.583944    8716 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\apiserver.crt.5b72dc9d ...
	I0210 10:59:29.583944    8716 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\apiserver.crt.5b72dc9d: {Name:mk23d7e42777d012abc45260df0ae3e0638e6bb7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 10:59:29.585043    8716 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\apiserver.key.5b72dc9d ...
	I0210 10:59:29.585043    8716 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\apiserver.key.5b72dc9d: {Name:mk7a397d6294b60b358f9a417a41bf9963d738ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 10:59:29.587055    8716 certs.go:381] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\apiserver.crt.5b72dc9d -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\apiserver.crt
	I0210 10:59:29.606612    8716 certs.go:385] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\apiserver.key.5b72dc9d -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\apiserver.key
	I0210 10:59:29.607755    8716 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\proxy-client.key
	I0210 10:59:29.607861    8716 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\proxy-client.crt with IP's: []
	I0210 10:59:30.014813    8716 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\proxy-client.crt ...
	I0210 10:59:30.014813    8716 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\proxy-client.crt: {Name:mk6598afd57f2b469b6b403a769e5e456fdaf7e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 10:59:30.015813    8716 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\proxy-client.key ...
	I0210 10:59:30.015813    8716 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\proxy-client.key: {Name:mk1e06dc41c38271ae1612b06e338f09efab9113 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 10:59:30.017248    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0210 10:59:30.018045    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0210 10:59:30.018045    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0210 10:59:30.018450    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0210 10:59:30.018634    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0210 10:59:30.018836    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0210 10:59:30.018836    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0210 10:59:30.032196    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0210 10:59:30.032812    8716 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\11764.pem (1338 bytes)
	W0210 10:59:30.033333    8716 certs.go:480] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\11764_empty.pem, impossibly tiny 0 bytes
	I0210 10:59:30.033392    8716 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0210 10:59:30.033392    8716 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0210 10:59:30.033392    8716 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0210 10:59:30.033973    8716 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0210 10:59:30.033973    8716 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\117642.pem (1708 bytes)
	I0210 10:59:30.033973    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0210 10:59:30.034554    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\11764.pem -> /usr/share/ca-certificates/11764.pem
	I0210 10:59:30.034554    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\117642.pem -> /usr/share/ca-certificates/117642.pem
	I0210 10:59:30.035133    8716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0210 10:59:30.086775    8716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0210 10:59:30.126026    8716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0210 10:59:30.172847    8716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0210 10:59:30.217732    8716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0210 10:59:30.263137    8716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0210 10:59:30.306701    8716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0210 10:59:30.353203    8716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0210 10:59:30.399314    8716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0210 10:59:30.444062    8716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\11764.pem --> /usr/share/ca-certificates/11764.pem (1338 bytes)
	I0210 10:59:30.489445    8716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\117642.pem --> /usr/share/ca-certificates/117642.pem (1708 bytes)
	I0210 10:59:30.533890    8716 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0210 10:59:30.572483    8716 ssh_runner.go:195] Run: openssl version
	I0210 10:59:30.589313    8716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/117642.pem && ln -fs /usr/share/ca-certificates/117642.pem /etc/ssl/certs/117642.pem"
	I0210 10:59:30.622759    8716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/117642.pem
	I0210 10:59:30.630101    8716 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb 10 10:40 /usr/share/ca-certificates/117642.pem
	I0210 10:59:30.637366    8716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/117642.pem
	I0210 10:59:30.655319    8716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/117642.pem /etc/ssl/certs/3ec20f2e.0"
	I0210 10:59:30.684292    8716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0210 10:59:30.712971    8716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0210 10:59:30.720207    8716 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb 10 10:24 /usr/share/ca-certificates/minikubeCA.pem
	I0210 10:59:30.727833    8716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0210 10:59:30.744631    8716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0210 10:59:30.771287    8716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11764.pem && ln -fs /usr/share/ca-certificates/11764.pem /etc/ssl/certs/11764.pem"
	I0210 10:59:30.801361    8716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11764.pem
	I0210 10:59:30.808980    8716 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb 10 10:40 /usr/share/ca-certificates/11764.pem
	I0210 10:59:30.817908    8716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11764.pem
	I0210 10:59:30.836455    8716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11764.pem /etc/ssl/certs/51391683.0"
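Each CA is linked under its OpenSSL subject-hash name (the output of `openssl x509 -hash -noout`) so that library lookups in /etc/ssl/certs can find it by hash. A sketch rendering the same link commands; `linkCertCmd` is a hypothetical helper and the hash values are taken from the log above:

```go
package main

import "fmt"

// linkCertCmd renders the shell used above to expose a CA under its
// OpenSSL subject-hash name, skipping the link if it already exists.
func linkCertCmd(pem, hash string) string {
	return fmt.Sprintf("sudo /bin/bash -c \"test -L /etc/ssl/certs/%[2]s.0 || ln -fs /etc/ssl/certs/%[1]s /etc/ssl/certs/%[2]s.0\"", pem, hash)
}

func main() {
	fmt.Println(linkCertCmd("117642.pem", "3ec20f2e"))
	fmt.Println(linkCertCmd("minikubeCA.pem", "b5213941"))
	fmt.Println(linkCertCmd("11764.pem", "51391683"))
}
```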
	I0210 10:59:30.864607    8716 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0210 10:59:30.871993    8716 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0210 10:59:30.872135    8716 kubeadm.go:392] StartCluster: {Name:ha-335100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:ha-335100 Namespace:default APIServerHAVIP:172.29.143.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.29.136.99 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 10:59:30.879497    8716 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0210 10:59:30.922629    8716 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0210 10:59:30.949375    8716 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0210 10:59:30.977246    8716 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0210 10:59:30.994169    8716 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0210 10:59:30.994169    8716 kubeadm.go:157] found existing configuration files:
	
	I0210 10:59:31.003103    8716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0210 10:59:31.019342    8716 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0210 10:59:31.028120    8716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0210 10:59:31.057235    8716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0210 10:59:31.074298    8716 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0210 10:59:31.084160    8716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0210 10:59:31.110937    8716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0210 10:59:31.128495    8716 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0210 10:59:31.136478    8716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0210 10:59:31.162126    8716 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0210 10:59:31.179317    8716 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0210 10:59:31.187189    8716 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
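Stale-config cleanup greps each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes the file when the check fails; on a first boot, as above, the files simply do not exist yet, so each `rm -f` is a no-op. A sketch of the loop as shell commands composed from Go:

```go
package main

import "fmt"

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	for _, conf := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		// grep exits non-zero when the endpoint (or the file) is missing,
		// which triggers the rm -f seen in the log above.
		fmt.Printf("sudo grep %s %s || sudo rm -f %s\n", endpoint, conf, conf)
	}
}
```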
	I0210 10:59:31.204912    8716 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0210 10:59:31.592529    8716 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0210 10:59:45.973286    8716 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0210 10:59:45.973433    8716 kubeadm.go:310] [preflight] Running pre-flight checks
	I0210 10:59:45.973603    8716 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0210 10:59:45.973751    8716 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0210 10:59:45.973989    8716 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0210 10:59:45.974211    8716 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0210 10:59:45.978729    8716 out.go:235]   - Generating certificates and keys ...
	I0210 10:59:45.978729    8716 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0210 10:59:45.978729    8716 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0210 10:59:45.979396    8716 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0210 10:59:45.979396    8716 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0210 10:59:45.979396    8716 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0210 10:59:45.979396    8716 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0210 10:59:45.979396    8716 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0210 10:59:45.980075    8716 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-335100 localhost] and IPs [172.29.136.99 127.0.0.1 ::1]
	I0210 10:59:45.980075    8716 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0210 10:59:45.980075    8716 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-335100 localhost] and IPs [172.29.136.99 127.0.0.1 ::1]
	I0210 10:59:45.980609    8716 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0210 10:59:45.980676    8716 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0210 10:59:45.980676    8716 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0210 10:59:45.980676    8716 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0210 10:59:45.980676    8716 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0210 10:59:45.980676    8716 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0210 10:59:45.981304    8716 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0210 10:59:45.981304    8716 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0210 10:59:45.981304    8716 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0210 10:59:45.981304    8716 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0210 10:59:45.981917    8716 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0210 10:59:45.986098    8716 out.go:235]   - Booting up control plane ...
	I0210 10:59:45.987100    8716 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0210 10:59:45.987100    8716 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0210 10:59:45.987100    8716 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0210 10:59:45.987100    8716 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0210 10:59:45.987100    8716 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0210 10:59:45.987100    8716 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0210 10:59:45.988226    8716 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0210 10:59:45.988374    8716 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0210 10:59:45.988582    8716 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001790189s
	I0210 10:59:45.988716    8716 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0210 10:59:45.988876    8716 kubeadm.go:310] [api-check] The API server is healthy after 7.00287979s
	I0210 10:59:45.989027    8716 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0210 10:59:45.989236    8716 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0210 10:59:45.989467    8716 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0210 10:59:45.989659    8716 kubeadm.go:310] [mark-control-plane] Marking the node ha-335100 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0210 10:59:45.989990    8716 kubeadm.go:310] [bootstrap-token] Using token: 5bp9g0.cru7k30qiv98fcl0
	I0210 10:59:45.998281    8716 out.go:235]   - Configuring RBAC rules ...
	I0210 10:59:45.999094    8716 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0210 10:59:45.999230    8716 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0210 10:59:45.999230    8716 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0210 10:59:45.999810    8716 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0210 10:59:45.999810    8716 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0210 10:59:45.999810    8716 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0210 10:59:46.000412    8716 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0210 10:59:46.000522    8716 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0210 10:59:46.000522    8716 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0210 10:59:46.000522    8716 kubeadm.go:310] 
	I0210 10:59:46.000522    8716 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0210 10:59:46.000522    8716 kubeadm.go:310] 
	I0210 10:59:46.000522    8716 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0210 10:59:46.000522    8716 kubeadm.go:310] 
	I0210 10:59:46.001044    8716 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0210 10:59:46.001106    8716 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0210 10:59:46.001106    8716 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0210 10:59:46.001106    8716 kubeadm.go:310] 
	I0210 10:59:46.001106    8716 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0210 10:59:46.001106    8716 kubeadm.go:310] 
	I0210 10:59:46.001106    8716 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0210 10:59:46.001106    8716 kubeadm.go:310] 
	I0210 10:59:46.001627    8716 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0210 10:59:46.001790    8716 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0210 10:59:46.001790    8716 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0210 10:59:46.001790    8716 kubeadm.go:310] 
	I0210 10:59:46.001790    8716 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0210 10:59:46.001790    8716 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0210 10:59:46.001790    8716 kubeadm.go:310] 
	I0210 10:59:46.002384    8716 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 5bp9g0.cru7k30qiv98fcl0 \
	I0210 10:59:46.002384    8716 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1b8cb99781302ec691f777094c36ac43bdecc74e7ca118c5fbf40794f47c93c3 \
	I0210 10:59:46.002384    8716 kubeadm.go:310] 	--control-plane 
	I0210 10:59:46.002384    8716 kubeadm.go:310] 
	I0210 10:59:46.003055    8716 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0210 10:59:46.003055    8716 kubeadm.go:310] 
	I0210 10:59:46.003177    8716 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 5bp9g0.cru7k30qiv98fcl0 \
	I0210 10:59:46.003177    8716 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1b8cb99781302ec691f777094c36ac43bdecc74e7ca118c5fbf40794f47c93c3 
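The --discovery-token-ca-cert-hash printed in the join commands above is not a secret: it pins the cluster CA so that joining nodes can authenticate the control plane before trusting anything it serves. kubeadm computes it as the hex-encoded SHA-256 of the CA certificate's DER-encoded SubjectPublicKeyInfo. A minimal Go sketch of that computation, assuming the CA lives at /var/lib/minikube/certs/ca.crt (the certificateDir logged above):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Path assumed from the certificateDir in the log; adjust as needed.
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm hashes the DER-encoded SubjectPublicKeyInfo, not the whole certificate.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%s\n", hex.EncodeToString(sum[:]))
}

Run against the primary node's CA, this should reproduce the sha256:1b8cb997... value shown in the join command.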
	I0210 10:59:46.003177    8716 cni.go:84] Creating CNI manager for ""
	I0210 10:59:46.003177    8716 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0210 10:59:46.011410    8716 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0210 10:59:46.026147    8716 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0210 10:59:46.034732    8716 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.1/kubectl ...
	I0210 10:59:46.034852    8716 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0210 10:59:46.082636    8716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0210 10:59:46.744093    8716 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0210 10:59:46.753093    8716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-335100 minikube.k8s.io/updated_at=2025_02_10T10_59_46_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=a597502568cd649748018b4cfeb698a4b8b36160 minikube.k8s.io/name=ha-335100 minikube.k8s.io/primary=true
	I0210 10:59:46.754094    8716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0210 10:59:46.764122    8716 ops.go:34] apiserver oom_adj: -16
	I0210 10:59:46.970456    8716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0210 10:59:47.473223    8716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0210 10:59:47.973417    8716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0210 10:59:48.472784    8716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0210 10:59:48.970999    8716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0210 10:59:49.472979    8716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0210 10:59:49.973094    8716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0210 10:59:50.471125    8716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0210 10:59:50.758721    8716 kubeadm.go:1113] duration metric: took 4.0145828s to wait for elevateKubeSystemPrivileges
	I0210 10:59:50.758721    8716 kubeadm.go:394] duration metric: took 19.8863595s to StartCluster
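The burst of repeated "kubectl get sa default" calls above is minikube waiting for the default ServiceAccount to exist before granting kube-system privileges (elevateKubeSystemPrivileges); the timestamps show a retry roughly every 500ms for about 4 seconds. A minimal sketch of the same poll-until-ready pattern, assuming kubectl is on PATH:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		// Succeeds only once the "default" ServiceAccount has been created.
		if err := exec.Command("kubectl", "get", "sa", "default").Run(); err == nil {
			fmt.Println("default service account is ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
	}
	fmt.Println("timed out waiting for default service account")
}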
	I0210 10:59:50.758721    8716 settings.go:142] acquiring lock: {Name:mk66ab2e0bae08b477c4ed9caa26e688e6ce3248 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 10:59:50.758721    8716 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0210 10:59:50.760831    8716 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\kubeconfig: {Name:mkb19224ea40e2aed3ce8c31a956f5aee129caa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 10:59:50.762126    8716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0210 10:59:50.762126    8716 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.29.136.99 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0210 10:59:50.762233    8716 start.go:241] waiting for startup goroutines ...
	I0210 10:59:50.762233    8716 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0210 10:59:50.762571    8716 config.go:182] Loaded profile config "ha-335100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0210 10:59:50.762771    8716 addons.go:69] Setting default-storageclass=true in profile "ha-335100"
	I0210 10:59:50.762842    8716 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-335100"
	I0210 10:59:50.763105    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100 ).state
	I0210 10:59:50.763105    8716 addons.go:69] Setting storage-provisioner=true in profile "ha-335100"
	I0210 10:59:50.763105    8716 addons.go:238] Setting addon storage-provisioner=true in "ha-335100"
	I0210 10:59:50.763735    8716 host.go:66] Checking if "ha-335100" exists ...
	I0210 10:59:50.765784    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100 ).state
	I0210 10:59:50.917208    8716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.29.128.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0210 10:59:51.208172    8716 start.go:971] {"host.minikube.internal": 172.29.128.1} host record injected into CoreDNS's ConfigMap
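The sed pipeline above edits the CoreDNS ConfigMap in place so that host.minikube.internal resolves to the Hyper-V host's address. Reconstructed from the two sed expressions, the touched part of the Corefile should look roughly like this after the replace (only the injected directives and their sed anchor lines are shown; the rest of the Corefile is elided):

        log
        errors
        ...
        hosts {
           172.29.128.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf

The fallthrough directive matters: without it, any name not in the hosts block would get NXDOMAIN instead of falling through to the forward plugin.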
	I0210 10:59:52.827270    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 10:59:52.827370    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:59:52.829975    8716 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0210 10:59:52.832541    8716 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0210 10:59:52.832573    8716 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0210 10:59:52.832677    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100 ).state
	I0210 10:59:52.841454    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 10:59:52.841454    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:59:52.843380    8716 loader.go:402] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0210 10:59:52.844066    8716 kapi.go:59] client config for ha-335100: &rest.Config{Host:"https://172.29.143.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\ha-335100\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\ha-335100\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2a2d300), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}

	I0210 10:59:52.845970    8716 cert_rotation.go:140] Starting client certificate rotation controller
	I0210 10:59:52.845970    8716 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0210 10:59:52.845970    8716 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0210 10:59:52.845970    8716 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0210 10:59:52.845970    8716 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0210 10:59:52.845970    8716 addons.go:238] Setting addon default-storageclass=true in "ha-335100"
	I0210 10:59:52.845970    8716 host.go:66] Checking if "ha-335100" exists ...
	I0210 10:59:52.846968    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100 ).state
	I0210 10:59:54.996514    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 10:59:54.996514    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:59:54.996717    8716 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0210 10:59:54.996780    8716 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0210 10:59:54.996780    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100 ).state
	I0210 10:59:55.024215    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 10:59:55.024215    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:59:55.025026    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100 ).networkadapters[0]).ipaddresses[0]
	I0210 10:59:57.043087    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 10:59:57.043233    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:59:57.043287    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100 ).networkadapters[0]).ipaddresses[0]
	I0210 10:59:57.482918    8716 main.go:141] libmachine: [stdout =====>] : 172.29.136.99
	
	I0210 10:59:57.482918    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:59:57.483918    8716 sshutil.go:53] new ssh client: &{IP:172.29.136.99 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-335100\id_rsa Username:docker}
	I0210 10:59:57.628001    8716 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0210 10:59:59.477460    8716 main.go:141] libmachine: [stdout =====>] : 172.29.136.99
	
	I0210 10:59:59.477460    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 10:59:59.477819    8716 sshutil.go:53] new ssh client: &{IP:172.29.136.99 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-335100\id_rsa Username:docker}
	I0210 10:59:59.616409    8716 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0210 10:59:59.824967    8716 round_trippers.go:470] GET https://172.29.143.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0210 10:59:59.825005    8716 round_trippers.go:476] Request Headers:
	I0210 10:59:59.825043    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 10:59:59.825043    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 10:59:59.837942    8716 round_trippers.go:581] Response Status: 200 OK in 12 milliseconds
	I0210 10:59:59.839309    8716 round_trippers.go:470] PUT https://172.29.143.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0210 10:59:59.839309    8716 round_trippers.go:476] Request Headers:
	I0210 10:59:59.839404    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 10:59:59.839404    8716 round_trippers.go:480]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 10:59:59.839404    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 10:59:59.844121    8716 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
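The GET/PUT pair above is the default-storageclass addon listing the existing StorageClasses and then updating the "standard" class, presumably to maintain the standard storageclass.kubernetes.io/is-default-class annotation. A rough client-go equivalent of the same two requests, assuming the kubeconfig path logged earlier (the annotation write is an illustrative assumption, not copied from the log):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path taken from the log above; adjust for your environment.
	cfg, err := clientcmd.BuildConfigFromFlags("", `C:\Users\jenkins.minikube5\minikube-integration\kubeconfig`)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()

	// GET https://.../apis/storage.k8s.io/v1/storageclasses
	list, err := cs.StorageV1().StorageClasses().List(ctx, metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, sc := range list.Items {
		fmt.Println(sc.Name)
	}

	// PUT https://.../apis/storage.k8s.io/v1/storageclasses/standard
	sc, err := cs.StorageV1().StorageClasses().Get(ctx, "standard", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if sc.Annotations == nil {
		sc.Annotations = map[string]string{}
	}
	sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
	if _, err := cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}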
	I0210 10:59:59.847996    8716 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0210 10:59:59.850163    8716 addons.go:514] duration metric: took 9.0878266s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0210 10:59:59.850696    8716 start.go:246] waiting for cluster config update ...
	I0210 10:59:59.850696    8716 start.go:255] writing updated cluster config ...
	I0210 10:59:59.852666    8716 out.go:201] 
	I0210 10:59:59.870702    8716 config.go:182] Loaded profile config "ha-335100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0210 10:59:59.870876    8716 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\config.json ...
	I0210 10:59:59.878569    8716 out.go:177] * Starting "ha-335100-m02" control-plane node in "ha-335100" cluster
	I0210 10:59:59.881789    8716 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime docker
	I0210 10:59:59.881789    8716 cache.go:56] Caching tarball of preloaded images
	I0210 10:59:59.882124    8716 preload.go:172] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0210 10:59:59.882124    8716 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on docker
	I0210 10:59:59.882124    8716 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\config.json ...
	I0210 10:59:59.889690    8716 start.go:360] acquireMachinesLock for ha-335100-m02: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0210 10:59:59.889690    8716 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-335100-m02"
	I0210 10:59:59.889690    8716 start.go:93] Provisioning new machine with config: &{Name:ha-335100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:ha-335100 Namespace:default APIServerHAVIP:172.29.143.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.29.136.99 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0210 10:59:59.889690    8716 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0210 10:59:59.891887    8716 out.go:235] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0210 10:59:59.893000    8716 start.go:159] libmachine.API.Create for "ha-335100" (driver="hyperv")
	I0210 10:59:59.893000    8716 client.go:168] LocalClient.Create starting
	I0210 10:59:59.893692    8716 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem
	I0210 10:59:59.893854    8716 main.go:141] libmachine: Decoding PEM data...
	I0210 10:59:59.893854    8716 main.go:141] libmachine: Parsing certificate...
	I0210 10:59:59.894033    8716 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem
	I0210 10:59:59.894230    8716 main.go:141] libmachine: Decoding PEM data...
	I0210 10:59:59.894230    8716 main.go:141] libmachine: Parsing certificate...
	I0210 10:59:59.894230    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0210 11:00:01.699114    8716 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0210 11:00:01.699114    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:00:01.699114    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0210 11:00:03.390533    8716 main.go:141] libmachine: [stdout =====>] : False
	
	I0210 11:00:03.390533    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:00:03.391287    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0210 11:00:04.825889    8716 main.go:141] libmachine: [stdout =====>] : True
	
	I0210 11:00:04.826140    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:00:04.826217    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0210 11:00:08.295330    8716 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0210 11:00:08.295330    8716 main.go:141] libmachine: [stderr =====>] : 
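libmachine shells out to PowerShell and parses JSON rather than scraping table output, which is why the command above wraps Get-VMSwitch in ConvertTo-Json. A minimal sketch of consuming that output in Go (the struct mirrors the fields selected in the command; in Hyper-V's SwitchType enum, 1 is Internal and 2 is External, which is consistent with "Default Switch" reporting 1 above):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type vmSwitch struct {
	Id         string
	Name       string
	SwitchType int // 1 == Internal, 2 == External
}

func main() {
	// Same shape of query as in the log: select Id, Name, SwitchType and emit JSON.
	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive",
		`ConvertTo-Json @(Hyper-V\Get-VMSwitch | Select Id, Name, SwitchType)`).Output()
	if err != nil {
		panic(err)
	}
	var switches []vmSwitch
	if err := json.Unmarshal(out, &switches); err != nil {
		panic(err)
	}
	for _, s := range switches {
		fmt.Printf("%s (%s) type=%d\n", s.Name, s.Id, s.SwitchType)
	}
}

Go's json package matches the PascalCase PowerShell field names to the exported struct fields case-insensitively, so no struct tags are needed.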
	I0210 11:00:08.298237    8716 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube5/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0210 11:00:08.708888    8716 main.go:141] libmachine: Creating SSH key...
	I0210 11:00:08.835827    8716 main.go:141] libmachine: Creating VM...
	I0210 11:00:08.835827    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0210 11:00:11.513110    8716 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0210 11:00:11.513110    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:00:11.513110    8716 main.go:141] libmachine: Using switch "Default Switch"
	I0210 11:00:11.513110    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0210 11:00:13.161706    8716 main.go:141] libmachine: [stdout =====>] : True
	
	I0210 11:00:13.161964    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:00:13.161964    8716 main.go:141] libmachine: Creating VHD
	I0210 11:00:13.161964    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-335100-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0210 11:00:16.827086    8716 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube5
	Path                    : C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-335100-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : C04A6B50-88FA-4BFC-8917-C96CE972A647
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0210 11:00:16.827086    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:00:16.827223    8716 main.go:141] libmachine: Writing magic tar header
	I0210 11:00:16.827223    8716 main.go:141] libmachine: Writing SSH key tar header
	I0210 11:00:16.840220    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-335100-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-335100-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0210 11:00:19.886494    8716 main.go:141] libmachine: [stdout =====>] : 
	I0210 11:00:19.886494    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:00:19.886576    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-335100-m02\disk.vhd' -SizeBytes 20000MB
	I0210 11:00:22.299448    8716 main.go:141] libmachine: [stdout =====>] : 
	I0210 11:00:22.299448    8716 main.go:141] libmachine: [stderr =====>] : 
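The VHD dance above (create a tiny 10MB fixed VHD, write a "magic" tar stream carrying the SSH key into it, convert it to dynamic, then resize it to the requested 20000MB) is how the driver smuggles credentials into the guest: the boot2docker init scripts find the tar archive at the start of the raw disk and unpack it on first boot. A sketch of writing such a payload, assuming the archive carries the public key at .ssh/authorized_keys (the real driver's in-archive layout may differ):

package main

import (
	"archive/tar"
	"os"
)

func main() {
	// Open the freshly created fixed-size VHD and write a tar stream at offset 0.
	// boot2docker scans the raw device for this archive on first boot.
	f, err := os.OpenFile("fixed.vhd", os.O_WRONLY, 0)
	if err != nil {
		panic(err)
	}
	defer f.Close()

	pubKey, err := os.ReadFile("id_rsa.pub")
	if err != nil {
		panic(err)
	}

	tw := tar.NewWriter(f)
	// Hypothetical in-archive paths for illustration only.
	if err := tw.WriteHeader(&tar.Header{Name: ".ssh/", Mode: 0700, Typeflag: tar.TypeDir}); err != nil {
		panic(err)
	}
	if err := tw.WriteHeader(&tar.Header{Name: ".ssh/authorized_keys", Mode: 0600, Size: int64(len(pubKey))}); err != nil {
		panic(err)
	}
	if _, err := tw.Write(pubKey); err != nil {
		panic(err)
	}
	if err := tw.Close(); err != nil {
		panic(err)
	}
}

The fixed-then-convert sequence exists because a fixed VHD's data region starts at offset 0, so the tar header lands where the guest expects it; converting to dynamic afterwards keeps the file small on the host.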
	I0210 11:00:22.299448    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-335100-m02 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-335100-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0210 11:00:25.701436    8716 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-335100-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0210 11:00:25.701436    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:00:25.701875    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-335100-m02 -DynamicMemoryEnabled $false
	I0210 11:00:27.785939    8716 main.go:141] libmachine: [stdout =====>] : 
	I0210 11:00:27.785939    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:00:27.786019    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-335100-m02 -Count 2
	I0210 11:00:29.835907    8716 main.go:141] libmachine: [stdout =====>] : 
	I0210 11:00:29.836288    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:00:29.836288    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-335100-m02 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-335100-m02\boot2docker.iso'
	I0210 11:00:32.208441    8716 main.go:141] libmachine: [stdout =====>] : 
	I0210 11:00:32.208441    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:00:32.208514    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-335100-m02 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-335100-m02\disk.vhd'
	I0210 11:00:34.690203    8716 main.go:141] libmachine: [stdout =====>] : 
	I0210 11:00:34.690782    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:00:34.690782    8716 main.go:141] libmachine: Starting VM...
	I0210 11:00:34.690782    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-335100-m02
	I0210 11:00:37.534937    8716 main.go:141] libmachine: [stdout =====>] : 
	I0210 11:00:37.534937    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:00:37.534937    8716 main.go:141] libmachine: Waiting for host to start...
	I0210 11:00:37.534937    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100-m02 ).state
	I0210 11:00:39.626047    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:00:39.626584    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:00:39.626584    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 11:00:41.916910    8716 main.go:141] libmachine: [stdout =====>] : 
	I0210 11:00:41.916910    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:00:42.918554    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100-m02 ).state
	I0210 11:00:44.959711    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:00:44.959711    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:00:44.960141    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 11:00:47.227989    8716 main.go:141] libmachine: [stdout =====>] : 
	I0210 11:00:47.228891    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:00:48.229900    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100-m02 ).state
	I0210 11:00:50.198780    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:00:50.198849    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:00:50.198943    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 11:00:52.489179    8716 main.go:141] libmachine: [stdout =====>] : 
	I0210 11:00:52.489179    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:00:53.490010    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100-m02 ).state
	I0210 11:00:55.509448    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:00:55.509448    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:00:55.509593    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 11:00:57.787656    8716 main.go:141] libmachine: [stdout =====>] : 
	I0210 11:00:57.788718    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:00:58.789424    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100-m02 ).state
	I0210 11:01:00.794681    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:01:00.794681    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:01:00.794869    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 11:01:03.170642    8716 main.go:141] libmachine: [stdout =====>] : 172.29.139.212
	
	I0210 11:01:03.170642    8716 main.go:141] libmachine: [stderr =====>] : 
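Everything from Start-VM at 11:00:34 to 11:01:03 above is a poll loop: check the VM state, ask the first network adapter for its first IP address, and sleep about one second whenever the answer comes back empty (the adapter has no address until the guest's DHCP lease lands). A condensed Go sketch of the same loop:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// vmIP returns the first address of the VM's first network adapter, or "".
func vmIP(name string) string {
	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive",
		fmt.Sprintf(`(( Hyper-V\Get-VM %s ).networkadapters[0]).ipaddresses[0]`, name)).Output()
	if err != nil {
		return ""
	}
	return strings.TrimSpace(string(out))
}

func main() {
	for i := 0; i < 120; i++ { // give the guest's DHCP a couple of minutes
		if ip := vmIP("ha-335100-m02"); ip != "" {
			fmt.Println("host is up at", ip)
			return
		}
		time.Sleep(1 * time.Second) // matches the ~1s cadence in the log
	}
	fmt.Println("timed out waiting for an IP address")
}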
	I0210 11:01:03.171446    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100-m02 ).state
	I0210 11:01:05.170366    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:01:05.170366    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:01:05.170366    8716 machine.go:93] provisionDockerMachine start ...
	I0210 11:01:05.170366    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100-m02 ).state
	I0210 11:01:07.132675    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:01:07.132737    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:01:07.132737    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 11:01:09.469603    8716 main.go:141] libmachine: [stdout =====>] : 172.29.139.212
	
	I0210 11:01:09.470123    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:01:09.474897    8716 main.go:141] libmachine: Using SSH client type: native
	I0210 11:01:09.491952    8716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xde6ba0] 0xde96e0 <nil>  [] 0s} 172.29.139.212 22 <nil> <nil>}
	I0210 11:01:09.492033    8716 main.go:141] libmachine: About to run SSH command:
	hostname
	I0210 11:01:09.617282    8716 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0210 11:01:09.617346    8716 buildroot.go:166] provisioning hostname "ha-335100-m02"
	I0210 11:01:09.617413    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100-m02 ).state
	I0210 11:01:11.539143    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:01:11.539143    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:01:11.539143    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 11:01:13.861001    8716 main.go:141] libmachine: [stdout =====>] : 172.29.139.212
	
	I0210 11:01:13.861001    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:01:13.865451    8716 main.go:141] libmachine: Using SSH client type: native
	I0210 11:01:13.865882    8716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xde6ba0] 0xde96e0 <nil>  [] 0s} 172.29.139.212 22 <nil> <nil>}
	I0210 11:01:13.865882    8716 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-335100-m02 && echo "ha-335100-m02" | sudo tee /etc/hostname
	I0210 11:01:14.022160    8716 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-335100-m02
	
	I0210 11:01:14.022160    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100-m02 ).state
	I0210 11:01:15.960632    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:01:15.960632    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:01:15.961808    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 11:01:18.264563    8716 main.go:141] libmachine: [stdout =====>] : 172.29.139.212
	
	I0210 11:01:18.264563    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:01:18.269331    8716 main.go:141] libmachine: Using SSH client type: native
	I0210 11:01:18.269815    8716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xde6ba0] 0xde96e0 <nil>  [] 0s} 172.29.139.212 22 <nil> <nil>}
	I0210 11:01:18.269873    8716 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-335100-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-335100-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-335100-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0210 11:01:18.399387    8716 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0210 11:01:18.399387    8716 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0210 11:01:18.399387    8716 buildroot.go:174] setting up certificates
	I0210 11:01:18.399387    8716 provision.go:84] configureAuth start
	I0210 11:01:18.400212    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100-m02 ).state
	I0210 11:01:20.353105    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:01:20.353482    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:01:20.353482    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 11:01:22.675105    8716 main.go:141] libmachine: [stdout =====>] : 172.29.139.212
	
	I0210 11:01:22.675202    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:01:22.675202    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100-m02 ).state
	I0210 11:01:24.621184    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:01:24.621184    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:01:24.621184    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 11:01:26.967797    8716 main.go:141] libmachine: [stdout =====>] : 172.29.139.212
	
	I0210 11:01:26.967797    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:01:26.967797    8716 provision.go:143] copyHostCerts
	I0210 11:01:26.967797    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0210 11:01:26.967797    8716 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0210 11:01:26.967797    8716 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0210 11:01:26.968379    8716 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1675 bytes)
	I0210 11:01:26.969077    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0210 11:01:26.969077    8716 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0210 11:01:26.969077    8716 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0210 11:01:26.969700    8716 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0210 11:01:26.970550    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0210 11:01:26.970670    8716 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0210 11:01:26.970670    8716 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0210 11:01:26.970670    8716 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0210 11:01:26.971537    8716 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-335100-m02 san=[127.0.0.1 172.29.139.212 ha-335100-m02 localhost minikube]
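configureAuth issues a per-machine Docker server certificate signed by the minikube CA, with the SANs listed above so the daemon can be reached by IP, hostname, or localhost over TLS. A compact sketch of issuing such a certificate with Go's crypto/x509; a throwaway in-memory CA stands in for ca.pem/ca-key.pem, and error handling is abbreviated for brevity:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA standing in for minikube's ca.pem/ca-key.pem.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{Organization: []string{"minikubeCA"}},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the org and SANs from the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-335100-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
		DNSNames:     []string{"ha-335100-m02", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.29.139.212")},
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	// The private key (srvKey) would be written out as server-key.pem alongside this.
}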
	I0210 11:01:27.041298    8716 provision.go:177] copyRemoteCerts
	I0210 11:01:27.049742    8716 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0210 11:01:27.049742    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100-m02 ).state
	I0210 11:01:29.034425    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:01:29.034425    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:01:29.034425    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 11:01:31.431100    8716 main.go:141] libmachine: [stdout =====>] : 172.29.139.212
	
	I0210 11:01:31.431100    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:01:31.431567    8716 sshutil.go:53] new ssh client: &{IP:172.29.139.212 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-335100-m02\id_rsa Username:docker}
	I0210 11:01:31.533502    8716 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.4837087s)
	I0210 11:01:31.533502    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0210 11:01:31.533502    8716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0210 11:01:31.579158    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0210 11:01:31.579158    8716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0210 11:01:31.625124    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0210 11:01:31.625559    8716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0210 11:01:31.669704    8716 provision.go:87] duration metric: took 13.2701655s to configureAuth
	I0210 11:01:31.669782    8716 buildroot.go:189] setting minikube options for container-runtime
	I0210 11:01:31.670349    8716 config.go:182] Loaded profile config "ha-335100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0210 11:01:31.670413    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100-m02 ).state
	I0210 11:01:33.626886    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:01:33.627061    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:01:33.627061    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 11:01:35.947985    8716 main.go:141] libmachine: [stdout =====>] : 172.29.139.212
	
	I0210 11:01:35.948215    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:01:35.954043    8716 main.go:141] libmachine: Using SSH client type: native
	I0210 11:01:35.954043    8716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xde6ba0] 0xde96e0 <nil>  [] 0s} 172.29.139.212 22 <nil> <nil>}
	I0210 11:01:35.954043    8716 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0210 11:01:36.085121    8716 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0210 11:01:36.085121    8716 buildroot.go:70] root file system type: tmpfs
	I0210 11:01:36.085329    8716 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0210 11:01:36.085431    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100-m02 ).state
	I0210 11:01:38.029884    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:01:38.029884    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:01:38.029957    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 11:01:40.334860    8716 main.go:141] libmachine: [stdout =====>] : 172.29.139.212
	
	I0210 11:01:40.334860    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:01:40.339753    8716 main.go:141] libmachine: Using SSH client type: native
	I0210 11:01:40.340124    8716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xde6ba0] 0xde96e0 <nil>  [] 0s} 172.29.139.212 22 <nil> <nil>}
	I0210 11:01:40.340208    8716 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.29.136.99"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0210 11:01:40.489637    8716 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.29.136.99
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0210 11:01:40.489754    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100-m02 ).state
	I0210 11:01:42.449900    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:01:42.450419    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:01:42.450419    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 11:01:44.784904    8716 main.go:141] libmachine: [stdout =====>] : 172.29.139.212
	
	I0210 11:01:44.784973    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:01:44.789241    8716 main.go:141] libmachine: Using SSH client type: native
	I0210 11:01:44.789724    8716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xde6ba0] 0xde96e0 <nil>  [] 0s} 172.29.139.212 22 <nil> <nil>}
	I0210 11:01:44.789789    8716 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0210 11:01:46.970328    8716 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
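The `diff -u old new || { mv ...; systemctl ...; }` one-liner above makes the unit install idempotent: the service is only replaced, re-enabled, and restarted when the rendered file actually differs. On this fresh node the diff fails outright because no docker.service existed yet ("No such file or directory"), so the new file is moved into place and the enable step creates the multi-user.target symlink shown. The same compare-then-swap idea as a sketch, assuming local file access instead of SSH and ignoring sudo:

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// installIfChanged replaces dst with src only when the contents differ
// (or dst is missing), then reloads systemd and restarts the unit,
// echoing the diff || { mv; systemctl } one-liner in the log.
func installIfChanged(src, dst, unit string) error {
	newData, err := os.ReadFile(src)
	if err != nil {
		return err
	}
	oldData, err := os.ReadFile(dst)
	if err == nil && bytes.Equal(oldData, newData) {
		return nil // unit unchanged; skip the restart entirely
	}
	if err := os.Rename(src, dst); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"daemon-reload"},
		{"enable", unit},
		{"restart", unit},
	} {
		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
			return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
		}
	}
	return nil
}

func main() {
	if err := installIfChanged("/lib/systemd/system/docker.service.new",
		"/lib/systemd/system/docker.service", "docker"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}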
	I0210 11:01:46.970328    8716 machine.go:96] duration metric: took 41.7994851s to provisionDockerMachine
	I0210 11:01:46.970328    8716 client.go:171] duration metric: took 1m47.0761072s to LocalClient.Create
	I0210 11:01:46.970328    8716 start.go:167] duration metric: took 1m47.0761072s to libmachine.API.Create "ha-335100"
	I0210 11:01:46.970328    8716 start.go:293] postStartSetup for "ha-335100-m02" (driver="hyperv")
	I0210 11:01:46.970328    8716 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0210 11:01:46.981275    8716 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0210 11:01:46.981275    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100-m02 ).state
	I0210 11:01:48.926581    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:01:48.926581    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:01:48.927472    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 11:01:51.248947    8716 main.go:141] libmachine: [stdout =====>] : 172.29.139.212
	
	I0210 11:01:51.249888    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:01:51.250373    8716 sshutil.go:53] new ssh client: &{IP:172.29.139.212 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-335100-m02\id_rsa Username:docker}
	I0210 11:01:51.351125    8716 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.3697162s)
	I0210 11:01:51.360558    8716 ssh_runner.go:195] Run: cat /etc/os-release
	I0210 11:01:51.367670    8716 info.go:137] Remote host: Buildroot 2023.02.9
	I0210 11:01:51.367670    8716 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0210 11:01:51.367982    8716 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0210 11:01:51.368535    8716 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\117642.pem -> 117642.pem in /etc/ssl/certs
	I0210 11:01:51.368608    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\117642.pem -> /etc/ssl/certs/117642.pem
	I0210 11:01:51.376762    8716 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0210 11:01:51.394243    8716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\117642.pem --> /etc/ssl/certs/117642.pem (1708 bytes)
	I0210 11:01:51.440418    8716 start.go:296] duration metric: took 4.4700386s for postStartSetup
	I0210 11:01:51.443126    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100-m02 ).state
	I0210 11:01:53.440440    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:01:53.440440    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:01:53.440440    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 11:01:55.768896    8716 main.go:141] libmachine: [stdout =====>] : 172.29.139.212
	
	I0210 11:01:55.768896    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:01:55.769237    8716 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\config.json ...
	I0210 11:01:55.771058    8716 start.go:128] duration metric: took 1m55.8800464s to createHost
	I0210 11:01:55.771058    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100-m02 ).state
	I0210 11:01:57.700253    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:01:57.700799    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:01:57.700891    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 11:02:00.039909    8716 main.go:141] libmachine: [stdout =====>] : 172.29.139.212
	
	I0210 11:02:00.039909    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:02:00.044236    8716 main.go:141] libmachine: Using SSH client type: native
	I0210 11:02:00.044645    8716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xde6ba0] 0xde96e0 <nil>  [] 0s} 172.29.139.212 22 <nil> <nil>}
	I0210 11:02:00.044645    8716 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0210 11:02:00.167124    8716 main.go:141] libmachine: SSH cmd err, output: <nil>: 1739185320.168707328
	
	I0210 11:02:00.167227    8716 fix.go:216] guest clock: 1739185320.168707328
	I0210 11:02:00.167227    8716 fix.go:229] Guest: 2025-02-10 11:02:00.168707328 +0000 UTC Remote: 2025-02-10 11:01:55.7710581 +0000 UTC m=+305.887850501 (delta=4.397649228s)
	I0210 11:02:00.167227    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100-m02 ).state
	I0210 11:02:02.153518    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:02:02.153518    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:02:02.153518    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 11:02:04.470036    8716 main.go:141] libmachine: [stdout =====>] : 172.29.139.212
	
	I0210 11:02:04.471102    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:02:04.478888    8716 main.go:141] libmachine: Using SSH client type: native
	I0210 11:02:04.479484    8716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xde6ba0] 0xde96e0 <nil>  [] 0s} 172.29.139.212 22 <nil> <nil>}
	I0210 11:02:04.479484    8716 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1739185320
	I0210 11:02:04.617793    8716 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Feb 10 11:02:00 UTC 2025
	
	I0210 11:02:04.617860    8716 fix.go:236] clock set: Mon Feb 10 11:02:00 UTC 2025
	 (err=<nil>)
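The clock fix above works in three steps: run `date +%s.%N` in the guest, compare the result against the host-side reference timestamp (a 4.397s skew here), and write the reference epoch back with `sudo date -s @<epoch>`. A minimal sketch of the delta computation, using the exact values from this log:

package main

import (
	"fmt"
	"time"
)

// clockDelta converts a `date +%s.%N` reading into a time.Time and
// returns how far the guest clock is from the local reference.
func clockDelta(guestEpoch float64, ref time.Time) time.Duration {
	guest := time.Unix(0, int64(guestEpoch*float64(time.Second)))
	return guest.Sub(ref)
}

func main() {
	// Reference (host-side) timestamp and guest reading from the log.
	ref := time.Date(2025, 2, 10, 11, 1, 55, 771058100, time.UTC)
	d := clockDelta(1739185320.168707328, ref)
	fmt.Printf("delta=%v, fix: sudo date -s @%d\n", d, int64(1739185320))
}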
	I0210 11:02:04.617860    8716 start.go:83] releasing machines lock for "ha-335100-m02", held for 2m4.7267469s
	I0210 11:02:04.617927    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100-m02 ).state
	I0210 11:02:06.578392    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:02:06.579361    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:02:06.579361    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 11:02:08.899139    8716 main.go:141] libmachine: [stdout =====>] : 172.29.139.212
	
	I0210 11:02:08.899139    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:02:08.902892    8716 out.go:177] * Found network options:
	I0210 11:02:08.905912    8716 out.go:177]   - NO_PROXY=172.29.136.99
	W0210 11:02:08.907409    8716 proxy.go:119] fail to check proxy env: Error ip not in block
	I0210 11:02:08.910037    8716 out.go:177]   - NO_PROXY=172.29.136.99
	W0210 11:02:08.912141    8716 proxy.go:119] fail to check proxy env: Error ip not in block
	W0210 11:02:08.913286    8716 proxy.go:119] fail to check proxy env: Error ip not in block
	I0210 11:02:08.915397    8716 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0210 11:02:08.915462    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100-m02 ).state
	I0210 11:02:08.921393    8716 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0210 11:02:08.921393    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100-m02 ).state
	I0210 11:02:10.859711    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:02:10.860158    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:02:10.860158    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 11:02:10.885515    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:02:10.885515    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:02:10.885515    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 11:02:13.273340    8716 main.go:141] libmachine: [stdout =====>] : 172.29.139.212
	
	I0210 11:02:13.273340    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:02:13.274577    8716 sshutil.go:53] new ssh client: &{IP:172.29.139.212 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-335100-m02\id_rsa Username:docker}
	I0210 11:02:13.298712    8716 main.go:141] libmachine: [stdout =====>] : 172.29.139.212
	
	I0210 11:02:13.298819    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:02:13.299186    8716 sshutil.go:53] new ssh client: &{IP:172.29.139.212 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-335100-m02\id_rsa Username:docker}
	I0210 11:02:13.368144    8716 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.4466995s)
	W0210 11:02:13.368226    8716 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0210 11:02:13.376237    8716 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0210 11:02:13.381907    8716 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.4663934s)
	W0210 11:02:13.381984    8716 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
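Exit status 127 means the shell never found the program: the probe reused the Windows binary name curl.exe for a command executed over SSH inside the Linux guest, where the binary is just curl. That appears to be what surfaces shortly after as the registry.k8s.io connectivity warning. A sketch of choosing the name by the OS the command will actually run on (curlBinary is a hypothetical helper, not minikube's API):

package main

import "fmt"

// curlBinary picks the curl executable name for the target OS. The probe
// above used the host's name ("curl.exe") inside a Linux guest, where
// only "curl" exists, so the check failed with status 127.
func curlBinary(targetGOOS string) string {
	if targetGOOS == "windows" {
		return "curl.exe"
	}
	return "curl"
}

func main() {
	fmt.Println(curlBinary("linux"))   // curl
	fmt.Println(curlBinary("windows")) // curl.exe
}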
	I0210 11:02:13.409246    8716 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0210 11:02:13.409246    8716 start.go:495] detecting cgroup driver to use...
	I0210 11:02:13.409246    8716 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0210 11:02:13.453127    8716 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	W0210 11:02:13.471111    8716 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0210 11:02:13.471261    8716 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0210 11:02:13.484197    8716 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0210 11:02:13.504321    8716 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0210 11:02:13.513799    8716 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0210 11:02:13.542123    8716 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0210 11:02:13.572213    8716 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0210 11:02:13.602594    8716 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0210 11:02:13.630093    8716 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0210 11:02:13.658122    8716 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0210 11:02:13.686231    8716 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0210 11:02:13.715695    8716 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0210 11:02:13.742863    8716 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0210 11:02:13.761467    8716 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0210 11:02:13.770510    8716 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0210 11:02:13.807411    8716 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0210 11:02:13.838963    8716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 11:02:14.021853    8716 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0210 11:02:14.056814    8716 start.go:495] detecting cgroup driver to use...
	I0210 11:02:14.065818    8716 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0210 11:02:14.099416    8716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0210 11:02:14.127804    8716 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0210 11:02:14.164300    8716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0210 11:02:14.195269    8716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0210 11:02:14.227292    8716 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0210 11:02:14.287413    8716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0210 11:02:14.310609    8716 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0210 11:02:14.354206    8716 ssh_runner.go:195] Run: which cri-dockerd
	I0210 11:02:14.368335    8716 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0210 11:02:14.384860    8716 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0210 11:02:14.424674    8716 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0210 11:02:14.622004    8716 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0210 11:02:14.810797    8716 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0210 11:02:14.810940    8716 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0210 11:02:14.850103    8716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 11:02:15.040191    8716 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0210 11:02:17.620616    8716 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5803953s)
	I0210 11:02:17.629553    8716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0210 11:02:17.661504    8716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0210 11:02:17.692500    8716 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0210 11:02:17.878833    8716 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0210 11:02:18.070609    8716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 11:02:18.270940    8716 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0210 11:02:18.307647    8716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0210 11:02:18.342442    8716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 11:02:18.530245    8716 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0210 11:02:18.631776    8716 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0210 11:02:18.640773    8716 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0210 11:02:18.649619    8716 start.go:563] Will wait 60s for crictl version
	I0210 11:02:18.658384    8716 ssh_runner.go:195] Run: which crictl
	I0210 11:02:18.672692    8716 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0210 11:02:18.735821    8716 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.4.0
	RuntimeApiVersion:  v1
	I0210 11:02:18.742825    8716 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0210 11:02:18.786756    8716 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0210 11:02:18.820953    8716 out.go:235] * Preparing Kubernetes v1.32.1 on Docker 27.4.0 ...
	I0210 11:02:18.824722    8716 out.go:177]   - env NO_PROXY=172.29.136.99
	I0210 11:02:18.826709    8716 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0210 11:02:18.829696    8716 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0210 11:02:18.830696    8716 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0210 11:02:18.830696    8716 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0210 11:02:18.830696    8716 ip.go:211] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:8b:81:3d Flags:up|broadcast|multicast|running}
	I0210 11:02:18.832698    8716 ip.go:214] interface addr: fe80::b840:883f:e0df:bdfd/64
	I0210 11:02:18.832698    8716 ip.go:214] interface addr: 172.29.128.1/20
	I0210 11:02:18.841020    8716 ssh_runner.go:195] Run: grep 172.29.128.1	host.minikube.internal$ /etc/hosts
	I0210 11:02:18.847456    8716 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.29.128.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
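Both host entries in this log (host.minikube.internal here, control-plane.minikube.internal later) are maintained with the same shell pattern: filter out any existing line for the name, append the fresh mapping, and copy a temp file over /etc/hosts under sudo. The same upsert as a sketch, assuming direct file access and an illustrative path:

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHost drops any existing "<ip>\t<name>" line for name and appends
// a fresh mapping, mirroring the grep -v / echo pipeline in the log.
func upsertHost(hostsPath, ip, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := upsertHost("/tmp/hosts", "172.29.128.1", "host.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}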
	I0210 11:02:18.869084    8716 mustload.go:65] Loading cluster: ha-335100
	I0210 11:02:18.869721    8716 config.go:182] Loaded profile config "ha-335100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0210 11:02:18.870511    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100 ).state
	I0210 11:02:20.797037    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:02:20.797037    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:02:20.797037    8716 host.go:66] Checking if "ha-335100" exists ...
	I0210 11:02:20.797923    8716 certs.go:68] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100 for IP: 172.29.139.212
	I0210 11:02:20.797997    8716 certs.go:194] generating shared ca certs ...
	I0210 11:02:20.797997    8716 certs.go:226] acquiring lock for ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 11:02:20.798259    8716 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0210 11:02:20.798875    8716 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0210 11:02:20.798875    8716 certs.go:256] generating profile certs ...
	I0210 11:02:20.799494    8716 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\client.key
	I0210 11:02:20.799593    8716 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\apiserver.key.b5605ac5
	I0210 11:02:20.799676    8716 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\apiserver.crt.b5605ac5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.29.136.99 172.29.139.212 172.29.143.254]
	I0210 11:02:20.958401    8716 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\apiserver.crt.b5605ac5 ...
	I0210 11:02:20.958401    8716 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\apiserver.crt.b5605ac5: {Name:mk82cfde7602081e3f5ad03699e241ce1d0a9ea4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 11:02:20.959541    8716 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\apiserver.key.b5605ac5 ...
	I0210 11:02:20.960550    8716 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\apiserver.key.b5605ac5: {Name:mk13bc1ebe7613f673c88f9bec73e4d38c972417 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 11:02:20.960789    8716 certs.go:381] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\apiserver.crt.b5605ac5 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\apiserver.crt
	I0210 11:02:20.977980    8716 certs.go:385] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\apiserver.key.b5605ac5 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\apiserver.key
	I0210 11:02:20.978663    8716 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\proxy-client.key
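The regenerated apiserver certificate is the interesting one here: with a second control plane joining, it must cover every address a client might dial, so the SAN list above includes the in-cluster service IP (10.96.0.1), loopback, both node IPs, and the kube-vip VIP (172.29.143.254). A self-contained sketch of issuing a certificate with IP SANs under a throwaway CA (names, key type, and lifetimes are illustrative):

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA key and template.
	caKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	ca := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	// Leaf template carrying the IP SANs seen in the log.
	leafKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	leaf := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("172.29.136.99"),
			net.ParseIP("172.29.139.212"), net.ParseIP("172.29.143.254"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, leaf, ca, &leafKey.PublicKey, caKey)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}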
	I0210 11:02:20.978663    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0210 11:02:20.978663    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0210 11:02:20.978663    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0210 11:02:20.979210    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0210 11:02:20.979386    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0210 11:02:20.979483    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0210 11:02:20.979758    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0210 11:02:20.980387    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0210 11:02:20.980521    8716 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\11764.pem (1338 bytes)
	W0210 11:02:20.980521    8716 certs.go:480] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\11764_empty.pem, impossibly tiny 0 bytes
	I0210 11:02:20.981041    8716 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0210 11:02:20.981294    8716 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0210 11:02:20.981484    8716 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0210 11:02:20.981707    8716 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0210 11:02:20.982048    8716 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\117642.pem (1708 bytes)
	I0210 11:02:20.982233    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0210 11:02:20.982233    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\11764.pem -> /usr/share/ca-certificates/11764.pem
	I0210 11:02:20.982233    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\117642.pem -> /usr/share/ca-certificates/117642.pem
	I0210 11:02:20.982233    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100 ).state
	I0210 11:02:22.925555    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:02:22.925555    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:02:22.925636    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100 ).networkadapters[0]).ipaddresses[0]
	I0210 11:02:25.302582    8716 main.go:141] libmachine: [stdout =====>] : 172.29.136.99
	
	I0210 11:02:25.302582    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:02:25.302582    8716 sshutil.go:53] new ssh client: &{IP:172.29.136.99 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-335100\id_rsa Username:docker}
	I0210 11:02:25.398014    8716 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0210 11:02:25.406219    8716 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0210 11:02:25.433854    8716 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0210 11:02:25.440242    8716 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0210 11:02:25.467133    8716 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0210 11:02:25.474577    8716 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0210 11:02:25.501232    8716 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0210 11:02:25.508498    8716 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0210 11:02:25.537855    8716 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0210 11:02:25.550526    8716 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0210 11:02:25.580257    8716 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0210 11:02:25.587090    8716 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0210 11:02:25.608202    8716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0210 11:02:25.655072    8716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0210 11:02:25.701471    8716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0210 11:02:25.747891    8716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0210 11:02:25.792087    8716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0210 11:02:25.837433    8716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0210 11:02:25.884923    8716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0210 11:02:25.929037    8716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0210 11:02:25.973387    8716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0210 11:02:26.017365    8716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\11764.pem --> /usr/share/ca-certificates/11764.pem (1338 bytes)
	I0210 11:02:26.061423    8716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\117642.pem --> /usr/share/ca-certificates/117642.pem (1708 bytes)
	I0210 11:02:26.106403    8716 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0210 11:02:26.137699    8716 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0210 11:02:26.173512    8716 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0210 11:02:26.204250    8716 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0210 11:02:26.233742    8716 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0210 11:02:26.263263    8716 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0210 11:02:26.297428    8716 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0210 11:02:26.336228    8716 ssh_runner.go:195] Run: openssl version
	I0210 11:02:26.352088    8716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11764.pem && ln -fs /usr/share/ca-certificates/11764.pem /etc/ssl/certs/11764.pem"
	I0210 11:02:26.380073    8716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11764.pem
	I0210 11:02:26.387017    8716 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb 10 10:40 /usr/share/ca-certificates/11764.pem
	I0210 11:02:26.394490    8716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11764.pem
	I0210 11:02:26.415656    8716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11764.pem /etc/ssl/certs/51391683.0"
	I0210 11:02:26.444704    8716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/117642.pem && ln -fs /usr/share/ca-certificates/117642.pem /etc/ssl/certs/117642.pem"
	I0210 11:02:26.471507    8716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/117642.pem
	I0210 11:02:26.479380    8716 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb 10 10:40 /usr/share/ca-certificates/117642.pem
	I0210 11:02:26.487756    8716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/117642.pem
	I0210 11:02:26.504917    8716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/117642.pem /etc/ssl/certs/3ec20f2e.0"
	I0210 11:02:26.534028    8716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0210 11:02:26.560253    8716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0210 11:02:26.567133    8716 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb 10 10:24 /usr/share/ca-certificates/minikubeCA.pem
	I0210 11:02:26.576071    8716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0210 11:02:26.592019    8716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
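Each trusted PEM is hashed with `openssl x509 -hash -noout` and exposed as /etc/ssl/certs/<hash>.0 (51391683.0, 3ec20f2e.0, and b5213941.0 above); OpenSSL-style clients locate CA certificates by that subject-hash filename rather than scanning the directory. A sketch of the hash-and-link step, shelling out to openssl just as the log does (paths are illustrative, and openssl is assumed on PATH):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash computes the OpenSSL subject hash of certPath and
// symlinks <hash>.0 in certsDir back to it, like the ln -fs steps above.
func linkBySubjectHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	os.Remove(link) // emulate ln -f: replace any stale link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/tmp/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}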
	I0210 11:02:26.618436    8716 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0210 11:02:26.624985    8716 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0210 11:02:26.625556    8716 kubeadm.go:934] updating node {m02 172.29.139.212 8443 v1.32.1 docker true true} ...
	I0210 11:02:26.625703    8716 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-335100-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.29.139.212
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:ha-335100 Namespace:default APIServerHAVIP:172.29.143.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0210 11:02:26.625740    8716 kube-vip.go:115] generating kube-vip config ...
	I0210 11:02:26.633935    8716 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0210 11:02:26.660051    8716 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0210 11:02:26.660051    8716 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.29.143.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.9
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
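The manifest pins kube-vip's leader-election timings to vip_leaseduration=5, vip_renewdeadline=3, and vip_retryperiod=1 (seconds). Assuming kube-vip follows the usual client-go leader-election rules (stated here as an assumption, not confirmed by this log), those values must keep the ordering leaseDuration > renewDeadline > retryPeriod so the leader can renew before its lease expires. A trivial check:

package main

import (
	"fmt"
	"time"
)

// validLeaderElection enforces the ordering client-go-style leader
// election expects between the three timings (an assumption about
// kube-vip's backend, not something this log verifies).
func validLeaderElection(lease, renew, retry time.Duration) bool {
	return lease > renew && renew > retry
}

func main() {
	// Values from the kube-vip manifest above.
	fmt.Println(validLeaderElection(5*time.Second, 3*time.Second, 1*time.Second)) // true
}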
	I0210 11:02:26.669822    8716 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0210 11:02:26.684337    8716 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.32.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.32.1': No such file or directory
	
	Initiating transfer...
	I0210 11:02:26.693291    8716 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.32.1
	I0210 11:02:26.714179    8716 download.go:108] Downloading: https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubelet.sha256 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.32.1/kubelet
	I0210 11:02:26.714179    8716 download.go:108] Downloading: https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubeadm.sha256 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.32.1/kubeadm
	I0210 11:02:26.714179    8716 download.go:108] Downloading: https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubectl.sha256 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.32.1/kubectl
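Each download URL carries `?checksum=file:<url>.sha256`, i.e. the fetched binary is checked against a detached SHA-256 digest before it lands in the cache. A sketch of that verification step, assuming the binary and its .sha256 file (holding either a bare hash or a "<hash> <name>" line) are already on disk:

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"os"
	"strings"
)

// verifySHA256 compares blobPath against the hex digest stored in
// sumPath, the role the kubelet.sha256 / kubeadm.sha256 files play above.
func verifySHA256(blobPath, sumPath string) error {
	blob, err := os.ReadFile(blobPath)
	if err != nil {
		return err
	}
	sumFile, err := os.ReadFile(sumPath)
	if err != nil {
		return err
	}
	fields := strings.Fields(string(sumFile))
	if len(fields) == 0 {
		return fmt.Errorf("%s: empty checksum file", sumPath)
	}
	digest := sha256.Sum256(blob)
	if hex.EncodeToString(digest[:]) != fields[0] {
		return fmt.Errorf("checksum mismatch for %s", blobPath)
	}
	return nil
}

func main() {
	if err := verifySHA256("kubelet", "kubelet.sha256"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}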
	I0210 11:02:27.770020    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.32.1/kubectl -> /var/lib/minikube/binaries/v1.32.1/kubectl
	I0210 11:02:27.780672    8716 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.1/kubectl
	I0210 11:02:27.787910    8716 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.1/kubectl': No such file or directory
	I0210 11:02:27.787910    8716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.32.1/kubectl --> /var/lib/minikube/binaries/v1.32.1/kubectl (57323672 bytes)
	I0210 11:02:27.872519    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.32.1/kubeadm -> /var/lib/minikube/binaries/v1.32.1/kubeadm
	I0210 11:02:27.880459    8716 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.1/kubeadm
	I0210 11:02:27.917581    8716 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.1/kubeadm': No such file or directory
	I0210 11:02:27.917581    8716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.32.1/kubeadm --> /var/lib/minikube/binaries/v1.32.1/kubeadm (70942872 bytes)
	I0210 11:02:27.967579    8716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0210 11:02:28.036199    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.32.1/kubelet -> /var/lib/minikube/binaries/v1.32.1/kubelet
	I0210 11:02:28.043257    8716 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.1/kubelet
	I0210 11:02:28.066642    8716 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.1/kubelet': No such file or directory
	I0210 11:02:28.067749    8716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.32.1/kubelet --> /var/lib/minikube/binaries/v1.32.1/kubelet (77398276 bytes)
	I0210 11:02:28.966573    8716 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0210 11:02:28.984576    8716 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0210 11:02:29.016695    8716 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0210 11:02:29.049043    8716 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0210 11:02:29.098199    8716 ssh_runner.go:195] Run: grep 172.29.143.254	control-plane.minikube.internal$ /etc/hosts
	I0210 11:02:29.104801    8716 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.29.143.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0210 11:02:29.137742    8716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 11:02:29.343297    8716 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0210 11:02:29.370828    8716 host.go:66] Checking if "ha-335100" exists ...
	I0210 11:02:29.371612    8716 start.go:317] joinCluster: &{Name:ha-335100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:ha-335100 Namespace:default APIServerHAVIP:172.29.143.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.29.136.99 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.29.139.212 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 11:02:29.371612    8716 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0210 11:02:29.371612    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100 ).state
	I0210 11:02:31.319574    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:02:31.319574    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:02:31.320328    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100 ).networkadapters[0]).ipaddresses[0]
	I0210 11:02:33.707382    8716 main.go:141] libmachine: [stdout =====>] : 172.29.136.99
	
	I0210 11:02:33.708241    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:02:33.708637    8716 sshutil.go:53] new ssh client: &{IP:172.29.136.99 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-335100\id_rsa Username:docker}
	I0210 11:02:34.143761    8716 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm token create --print-join-command --ttl=0": (4.7720937s)
	I0210 11:02:34.143761    8716 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:172.29.139.212 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0210 11:02:34.143761    8716 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token nn2t6d.ycdpzx2fx9wduepx --discovery-token-ca-cert-hash sha256:1b8cb99781302ec691f777094c36ac43bdecc74e7ca118c5fbf40794f47c93c3 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-335100-m02 --control-plane --apiserver-advertise-address=172.29.139.212 --apiserver-bind-port=8443"
	I0210 11:03:12.989922    8716 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token nn2t6d.ycdpzx2fx9wduepx --discovery-token-ca-cert-hash sha256:1b8cb99781302ec691f777094c36ac43bdecc74e7ca118c5fbf40794f47c93c3 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-335100-m02 --control-plane --apiserver-advertise-address=172.29.139.212 --apiserver-bind-port=8443": (38.8457143s)
	I0210 11:03:12.989979    8716 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0210 11:03:13.797742    8716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-335100-m02 minikube.k8s.io/updated_at=2025_02_10T11_03_13_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=a597502568cd649748018b4cfeb698a4b8b36160 minikube.k8s.io/name=ha-335100 minikube.k8s.io/primary=false
	I0210 11:03:14.419316    8716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-335100-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0210 11:03:14.674893    8716 start.go:319] duration metric: took 45.3027603s to joinCluster
	I0210 11:03:14.675091    8716 start.go:235] Will wait 6m0s for node &{Name:m02 IP:172.29.139.212 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0210 11:03:14.675640    8716 config.go:182] Loaded profile config "ha-335100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0210 11:03:14.679021    8716 out.go:177] * Verifying Kubernetes components...
	I0210 11:03:14.689823    8716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 11:03:15.042793    8716 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0210 11:03:15.081446    8716 loader.go:402] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0210 11:03:15.082258    8716 kapi.go:59] client config for ha-335100: &rest.Config{Host:"https://172.29.143.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\ha-335100\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\ha-335100\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2a2d300), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0210 11:03:15.082470    8716 kubeadm.go:483] Overriding stale ClientConfig host https://172.29.143.254:8443 with https://172.29.136.99:8443
	I0210 11:03:15.083662    8716 node_ready.go:35] waiting up to 6m0s for node "ha-335100-m02" to be "Ready" ...
	I0210 11:03:15.083963    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:03:15.083963    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:15.084014    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:15.084014    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:15.107403    8716 round_trippers.go:581] Response Status: 200 OK in 23 milliseconds
	I0210 11:03:15.584470    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:03:15.584470    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:15.584541    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:15.584541    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:15.589089    8716 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 11:03:16.084871    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:03:16.084871    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:16.084871    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:16.084871    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:16.090688    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:03:16.584610    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:03:16.584610    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:16.584610    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:16.584610    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:16.590672    8716 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0210 11:03:17.085070    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:03:17.085070    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:17.085143    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:17.085143    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:17.089946    8716 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 11:03:17.090174    8716 node_ready.go:53] node "ha-335100-m02" has status "Ready":"False"
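From here the client simply polls GET /api/v1/nodes/ha-335100-m02 on a roughly 500 ms cadence, checking the node's Ready condition each time, until the 6m0s budget declared above runs out. The loop shape as a sketch, with a stand-in check function in place of a real API call:

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitNodeReady polls check() every interval until it reports true or the
// timeout elapses, matching the 500ms/6m loop visible in the log.
func waitNodeReady(check func() (bool, error), interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		ready, err := check()
		if err != nil {
			return err
		}
		if ready {
			return nil
		}
		time.Sleep(interval)
	}
	return errors.New("timed out waiting for node to be Ready")
}

func main() {
	calls := 0
	err := waitNodeReady(func() (bool, error) {
		calls++
		return calls >= 3, nil // pretend the node turns Ready on the third poll
	}, 500*time.Millisecond, 6*time.Minute)
	fmt.Println(err, "after", calls, "polls")
}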
	I0210 11:03:17.584624    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:03:17.584624    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:17.584624    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:17.584624    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:17.589080    8716 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 11:03:18.084201    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:03:18.084201    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:18.084201    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:18.084201    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:18.089811    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:03:18.584184    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:03:18.584184    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:18.584184    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:18.584184    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:18.590444    8716 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0210 11:03:19.083886    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:03:19.083886    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:19.083886    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:19.083886    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:19.099984    8716 round_trippers.go:581] Response Status: 200 OK in 16 milliseconds
	I0210 11:03:19.100297    8716 node_ready.go:53] node "ha-335100-m02" has status "Ready":"False"
	I0210 11:03:19.584637    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:03:19.584801    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:19.584801    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:19.584801    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:19.711918    8716 round_trippers.go:581] Response Status: 200 OK in 127 milliseconds
	I0210 11:03:20.085313    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:03:20.085361    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:20.085395    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:20.085395    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:20.089093    8716 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 11:03:20.586032    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:03:20.586032    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:20.586032    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:20.586032    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:20.590840    8716 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 11:03:21.085138    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:03:21.085624    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:21.085624    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:21.085624    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:21.090408    8716 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 11:03:21.585525    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:03:21.585525    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:21.585525    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:21.585525    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:21.656428    8716 round_trippers.go:581] Response Status: 200 OK in 70 milliseconds
	I0210 11:03:21.656880    8716 node_ready.go:53] node "ha-335100-m02" has status "Ready":"False"
	I0210 11:03:22.084469    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:03:22.084469    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:22.084469    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:22.084469    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:22.089095    8716 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 11:03:22.585323    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:03:22.585323    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:22.585323    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:22.585323    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:22.590518    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:03:23.084284    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:03:23.084284    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:23.084284    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:23.084284    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:23.094212    8716 round_trippers.go:581] Response Status: 200 OK in 9 milliseconds
	I0210 11:03:23.584852    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:03:23.584852    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:23.584852    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:23.584852    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:23.590435    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:03:24.084090    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:03:24.084090    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:24.084090    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:24.084090    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:24.089425    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:03:24.089737    8716 node_ready.go:53] node "ha-335100-m02" has status "Ready":"False"
	I0210 11:03:24.584379    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:03:24.584379    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:24.584379    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:24.584379    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:24.590303    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:03:25.084837    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:03:25.084837    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:25.084837    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:25.084837    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:25.089869    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:03:25.584224    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:03:25.584224    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:25.584224    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:25.584224    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:25.589595    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:03:26.084769    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:03:26.084769    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:26.084769    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:26.084769    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:26.090444    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:03:26.090873    8716 node_ready.go:53] node "ha-335100-m02" has status "Ready":"False"
	I0210 11:03:26.584841    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:03:26.584841    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:26.584841    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:26.584841    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:26.590426    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:03:27.084698    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:03:27.084767    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:27.084767    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:27.084835    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:27.092015    8716 round_trippers.go:581] Response Status: 200 OK in 7 milliseconds
	I0210 11:03:27.584463    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:03:27.584463    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:27.584463    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:27.584463    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:27.589546    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:03:28.085081    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:03:28.085081    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:28.085151    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:28.085151    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:28.091006    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:03:28.091725    8716 node_ready.go:53] node "ha-335100-m02" has status "Ready":"False"
	I0210 11:03:28.584923    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:03:28.585165    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:28.585165    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:28.585165    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:28.590222    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:03:29.084264    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:03:29.084264    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:29.084264    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:29.084264    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:29.090629    8716 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0210 11:03:29.585143    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:03:29.585143    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:29.585143    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:29.585143    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:29.590871    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:03:30.084150    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:03:30.084150    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:30.084150    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:30.084150    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:30.089960    8716 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 11:03:30.584472    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:03:30.584472    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:30.584472    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:30.584472    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:30.590820    8716 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0210 11:03:30.591569    8716 node_ready.go:53] node "ha-335100-m02" has status "Ready":"False"
	I0210 11:03:31.084720    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:03:31.084720    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:31.084720    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:31.084720    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:31.088998    8716 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 11:03:31.584830    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:03:31.584830    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:31.584830    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:31.584830    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:31.591035    8716 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0210 11:03:32.084956    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:03:32.084956    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:32.084956    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:32.084956    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:32.091516    8716 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0210 11:03:32.584984    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:03:32.585060    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:32.585132    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:32.585132    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:32.592617    8716 round_trippers.go:581] Response Status: 200 OK in 7 milliseconds
	I0210 11:03:32.592617    8716 node_ready.go:53] node "ha-335100-m02" has status "Ready":"False"
	I0210 11:03:33.084052    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:03:33.084052    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:33.084052    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:33.084052    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:33.089228    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:03:33.584438    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:03:33.584511    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:33.584511    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:33.584511    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:33.591612    8716 round_trippers.go:581] Response Status: 200 OK in 7 milliseconds
	I0210 11:03:34.084640    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:03:34.084640    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:34.084640    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:34.084640    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:34.090474    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:03:34.584403    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:03:34.584403    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:34.584403    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:34.584403    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:34.589402    8716 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 11:03:35.084798    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:03:35.084895    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:35.084895    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:35.084895    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:35.090396    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:03:35.090869    8716 node_ready.go:53] node "ha-335100-m02" has status "Ready":"False"
	I0210 11:03:35.584406    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:03:35.584406    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:35.584596    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:35.584596    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:35.589677    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:03:36.085217    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:03:36.085217    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:36.085217    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:36.085217    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:36.090998    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:03:36.584221    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:03:36.584221    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:36.584221    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:36.584221    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:36.589519    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:03:37.085168    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:03:37.085168    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:37.085168    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:37.085168    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:37.090515    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:03:37.091113    8716 node_ready.go:53] node "ha-335100-m02" has status "Ready":"False"
	I0210 11:03:37.585003    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:03:37.585003    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:37.585003    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:37.585003    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:37.589995    8716 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 11:03:38.084726    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:03:38.084801    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:38.084801    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:38.084801    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:38.089075    8716 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 11:03:38.584461    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:03:38.584461    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:38.584461    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:38.584461    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:38.590574    8716 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0210 11:03:39.084801    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:03:39.084801    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:39.084801    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:39.084801    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:39.090249    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:03:39.090580    8716 node_ready.go:49] node "ha-335100-m02" has status "Ready":"True"
	I0210 11:03:39.090580    8716 node_ready.go:38] duration metric: took 24.0065915s for node "ha-335100-m02" to be "Ready" ...
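The half-second polling loop above is the standard "wait for node Ready" pattern: the same GET against /api/v1/nodes/ha-335100-m02 repeats until status.conditions reports Ready=True. A minimal client-go sketch of that pattern (illustrative only, not minikube's actual implementation; the helper name waitNodeReady is hypothetical):

    // waitNodeReady polls a node every 500ms until its Ready condition is True
    // or the context expires, mirroring the loop visible in the log above.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
    	tick := time.NewTicker(500 * time.Millisecond)
    	defer tick.Stop()
    	for {
    		select {
    		case <-ctx.Done():
    			return ctx.Err()
    		case <-tick.C:
    			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
    			if err != nil {
    				return err
    			}
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
    					return nil
    				}
    			}
    		}
    	}
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
    	defer cancel()
    	if err := waitNodeReady(ctx, cs, "ha-335100-m02"); err != nil {
    		panic(err)
    	}
    	fmt.Println("node Ready")
    }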
	I0210 11:03:39.090580    8716 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0210 11:03:39.091190    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods
	I0210 11:03:39.091190    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:39.091190    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:39.091190    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:39.096447    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:03:39.098700    8716 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-gc5gf" in "kube-system" namespace to be "Ready" ...
	I0210 11:03:39.098818    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-gc5gf
	I0210 11:03:39.098818    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:39.098875    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:39.098875    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:39.103088    8716 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 11:03:39.103088    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100
	I0210 11:03:39.103088    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:39.103088    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:39.103088    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:39.108159    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:03:39.108604    8716 pod_ready.go:93] pod "coredns-668d6bf9bc-gc5gf" in "kube-system" namespace has status "Ready":"True"
	I0210 11:03:39.108604    8716 pod_ready.go:82] duration metric: took 9.9038ms for pod "coredns-668d6bf9bc-gc5gf" in "kube-system" namespace to be "Ready" ...
	I0210 11:03:39.108676    8716 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-s44gp" in "kube-system" namespace to be "Ready" ...
	I0210 11:03:39.108749    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-s44gp
	I0210 11:03:39.108749    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:39.108749    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:39.108749    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:39.112585    8716 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 11:03:39.113891    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100
	I0210 11:03:39.113891    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:39.113891    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:39.113891    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:39.120011    8716 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0210 11:03:39.120967    8716 pod_ready.go:93] pod "coredns-668d6bf9bc-s44gp" in "kube-system" namespace has status "Ready":"True"
	I0210 11:03:39.120967    8716 pod_ready.go:82] duration metric: took 12.2913ms for pod "coredns-668d6bf9bc-s44gp" in "kube-system" namespace to be "Ready" ...
	I0210 11:03:39.120967    8716 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-335100" in "kube-system" namespace to be "Ready" ...
	I0210 11:03:39.120967    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods/etcd-ha-335100
	I0210 11:03:39.120967    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:39.120967    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:39.120967    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:39.125798    8716 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 11:03:39.126485    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100
	I0210 11:03:39.126485    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:39.126485    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:39.126485    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:39.130767    8716 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 11:03:39.130767    8716 pod_ready.go:93] pod "etcd-ha-335100" in "kube-system" namespace has status "Ready":"True"
	I0210 11:03:39.130767    8716 pod_ready.go:82] duration metric: took 9.7989ms for pod "etcd-ha-335100" in "kube-system" namespace to be "Ready" ...
	I0210 11:03:39.130767    8716 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-335100-m02" in "kube-system" namespace to be "Ready" ...
	I0210 11:03:39.130767    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods/etcd-ha-335100-m02
	I0210 11:03:39.130767    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:39.130767    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:39.130767    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:39.135057    8716 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 11:03:39.135372    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:03:39.135372    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:39.135433    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:39.135433    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:39.137983    8716 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0210 11:03:39.139338    8716 pod_ready.go:93] pod "etcd-ha-335100-m02" in "kube-system" namespace has status "Ready":"True"
	I0210 11:03:39.139396    8716 pod_ready.go:82] duration metric: took 8.6298ms for pod "etcd-ha-335100-m02" in "kube-system" namespace to be "Ready" ...
	I0210 11:03:39.139458    8716 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-335100" in "kube-system" namespace to be "Ready" ...
	I0210 11:03:39.285315    8716 request.go:661] Waited for 145.8562ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-335100
	I0210 11:03:39.285683    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-335100
	I0210 11:03:39.285683    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:39.285683    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:39.285683    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:39.290193    8716 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 11:03:39.484978    8716 request.go:661] Waited for 194.396ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.136.99:8443/api/v1/nodes/ha-335100
	I0210 11:03:39.485588    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100
	I0210 11:03:39.485588    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:39.485588    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:39.485588    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:39.491637    8716 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0210 11:03:39.491928    8716 pod_ready.go:93] pod "kube-apiserver-ha-335100" in "kube-system" namespace has status "Ready":"True"
	I0210 11:03:39.491928    8716 pod_ready.go:82] duration metric: took 352.4668ms for pod "kube-apiserver-ha-335100" in "kube-system" namespace to be "Ready" ...
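The "Waited for ... due to client-side throttling, not priority and fairness" lines come from client-go's client-side token-bucket rate limiter (rest.Config QPS/Burst), not from server-side API Priority and Fairness. A small sketch of that mechanism, assuming k8s.io/client-go/util/flowcontrol and the historical defaults of QPS=5, Burst=10:

    // Requests beyond the burst budget block in Accept() until a token
    // frees up, producing the ~150-200ms waits seen in the log above.
    package main

    import (
    	"fmt"
    	"time"

    	"k8s.io/client-go/util/flowcontrol"
    )

    func main() {
    	limiter := flowcontrol.NewTokenBucketRateLimiter(5, 10) // qps=5, burst=10
    	for i := 0; i < 15; i++ {
    		start := time.Now()
    		limiter.Accept() // blocks until a token is available
    		if wait := time.Since(start); wait > time.Millisecond {
    			fmt.Printf("request %d waited %v due to client-side throttling\n", i, wait)
    		}
    	}
    }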
	I0210 11:03:39.491928    8716 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-335100-m02" in "kube-system" namespace to be "Ready" ...
	I0210 11:03:39.685597    8716 request.go:661] Waited for 193.5123ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-335100-m02
	I0210 11:03:39.685928    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-335100-m02
	I0210 11:03:39.685928    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:39.685928    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:39.685928    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:39.690280    8716 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 11:03:39.885628    8716 request.go:661] Waited for 193.9281ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:03:39.885928    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:03:39.885928    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:39.885928    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:39.885928    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:39.890977    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:03:39.891582    8716 pod_ready.go:93] pod "kube-apiserver-ha-335100-m02" in "kube-system" namespace has status "Ready":"True"
	I0210 11:03:39.891680    8716 pod_ready.go:82] duration metric: took 399.7465ms for pod "kube-apiserver-ha-335100-m02" in "kube-system" namespace to be "Ready" ...
	I0210 11:03:39.891680    8716 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-335100" in "kube-system" namespace to be "Ready" ...
	I0210 11:03:40.085293    8716 request.go:661] Waited for 193.6114ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-335100
	I0210 11:03:40.085636    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-335100
	I0210 11:03:40.085636    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:40.085636    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:40.085636    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:40.090382    8716 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 11:03:40.285761    8716 request.go:661] Waited for 194.7964ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.136.99:8443/api/v1/nodes/ha-335100
	I0210 11:03:40.285761    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100
	I0210 11:03:40.285761    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:40.286077    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:40.286077    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:40.290385    8716 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 11:03:40.290955    8716 pod_ready.go:93] pod "kube-controller-manager-ha-335100" in "kube-system" namespace has status "Ready":"True"
	I0210 11:03:40.290955    8716 pod_ready.go:82] duration metric: took 399.2711ms for pod "kube-controller-manager-ha-335100" in "kube-system" namespace to be "Ready" ...
	I0210 11:03:40.291196    8716 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-335100-m02" in "kube-system" namespace to be "Ready" ...
	I0210 11:03:40.485208    8716 request.go:661] Waited for 193.9559ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-335100-m02
	I0210 11:03:40.485607    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-335100-m02
	I0210 11:03:40.485607    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:40.485607    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:40.485607    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:40.490181    8716 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 11:03:40.685664    8716 request.go:661] Waited for 194.9171ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:03:40.685664    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:03:40.685664    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:40.685664    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:40.685664    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:40.691524    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:03:40.692116    8716 pod_ready.go:93] pod "kube-controller-manager-ha-335100-m02" in "kube-system" namespace has status "Ready":"True"
	I0210 11:03:40.692116    8716 pod_ready.go:82] duration metric: took 400.9151ms for pod "kube-controller-manager-ha-335100-m02" in "kube-system" namespace to be "Ready" ...
	I0210 11:03:40.692116    8716 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-b5xnq" in "kube-system" namespace to be "Ready" ...
	I0210 11:03:40.886293    8716 request.go:661] Waited for 194.1751ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods/kube-proxy-b5xnq
	I0210 11:03:40.886549    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods/kube-proxy-b5xnq
	I0210 11:03:40.886549    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:40.886549    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:40.886549    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:40.891986    8716 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 11:03:41.085666    8716 request.go:661] Waited for 193.3132ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:03:41.085946    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:03:41.085946    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:41.085946    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:41.085946    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:41.092029    8716 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0210 11:03:41.092660    8716 pod_ready.go:93] pod "kube-proxy-b5xnq" in "kube-system" namespace has status "Ready":"True"
	I0210 11:03:41.092719    8716 pod_ready.go:82] duration metric: took 400.5989ms for pod "kube-proxy-b5xnq" in "kube-system" namespace to be "Ready" ...
	I0210 11:03:41.092719    8716 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-xzs7w" in "kube-system" namespace to be "Ready" ...
	I0210 11:03:41.285733    8716 request.go:661] Waited for 192.8939ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xzs7w
	I0210 11:03:41.286074    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xzs7w
	I0210 11:03:41.286074    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:41.286074    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:41.286074    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:41.291770    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:03:41.484880    8716 request.go:661] Waited for 191.9358ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.136.99:8443/api/v1/nodes/ha-335100
	I0210 11:03:41.484880    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100
	I0210 11:03:41.484880    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:41.484880    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:41.484880    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:41.490213    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:03:41.490658    8716 pod_ready.go:93] pod "kube-proxy-xzs7w" in "kube-system" namespace has status "Ready":"True"
	I0210 11:03:41.490735    8716 pod_ready.go:82] duration metric: took 398.0109ms for pod "kube-proxy-xzs7w" in "kube-system" namespace to be "Ready" ...
	I0210 11:03:41.490735    8716 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-335100" in "kube-system" namespace to be "Ready" ...
	I0210 11:03:41.685605    8716 request.go:661] Waited for 194.7552ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-335100
	I0210 11:03:41.685605    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-335100
	I0210 11:03:41.685605    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:41.685605    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:41.685605    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:41.691383    8716 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 11:03:41.885005    8716 request.go:661] Waited for 193.1647ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.136.99:8443/api/v1/nodes/ha-335100
	I0210 11:03:41.885501    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100
	I0210 11:03:41.885584    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:41.885601    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:41.885601    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:41.889779    8716 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 11:03:41.889779    8716 pod_ready.go:93] pod "kube-scheduler-ha-335100" in "kube-system" namespace has status "Ready":"True"
	I0210 11:03:41.889779    8716 pod_ready.go:82] duration metric: took 399.0394ms for pod "kube-scheduler-ha-335100" in "kube-system" namespace to be "Ready" ...
	I0210 11:03:41.889779    8716 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-335100-m02" in "kube-system" namespace to be "Ready" ...
	I0210 11:03:42.085552    8716 request.go:661] Waited for 195.7716ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-335100-m02
	I0210 11:03:42.085552    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-335100-m02
	I0210 11:03:42.085552    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:42.085552    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:42.086225    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:42.093061    8716 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0210 11:03:42.285457    8716 request.go:661] Waited for 191.3361ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:03:42.285457    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:03:42.285872    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:42.285872    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:42.285872    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:42.291332    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:03:42.291414    8716 pod_ready.go:93] pod "kube-scheduler-ha-335100-m02" in "kube-system" namespace has status "Ready":"True"
	I0210 11:03:42.291414    8716 pod_ready.go:82] duration metric: took 401.6307ms for pod "kube-scheduler-ha-335100-m02" in "kube-system" namespace to be "Ready" ...
	I0210 11:03:42.291414    8716 pod_ready.go:39] duration metric: took 3.2007969s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0210 11:03:42.291414    8716 api_server.go:52] waiting for apiserver process to appear ...
	I0210 11:03:42.301418    8716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:03:42.331953    8716 api_server.go:72] duration metric: took 27.6565122s to wait for apiserver process to appear ...
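The "apiserver process" wait boils down to running pgrep inside the VM and treating a zero exit status as "process found". A hedged sketch of the equivalent call (meant to run inside the VM; the SSH transport used by ssh_runner is elided):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// pgrep -x (exact match) -n (newest) -f (match the full command line),
    	// exactly as logged above; exit status 0 means a match exists.
    	err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
    	fmt.Println("kube-apiserver running:", err == nil)
    }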
	I0210 11:03:42.332032    8716 api_server.go:88] waiting for apiserver healthz status ...
	I0210 11:03:42.332072    8716 api_server.go:253] Checking apiserver healthz at https://172.29.136.99:8443/healthz ...
	I0210 11:03:42.347625    8716 api_server.go:279] https://172.29.136.99:8443/healthz returned 200:
	ok
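The healthz wait is a plain HTTPS GET against the apiserver's /healthz endpoint that succeeds once the body reads "ok". A self-contained sketch (InsecureSkipVerify is for illustration only; a real client would load the cluster CA instead):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 10 * time.Second,
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // demo only
    		},
    	}
    	resp, err := client.Get("https://172.29.136.99:8443/healthz")
    	if err != nil {
    		panic(err)
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Printf("%d %s\n", resp.StatusCode, body) // expect: 200 ok
    }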
	I0210 11:03:42.347848    8716 round_trippers.go:470] GET https://172.29.136.99:8443/version
	I0210 11:03:42.347848    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:42.347848    8716 round_trippers.go:480]     Accept: application/json, */*
	I0210 11:03:42.347848    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:42.349217    8716 round_trippers.go:581] Response Status: 200 OK in 1 milliseconds
	I0210 11:03:42.349217    8716 api_server.go:141] control plane version: v1.32.1
	I0210 11:03:42.349217    8716 api_server.go:131] duration metric: took 17.1849ms to wait for apiserver health ...
	I0210 11:03:42.349217    8716 system_pods.go:43] waiting for kube-system pods to appear ...
	I0210 11:03:42.485619    8716 request.go:661] Waited for 136.4009ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods
	I0210 11:03:42.485619    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods
	I0210 11:03:42.485619    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:42.485619    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:42.485619    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:42.492781    8716 round_trippers.go:581] Response Status: 200 OK in 7 milliseconds
	I0210 11:03:42.494306    8716 system_pods.go:59] 17 kube-system pods found
	I0210 11:03:42.494399    8716 system_pods.go:61] "coredns-668d6bf9bc-gc5gf" [1a78ccff-3a66-49ce-9c52-79a7799a56e2] Running
	I0210 11:03:42.494399    8716 system_pods.go:61] "coredns-668d6bf9bc-s44gp" [1a2504f9-357f-418a-968c-274b6ab1bbe3] Running
	I0210 11:03:42.494399    8716 system_pods.go:61] "etcd-ha-335100" [da325f74-680c-43cc-9b62-9c660938b7f5] Running
	I0210 11:03:42.494399    8716 system_pods.go:61] "etcd-ha-335100-m02" [96814d40-8c96-4117-b028-2417473031c2] Running
	I0210 11:03:42.494399    8716 system_pods.go:61] "kindnet-hpmm5" [74943229-cd2a-4365-9da3-f2be8a2e2663] Running
	I0210 11:03:42.494399    8716 system_pods.go:61] "kindnet-slpqn" [6d170b01-e6f2-451d-8427-2b4ef3923739] Running
	I0210 11:03:42.494399    8716 system_pods.go:61] "kube-apiserver-ha-335100" [f37d02ff-6c3f-47d5-b98e-73ec57c5b54d] Running
	I0210 11:03:42.494399    8716 system_pods.go:61] "kube-apiserver-ha-335100-m02" [eac020f2-732f-4fba-bde6-266a2729c5b9] Running
	I0210 11:03:42.494399    8716 system_pods.go:61] "kube-controller-manager-ha-335100" [3e128f0d-49f1-42ca-b8da-5ff50b9b35b6] Running
	I0210 11:03:42.494463    8716 system_pods.go:61] "kube-controller-manager-ha-335100-m02" [b5bf933c-7312-433b-aeb7-b299753ff1a6] Running
	I0210 11:03:42.494463    8716 system_pods.go:61] "kube-proxy-b5xnq" [27ffc58d-1979-40e7-977e-1c5dc46f735a] Running
	I0210 11:03:42.494463    8716 system_pods.go:61] "kube-proxy-xzs7w" [5c66a9a8-138b-4b0f-a112-05946040c18b] Running
	I0210 11:03:42.494463    8716 system_pods.go:61] "kube-scheduler-ha-335100" [8daa77b3-ed4d-4905-b83f-d7db23bef734] Running
	I0210 11:03:42.494463    8716 system_pods.go:61] "kube-scheduler-ha-335100-m02" [cb414823-0dd1-4f0f-8b0a-9314acb1324d] Running
	I0210 11:03:42.494463    8716 system_pods.go:61] "kube-vip-ha-335100" [89e60425-f699-41e3-8034-d871650ab57c] Running
	I0210 11:03:42.494463    8716 system_pods.go:61] "kube-vip-ha-335100-m02" [e21ce143-8caa-4155-ade4-b9b428e82d2b] Running
	I0210 11:03:42.494463    8716 system_pods.go:61] "storage-provisioner" [91b7ca8c-ef3a-4328-83f9-63411c7672cb] Running
	I0210 11:03:42.494463    8716 system_pods.go:74] duration metric: took 145.2447ms to wait for pod list to return data ...
	I0210 11:03:42.494463    8716 default_sa.go:34] waiting for default service account to be created ...
	I0210 11:03:42.685268    8716 request.go:661] Waited for 190.6375ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.136.99:8443/api/v1/namespaces/default/serviceaccounts
	I0210 11:03:42.685588    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/namespaces/default/serviceaccounts
	I0210 11:03:42.685588    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:42.685588    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:42.685746    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:42.693148    8716 round_trippers.go:581] Response Status: 200 OK in 7 milliseconds
	I0210 11:03:42.693310    8716 default_sa.go:45] found service account: "default"
	I0210 11:03:42.693310    8716 default_sa.go:55] duration metric: took 198.7478ms for default service account to be created ...
	I0210 11:03:42.693310    8716 system_pods.go:116] waiting for k8s-apps to be running ...
	I0210 11:03:42.885716    8716 request.go:661] Waited for 192.3222ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods
	I0210 11:03:42.885716    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods
	I0210 11:03:42.885716    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:42.885716    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:42.885716    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:42.891100    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:03:42.893642    8716 system_pods.go:86] 17 kube-system pods found
	I0210 11:03:42.893723    8716 system_pods.go:89] "coredns-668d6bf9bc-gc5gf" [1a78ccff-3a66-49ce-9c52-79a7799a56e2] Running
	I0210 11:03:42.893723    8716 system_pods.go:89] "coredns-668d6bf9bc-s44gp" [1a2504f9-357f-418a-968c-274b6ab1bbe3] Running
	I0210 11:03:42.893723    8716 system_pods.go:89] "etcd-ha-335100" [da325f74-680c-43cc-9b62-9c660938b7f5] Running
	I0210 11:03:42.893723    8716 system_pods.go:89] "etcd-ha-335100-m02" [96814d40-8c96-4117-b028-2417473031c2] Running
	I0210 11:03:42.893723    8716 system_pods.go:89] "kindnet-hpmm5" [74943229-cd2a-4365-9da3-f2be8a2e2663] Running
	I0210 11:03:42.893723    8716 system_pods.go:89] "kindnet-slpqn" [6d170b01-e6f2-451d-8427-2b4ef3923739] Running
	I0210 11:03:42.893723    8716 system_pods.go:89] "kube-apiserver-ha-335100" [f37d02ff-6c3f-47d5-b98e-73ec57c5b54d] Running
	I0210 11:03:42.893723    8716 system_pods.go:89] "kube-apiserver-ha-335100-m02" [eac020f2-732f-4fba-bde6-266a2729c5b9] Running
	I0210 11:03:42.893723    8716 system_pods.go:89] "kube-controller-manager-ha-335100" [3e128f0d-49f1-42ca-b8da-5ff50b9b35b6] Running
	I0210 11:03:42.893831    8716 system_pods.go:89] "kube-controller-manager-ha-335100-m02" [b5bf933c-7312-433b-aeb7-b299753ff1a6] Running
	I0210 11:03:42.893831    8716 system_pods.go:89] "kube-proxy-b5xnq" [27ffc58d-1979-40e7-977e-1c5dc46f735a] Running
	I0210 11:03:42.893831    8716 system_pods.go:89] "kube-proxy-xzs7w" [5c66a9a8-138b-4b0f-a112-05946040c18b] Running
	I0210 11:03:42.893831    8716 system_pods.go:89] "kube-scheduler-ha-335100" [8daa77b3-ed4d-4905-b83f-d7db23bef734] Running
	I0210 11:03:42.893831    8716 system_pods.go:89] "kube-scheduler-ha-335100-m02" [cb414823-0dd1-4f0f-8b0a-9314acb1324d] Running
	I0210 11:03:42.893831    8716 system_pods.go:89] "kube-vip-ha-335100" [89e60425-f699-41e3-8034-d871650ab57c] Running
	I0210 11:03:42.893831    8716 system_pods.go:89] "kube-vip-ha-335100-m02" [e21ce143-8caa-4155-ade4-b9b428e82d2b] Running
	I0210 11:03:42.893831    8716 system_pods.go:89] "storage-provisioner" [91b7ca8c-ef3a-4328-83f9-63411c7672cb] Running
	I0210 11:03:42.893831    8716 system_pods.go:126] duration metric: took 200.5191ms to wait for k8s-apps to be running ...
	I0210 11:03:42.893831    8716 system_svc.go:44] waiting for kubelet service to be running ....
	I0210 11:03:42.904073    8716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0210 11:03:42.928445    8716 system_svc.go:56] duration metric: took 34.6131ms WaitForService to wait for kubelet
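The kubelet check relies on systemctl's exit-code convention: is-active --quiet prints nothing and exits 0 only when the unit is active. A local sketch of the same test (minikube runs it over SSH inside the VM):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// A nil error means exit status 0, i.e. the unit is active.
    	err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
    	fmt.Println("kubelet active:", err == nil)
    }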
	I0210 11:03:42.929240    8716 kubeadm.go:582] duration metric: took 28.2538242s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0210 11:03:42.929240    8716 node_conditions.go:102] verifying NodePressure condition ...
	I0210 11:03:43.085839    8716 request.go:661] Waited for 156.464ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.136.99:8443/api/v1/nodes
	I0210 11:03:43.085839    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes
	I0210 11:03:43.085839    8716 round_trippers.go:476] Request Headers:
	I0210 11:03:43.085839    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:03:43.085839    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:03:43.092879    8716 round_trippers.go:581] Response Status: 200 OK in 7 milliseconds
	I0210 11:03:43.093161    8716 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0210 11:03:43.093161    8716 node_conditions.go:123] node cpu capacity is 2
	I0210 11:03:43.093161    8716 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0210 11:03:43.093161    8716 node_conditions.go:123] node cpu capacity is 2
	I0210 11:03:43.093161    8716 node_conditions.go:105] duration metric: took 163.9186ms to run NodePressure ...
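The NodePressure step reads each node's status.capacity, which is where the "cpu capacity is 2" and "storage ephemeral capacity is 17734596Ki" figures come from (one pair per node, hence two of each). An illustrative client-go sketch that prints the same fields:

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, n := range nodes.Items {
    		cpu := n.Status.Capacity[corev1.ResourceCPU]
    		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
    		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
    	}
    }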
	I0210 11:03:43.093161    8716 start.go:241] waiting for startup goroutines ...
	I0210 11:03:43.093161    8716 start.go:255] writing updated cluster config ...
	I0210 11:03:43.097753    8716 out.go:201] 
	I0210 11:03:43.119470    8716 config.go:182] Loaded profile config "ha-335100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0210 11:03:43.119701    8716 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\config.json ...
	I0210 11:03:43.127592    8716 out.go:177] * Starting "ha-335100-m03" control-plane node in "ha-335100" cluster
	I0210 11:03:43.130136    8716 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime docker
	I0210 11:03:43.130136    8716 cache.go:56] Caching tarball of preloaded images
	I0210 11:03:43.130586    8716 preload.go:172] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0210 11:03:43.130779    8716 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on docker
	I0210 11:03:43.130779    8716 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\config.json ...
	I0210 11:03:43.138939    8716 start.go:360] acquireMachinesLock for ha-335100-m03: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0210 11:03:43.140035    8716 start.go:364] duration metric: took 90.8µs to acquireMachinesLock for "ha-335100-m03"
	I0210 11:03:43.140035    8716 start.go:93] Provisioning new machine with config: &{Name:ha-335100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:ha-335100 Namespace:default APIServerHAVIP:172.29.143.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.29.136.99 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.29.139.212 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}
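
The &{...} dump above is the in-memory cluster config that profile.go keeps persisting to profiles\ha-335100\config.json. A trimmed, hypothetical stand-in for that round-trip follows; the struct names and the small field set are illustrative assumptions, since the real types live in minikube's config package.

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"log"
    )

    // Node and ClusterConfig mirror a handful of the fields in the log dump.
    type Node struct {
    	Name              string
    	IP                string
    	Port              int
    	KubernetesVersion string
    	ControlPlane      bool
    	Worker            bool
    }

    type ClusterConfig struct {
    	Name   string
    	Driver string
    	Memory int
    	CPUs   int
    	Nodes  []Node
    }

    func main() {
    	cc := ClusterConfig{
    		Name: "ha-335100", Driver: "hyperv", Memory: 2200, CPUs: 2,
    		Nodes: []Node{
    			{IP: "172.29.136.99", Port: 8443, KubernetesVersion: "v1.32.1", ControlPlane: true, Worker: true},
    			{Name: "m02", IP: "172.29.139.212", Port: 8443, KubernetesVersion: "v1.32.1", ControlPlane: true, Worker: true},
    			{Name: "m03", Port: 8443, KubernetesVersion: "v1.32.1", ControlPlane: true, Worker: true}, // IP not yet assigned
    		},
    	}
    	// This is the shape of what gets written to config.json.
    	out, err := json.MarshalIndent(cc, "", "  ")
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println(string(out))
    }
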
	I0210 11:03:43.140035    8716 start.go:125] createHost starting for "m03" (driver="hyperv")
	I0210 11:03:43.143184    8716 out.go:235] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0210 11:03:43.143184    8716 start.go:159] libmachine.API.Create for "ha-335100" (driver="hyperv")
	I0210 11:03:43.144154    8716 client.go:168] LocalClient.Create starting
	I0210 11:03:43.144323    8716 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem
	I0210 11:03:43.144323    8716 main.go:141] libmachine: Decoding PEM data...
	I0210 11:03:43.144323    8716 main.go:141] libmachine: Parsing certificate...
	I0210 11:03:43.144849    8716 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem
	I0210 11:03:43.145035    8716 main.go:141] libmachine: Decoding PEM data...
	I0210 11:03:43.145035    8716 main.go:141] libmachine: Parsing certificate...
	I0210 11:03:43.145128    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0210 11:03:44.933418    8716 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0210 11:03:44.933418    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:03:44.933418    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0210 11:03:46.568133    8716 main.go:141] libmachine: [stdout =====>] : False
	
	I0210 11:03:46.568133    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:03:46.569180    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0210 11:03:47.936380    8716 main.go:141] libmachine: [stdout =====>] : True
	
	I0210 11:03:47.936380    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:03:47.936454    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0210 11:03:51.319811    8716 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0210 11:03:51.319916    8716 main.go:141] libmachine: [stderr =====>] : 
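
The switch query above serializes Hyper-V\Get-VMSwitch with ConvertTo-Json for parsing on the Go side. SwitchType is Hyper-V's enum (0 = Private, 1 = Internal, 2 = External), so the Default Switch here is Internal and only matches because its well-known Id is explicitly allowed alongside External switches. A sketch of parsing that payload; the vmSwitch type is an assumption, not minikube's type.

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    // vmSwitch matches the ConvertTo-Json payload in the log above.
    type vmSwitch struct {
    	Id         string
    	Name       string
    	SwitchType int // 0=Private, 1=Internal, 2=External
    }

    func main() {
    	raw := `[{"Id":"c08cb7b8-9b3c-408e-8e30-5e16a3aeb444","Name":"Default Switch","SwitchType":1}]`
    	var switches []vmSwitch
    	if err := json.Unmarshal([]byte(raw), &switches); err != nil {
    		panic(err)
    	}
    	for _, s := range switches {
    		fmt.Printf("%s (%s) type=%d\n", s.Name, s.Id, s.SwitchType)
    	}
    }
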
	I0210 11:03:51.322246    8716 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube5/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0210 11:03:51.674809    8716 main.go:141] libmachine: Creating SSH key...
	I0210 11:03:51.901304    8716 main.go:141] libmachine: Creating VM...
	I0210 11:03:51.901304    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0210 11:03:54.560734    8716 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0210 11:03:54.561327    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:03:54.561327    8716 main.go:141] libmachine: Using switch "Default Switch"
	I0210 11:03:54.561415    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0210 11:03:56.194036    8716 main.go:141] libmachine: [stdout =====>] : True
	
	I0210 11:03:56.194694    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:03:56.194694    8716 main.go:141] libmachine: Creating VHD
	I0210 11:03:56.194804    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-335100-m03\fixed.vhd' -SizeBytes 10MB -Fixed
	I0210 11:03:59.795854    8716 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube5
	Path                    : C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-335100-m03\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 07AB0531-FB35-431D-AEFA-A089C6C41C27
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0210 11:03:59.795890    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:03:59.795890    8716 main.go:141] libmachine: Writing magic tar header
	I0210 11:03:59.795960    8716 main.go:141] libmachine: Writing SSH key tar header
	I0210 11:03:59.810413    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-335100-m03\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-335100-m03\disk.vhd' -VHDType Dynamic -DeleteSource
	I0210 11:04:02.827683    8716 main.go:141] libmachine: [stdout =====>] : 
	I0210 11:04:02.827863    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:04:02.828093    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-335100-m03\disk.vhd' -SizeBytes 20000MB
	I0210 11:04:05.264459    8716 main.go:141] libmachine: [stdout =====>] : 
	I0210 11:04:05.265086    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:04:05.265086    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-335100-m03 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-335100-m03' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0210 11:04:08.622338    8716 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-335100-m03 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0210 11:04:08.622597    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:04:08.622674    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-335100-m03 -DynamicMemoryEnabled $false
	I0210 11:04:10.686115    8716 main.go:141] libmachine: [stdout =====>] : 
	I0210 11:04:10.686189    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:04:10.686266    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-335100-m03 -Count 2
	I0210 11:04:12.679684    8716 main.go:141] libmachine: [stdout =====>] : 
	I0210 11:04:12.679684    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:04:12.679772    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-335100-m03 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-335100-m03\boot2docker.iso'
	I0210 11:04:15.038737    8716 main.go:141] libmachine: [stdout =====>] : 
	I0210 11:04:15.038794    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:04:15.038794    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-335100-m03 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-335100-m03\disk.vhd'
	I0210 11:04:17.434704    8716 main.go:141] libmachine: [stdout =====>] : 
	I0210 11:04:17.434704    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:04:17.434704    8716 main.go:141] libmachine: Starting VM...
	I0210 11:04:17.435597    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-335100-m03
	I0210 11:04:20.310680    8716 main.go:141] libmachine: [stdout =====>] : 
	I0210 11:04:20.310721    8716 main.go:141] libmachine: [stderr =====>] : 
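
Everything from "Creating VHD" through Start-VM above is one non-interactive PowerShell invocation per step, exactly as the "[executing ==>]" lines show. A condensed sketch of driving that sequence from Go; runPS is an assumed helper, and the "magic tar header" write that minikube performs between New-VHD and Convert-VHD is omitted here.

    package main

    import (
    	"fmt"
    	"log"
    	"os/exec"
    )

    // runPS runs a single Hyper-V cmdlet the way the log's executing lines do.
    func runPS(cmd string) (string, error) {
    	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", cmd).CombinedOutput()
    	return string(out), err
    }

    func main() {
    	dir := `C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-335100-m03`
    	steps := []string{
    		// Tiny fixed VHD, converted to dynamic, then grown: the same
    		// three-step dance logged above (fixed.vhd -> disk.vhd -> 20000MB).
    		fmt.Sprintf(`Hyper-V\New-VHD -Path '%s\fixed.vhd' -SizeBytes 10MB -Fixed`, dir),
    		fmt.Sprintf(`Hyper-V\Convert-VHD -Path '%s\fixed.vhd' -DestinationPath '%s\disk.vhd' -VHDType Dynamic -DeleteSource`, dir, dir),
    		fmt.Sprintf(`Hyper-V\Resize-VHD -Path '%s\disk.vhd' -SizeBytes 20000MB`, dir),
    		fmt.Sprintf(`Hyper-V\New-VM ha-335100-m03 -Path '%s' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB`, dir),
    		`Hyper-V\Set-VMMemory -VMName ha-335100-m03 -DynamicMemoryEnabled $false`,
    		`Hyper-V\Set-VMProcessor ha-335100-m03 -Count 2`,
    		fmt.Sprintf(`Hyper-V\Set-VMDvdDrive -VMName ha-335100-m03 -Path '%s\boot2docker.iso'`, dir),
    		fmt.Sprintf(`Hyper-V\Add-VMHardDiskDrive -VMName ha-335100-m03 -Path '%s\disk.vhd'`, dir),
    		`Hyper-V\Start-VM ha-335100-m03`,
    	}
    	for _, s := range steps {
    		if out, err := runPS(s); err != nil {
    			log.Fatalf("%s: %v\n%s", s, err, out)
    		}
    	}
    }
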
	I0210 11:04:20.310761    8716 main.go:141] libmachine: Waiting for host to start...
	I0210 11:04:20.310761    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100-m03 ).state
	I0210 11:04:22.396988    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:04:22.397156    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:04:22.397156    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100-m03 ).networkadapters[0]).ipaddresses[0]
	I0210 11:04:24.722624    8716 main.go:141] libmachine: [stdout =====>] : 
	I0210 11:04:24.722624    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:04:25.723405    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100-m03 ).state
	I0210 11:04:27.715206    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:04:27.715452    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:04:27.715452    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100-m03 ).networkadapters[0]).ipaddresses[0]
	I0210 11:04:30.022842    8716 main.go:141] libmachine: [stdout =====>] : 
	I0210 11:04:30.022842    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:04:31.024639    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100-m03 ).state
	I0210 11:04:33.005824    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:04:33.006707    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:04:33.006776    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100-m03 ).networkadapters[0]).ipaddresses[0]
	I0210 11:04:35.319099    8716 main.go:141] libmachine: [stdout =====>] : 
	I0210 11:04:35.319099    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:04:36.320043    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100-m03 ).state
	I0210 11:04:38.318308    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:04:38.319180    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:04:38.319180    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100-m03 ).networkadapters[0]).ipaddresses[0]
	I0210 11:04:40.588367    8716 main.go:141] libmachine: [stdout =====>] : 
	I0210 11:04:40.588367    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:04:41.588928    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100-m03 ).state
	I0210 11:04:43.590375    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:04:43.590375    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:04:43.590375    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100-m03 ).networkadapters[0]).ipaddresses[0]
	I0210 11:04:46.015642    8716 main.go:141] libmachine: [stdout =====>] : 172.29.143.243
	
	I0210 11:04:46.015642    8716 main.go:141] libmachine: [stderr =====>] : 
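
The repeated state/ipaddresses pairs above are a poll loop: each round checks the VM is Running, then asks Hyper-V for the first IP on the first NIC; an empty answer means DHCP has not finished yet, so it sleeps and retries until 172.29.143.243 appears. A sketch of that loop, under assumptions: waitForIP and the one-second back-off are illustrative, not minikube's exact timings.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    // waitForIP polls Hyper-V until the VM's first NIC reports an address.
    func waitForIP(vm string, timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	q := fmt.Sprintf(`(( Hyper-V\Get-VM %s ).networkadapters[0]).ipaddresses[0]`, vm)
    	for time.Now().Before(deadline) {
    		out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", q).Output()
    		if err != nil {
    			return "", err
    		}
    		if ip := strings.TrimSpace(string(out)); ip != "" {
    			return ip, nil // empty stdout above meant "no lease yet"
    		}
    		time.Sleep(time.Second)
    	}
    	return "", fmt.Errorf("timed out waiting for %s to get an IP", vm)
    }

    func main() {
    	ip, err := waitForIP("ha-335100-m03", 5*time.Minute)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(ip) // 172.29.143.243 in the run above
    }
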
	I0210 11:04:46.015642    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100-m03 ).state
	I0210 11:04:47.981992    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:04:47.981992    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:04:47.981992    8716 machine.go:93] provisionDockerMachine start ...
	I0210 11:04:47.981992    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100-m03 ).state
	I0210 11:04:49.947857    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:04:49.947857    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:04:49.947857    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100-m03 ).networkadapters[0]).ipaddresses[0]
	I0210 11:04:52.279064    8716 main.go:141] libmachine: [stdout =====>] : 172.29.143.243
	
	I0210 11:04:52.279064    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:04:52.282781    8716 main.go:141] libmachine: Using SSH client type: native
	I0210 11:04:52.299123    8716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xde6ba0] 0xde96e0 <nil>  [] 0s} 172.29.143.243 22 <nil> <nil>}
	I0210 11:04:52.299123    8716 main.go:141] libmachine: About to run SSH command:
	hostname
	I0210 11:04:52.431056    8716 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
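
From here on, provisioning runs over SSH as the "docker" user with the per-machine id_rsa key, one command per session, which is what the "Using SSH client type: native" lines reflect. A self-contained sketch using golang.org/x/crypto/ssh; runOverSSH is an assumed helper and minikube's native client differs in detail.

    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    // runOverSSH runs one command on the VM and returns its stdout.
    func runOverSSH(ip, keyPath, cmd string) (string, error) {
    	key, err := os.ReadFile(keyPath)
    	if err != nil {
    		return "", err
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		return "", err
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
    	}
    	client, err := ssh.Dial("tcp", ip+":22", cfg)
    	if err != nil {
    		return "", err
    	}
    	defer client.Close()
    	sess, err := client.NewSession()
    	if err != nil {
    		return "", err
    	}
    	defer sess.Close()
    	out, err := sess.Output(cmd)
    	return string(out), err
    }

    func main() {
    	out, err := runOverSSH("172.29.143.243",
    		`C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-335100-m03\id_rsa`,
    		"hostname")
    	fmt.Println(out, err)
    }
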
	
	I0210 11:04:52.431056    8716 buildroot.go:166] provisioning hostname "ha-335100-m03"
	I0210 11:04:52.431139    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100-m03 ).state
	I0210 11:04:54.438032    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:04:54.438980    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:04:54.439058    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100-m03 ).networkadapters[0]).ipaddresses[0]
	I0210 11:04:56.766560    8716 main.go:141] libmachine: [stdout =====>] : 172.29.143.243
	
	I0210 11:04:56.766560    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:04:56.770620    8716 main.go:141] libmachine: Using SSH client type: native
	I0210 11:04:56.770698    8716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xde6ba0] 0xde96e0 <nil>  [] 0s} 172.29.143.243 22 <nil> <nil>}
	I0210 11:04:56.770698    8716 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-335100-m03 && echo "ha-335100-m03" | sudo tee /etc/hostname
	I0210 11:04:56.927910    8716 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-335100-m03
	
	I0210 11:04:56.928037    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100-m03 ).state
	I0210 11:04:58.864957    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:04:58.865200    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:04:58.865200    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100-m03 ).networkadapters[0]).ipaddresses[0]
	I0210 11:05:01.203483    8716 main.go:141] libmachine: [stdout =====>] : 172.29.143.243
	
	I0210 11:05:01.203483    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:05:01.208201    8716 main.go:141] libmachine: Using SSH client type: native
	I0210 11:05:01.208867    8716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xde6ba0] 0xde96e0 <nil>  [] 0s} 172.29.143.243 22 <nil> <nil>}
	I0210 11:05:01.208867    8716 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-335100-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-335100-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-335100-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0210 11:05:01.359193    8716 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0210 11:05:01.359193    8716 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0210 11:05:01.359193    8716 buildroot.go:174] setting up certificates
	I0210 11:05:01.359193    8716 provision.go:84] configureAuth start
	I0210 11:05:01.359193    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100-m03 ).state
	I0210 11:05:03.299782    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:05:03.300847    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:05:03.300932    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100-m03 ).networkadapters[0]).ipaddresses[0]
	I0210 11:05:05.640929    8716 main.go:141] libmachine: [stdout =====>] : 172.29.143.243
	
	I0210 11:05:05.640929    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:05:05.641348    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100-m03 ).state
	I0210 11:05:07.575558    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:05:07.575658    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:05:07.575658    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100-m03 ).networkadapters[0]).ipaddresses[0]
	I0210 11:05:09.901808    8716 main.go:141] libmachine: [stdout =====>] : 172.29.143.243
	
	I0210 11:05:09.901808    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:05:09.902449    8716 provision.go:143] copyHostCerts
	I0210 11:05:09.902449    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0210 11:05:09.902449    8716 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0210 11:05:09.902449    8716 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0210 11:05:09.903111    8716 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0210 11:05:09.903709    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0210 11:05:09.903709    8716 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0210 11:05:09.903709    8716 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0210 11:05:09.904359    8716 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0210 11:05:09.904963    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0210 11:05:09.904963    8716 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0210 11:05:09.904963    8716 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0210 11:05:09.905545    8716 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1675 bytes)
	I0210 11:05:09.906611    8716 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-335100-m03 san=[127.0.0.1 172.29.143.243 ha-335100-m03 localhost minikube]
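
The server cert generated above is signed by the shared minikube CA with exactly the SAN list in the log line (loopback, the node IP, the hostname, localhost, minikube). A compact crypto/x509 sketch of that signing step, with a throwaway in-memory CA standing in for ca.pem/ca-key.pem and error handling elided for brevity.

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Throwaway CA standing in for .minikube\certs\ca.pem / ca-key.pem.
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().AddDate(10, 0, 0),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	// Server cert with the SAN list from the provision.go line above.
    	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	srvTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{Organization: []string{"jenkins.ha-335100-m03"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().AddDate(3, 0, 0), // ~26280h, matching CertExpiration above
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.29.143.243")},
    		DNSNames:     []string{"ha-335100-m03", "localhost", "minikube"},
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }
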
	I0210 11:05:10.055618    8716 provision.go:177] copyRemoteCerts
	I0210 11:05:10.063620    8716 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0210 11:05:10.063620    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100-m03 ).state
	I0210 11:05:11.994877    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:05:11.994877    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:05:11.994877    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100-m03 ).networkadapters[0]).ipaddresses[0]
	I0210 11:05:14.330807    8716 main.go:141] libmachine: [stdout =====>] : 172.29.143.243
	
	I0210 11:05:14.331010    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:05:14.331339    8716 sshutil.go:53] new ssh client: &{IP:172.29.143.243 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-335100-m03\id_rsa Username:docker}
	I0210 11:05:14.442012    8716 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.3783417s)
	I0210 11:05:14.442012    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0210 11:05:14.442012    8716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0210 11:05:14.490385    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0210 11:05:14.490916    8716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0210 11:05:14.536774    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0210 11:05:14.537491    8716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0210 11:05:14.582310    8716 provision.go:87] duration metric: took 13.2229641s to configureAuth
	I0210 11:05:14.582407    8716 buildroot.go:189] setting minikube options for container-runtime
	I0210 11:05:14.582665    8716 config.go:182] Loaded profile config "ha-335100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0210 11:05:14.582665    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100-m03 ).state
	I0210 11:05:16.545154    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:05:16.545652    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:05:16.545712    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100-m03 ).networkadapters[0]).ipaddresses[0]
	I0210 11:05:18.888903    8716 main.go:141] libmachine: [stdout =====>] : 172.29.143.243
	
	I0210 11:05:18.889185    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:05:18.892663    8716 main.go:141] libmachine: Using SSH client type: native
	I0210 11:05:18.893236    8716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xde6ba0] 0xde96e0 <nil>  [] 0s} 172.29.143.243 22 <nil> <nil>}
	I0210 11:05:18.893236    8716 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0210 11:05:19.027685    8716 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0210 11:05:19.027685    8716 buildroot.go:70] root file system type: tmpfs
	I0210 11:05:19.027856    8716 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0210 11:05:19.027856    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100-m03 ).state
	I0210 11:05:20.993476    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:05:20.993476    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:05:20.994359    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100-m03 ).networkadapters[0]).ipaddresses[0]
	I0210 11:05:23.317525    8716 main.go:141] libmachine: [stdout =====>] : 172.29.143.243
	
	I0210 11:05:23.317525    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:05:23.321542    8716 main.go:141] libmachine: Using SSH client type: native
	I0210 11:05:23.321847    8716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xde6ba0] 0xde96e0 <nil>  [] 0s} 172.29.143.243 22 <nil> <nil>}
	I0210 11:05:23.321847    8716 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.29.136.99"
	Environment="NO_PROXY=172.29.136.99,172.29.139.212"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0210 11:05:23.478779    8716 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.29.136.99
	Environment=NO_PROXY=172.29.136.99,172.29.139.212
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0210 11:05:23.478802    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100-m03 ).state
	I0210 11:05:25.454405    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:05:25.454405    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:05:25.455342    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100-m03 ).networkadapters[0]).ipaddresses[0]
	I0210 11:05:27.813915    8716 main.go:141] libmachine: [stdout =====>] : 172.29.143.243
	
	I0210 11:05:27.813992    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:05:27.818580    8716 main.go:141] libmachine: Using SSH client type: native
	I0210 11:05:27.818746    8716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xde6ba0] 0xde96e0 <nil>  [] 0s} 172.29.143.243 22 <nil> <nil>}
	I0210 11:05:27.818746    8716 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0210 11:05:30.072003    8716 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0210 11:05:30.072097    8716 machine.go:96] duration metric: took 42.0896173s to provisionDockerMachine
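
The docker.service install above is deliberately idempotent: the unit is rendered to docker.service.new, and only when diff reports a difference does the braced group move it into place and reload/enable/restart the daemon. On this fresh node diff fails ("can't stat"), so the install path runs. A sketch of building that one-liner; buildInstallCmd is an assumed helper name.

    package main

    import "fmt"

    // buildInstallCmd reproduces the shell one-liner from the log: swap in the
    // new unit and bounce the service only if the rendered file changed.
    func buildInstallCmd(unit string) string {
    	path := "/lib/systemd/system/" + unit
    	return fmt.Sprintf(
    		"sudo diff -u %[1]s %[1]s.new || { sudo mv %[1]s.new %[1]s; sudo systemctl -f daemon-reload && sudo systemctl -f enable %[2]s && sudo systemctl -f restart %[2]s; }",
    		path, unit)
    }

    func main() {
    	fmt.Println(buildInstallCmd("docker.service"))
    }
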
	I0210 11:05:30.072132    8716 client.go:171] duration metric: took 1m46.9267382s to LocalClient.Create
	I0210 11:05:30.072132    8716 start.go:167] duration metric: took 1m46.9277075s to libmachine.API.Create "ha-335100"
	I0210 11:05:30.072132    8716 start.go:293] postStartSetup for "ha-335100-m03" (driver="hyperv")
	I0210 11:05:30.072184    8716 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0210 11:05:30.079843    8716 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0210 11:05:30.080796    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100-m03 ).state
	I0210 11:05:32.071072    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:05:32.071072    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:05:32.071072    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100-m03 ).networkadapters[0]).ipaddresses[0]
	I0210 11:05:34.489245    8716 main.go:141] libmachine: [stdout =====>] : 172.29.143.243
	
	I0210 11:05:34.489245    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:05:34.489674    8716 sshutil.go:53] new ssh client: &{IP:172.29.143.243 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-335100-m03\id_rsa Username:docker}
	I0210 11:05:34.589613    8716 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.5086975s)
	I0210 11:05:34.597790    8716 ssh_runner.go:195] Run: cat /etc/os-release
	I0210 11:05:34.605803    8716 info.go:137] Remote host: Buildroot 2023.02.9
	I0210 11:05:34.605803    8716 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0210 11:05:34.606434    8716 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0210 11:05:34.606641    8716 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\117642.pem -> 117642.pem in /etc/ssl/certs
	I0210 11:05:34.606641    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\117642.pem -> /etc/ssl/certs/117642.pem
	I0210 11:05:34.615388    8716 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0210 11:05:34.634222    8716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\117642.pem --> /etc/ssl/certs/117642.pem (1708 bytes)
	I0210 11:05:34.679413    8716 start.go:296] duration metric: took 4.6071798s for postStartSetup
	I0210 11:05:34.682026    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100-m03 ).state
	I0210 11:05:36.722940    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:05:36.722940    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:05:36.723033    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100-m03 ).networkadapters[0]).ipaddresses[0]
	I0210 11:05:39.118179    8716 main.go:141] libmachine: [stdout =====>] : 172.29.143.243
	
	I0210 11:05:39.119219    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:05:39.119466    8716 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\config.json ...
	I0210 11:05:39.121334    8716 start.go:128] duration metric: took 1m55.9799541s to createHost
	I0210 11:05:39.121373    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100-m03 ).state
	I0210 11:05:41.093774    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:05:41.094210    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:05:41.094286    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100-m03 ).networkadapters[0]).ipaddresses[0]
	I0210 11:05:43.506546    8716 main.go:141] libmachine: [stdout =====>] : 172.29.143.243
	
	I0210 11:05:43.506639    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:05:43.510909    8716 main.go:141] libmachine: Using SSH client type: native
	I0210 11:05:43.511544    8716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xde6ba0] 0xde96e0 <nil>  [] 0s} 172.29.143.243 22 <nil> <nil>}
	I0210 11:05:43.511544    8716 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0210 11:05:43.650521    8716 main.go:141] libmachine: SSH cmd err, output: <nil>: 1739185543.660188214
	
	I0210 11:05:43.650627    8716 fix.go:216] guest clock: 1739185543.660188214
	I0210 11:05:43.650627    8716 fix.go:229] Guest: 2025-02-10 11:05:43.660188214 +0000 UTC Remote: 2025-02-10 11:05:39.1213738 +0000 UTC m=+529.235586001 (delta=4.538814414s)
	I0210 11:05:43.650728    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100-m03 ).state
	I0210 11:05:45.651869    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:05:45.651869    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:05:45.651967    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100-m03 ).networkadapters[0]).ipaddresses[0]
	I0210 11:05:48.044992    8716 main.go:141] libmachine: [stdout =====>] : 172.29.143.243
	
	I0210 11:05:48.044992    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:05:48.049097    8716 main.go:141] libmachine: Using SSH client type: native
	I0210 11:05:48.049206    8716 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xde6ba0] 0xde96e0 <nil>  [] 0s} 172.29.143.243 22 <nil> <nil>}
	I0210 11:05:48.049206    8716 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1739185543
	I0210 11:05:48.187998    8716 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Feb 10 11:05:43 UTC 2025
	
	I0210 11:05:48.188088    8716 fix.go:236] clock set: Mon Feb 10 11:05:43 UTC 2025
	 (err=<nil>)
	I0210 11:05:48.188088    8716 start.go:83] releasing machines lock for "ha-335100-m03", held for 2m5.0466027s
	I0210 11:05:48.188307    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100-m03 ).state
	I0210 11:05:50.208144    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:05:50.208144    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:05:50.208684    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100-m03 ).networkadapters[0]).ipaddresses[0]
	I0210 11:05:52.625512    8716 main.go:141] libmachine: [stdout =====>] : 172.29.143.243
	
	I0210 11:05:52.625563    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:05:52.630916    8716 out.go:177] * Found network options:
	I0210 11:05:52.633885    8716 out.go:177]   - NO_PROXY=172.29.136.99,172.29.139.212
	W0210 11:05:52.635523    8716 proxy.go:119] fail to check proxy env: Error ip not in block
	W0210 11:05:52.635523    8716 proxy.go:119] fail to check proxy env: Error ip not in block
	I0210 11:05:52.638016    8716 out.go:177]   - NO_PROXY=172.29.136.99,172.29.139.212
	W0210 11:05:52.640562    8716 proxy.go:119] fail to check proxy env: Error ip not in block
	W0210 11:05:52.640562    8716 proxy.go:119] fail to check proxy env: Error ip not in block
	W0210 11:05:52.641902    8716 proxy.go:119] fail to check proxy env: Error ip not in block
	W0210 11:05:52.641926    8716 proxy.go:119] fail to check proxy env: Error ip not in block
	I0210 11:05:52.643744    8716 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0210 11:05:52.643902    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100-m03 ).state
	I0210 11:05:52.650137    8716 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0210 11:05:52.650137    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100-m03 ).state
	I0210 11:05:54.687880    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:05:54.688117    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:05:54.688170    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100-m03 ).networkadapters[0]).ipaddresses[0]
	I0210 11:05:54.695202    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:05:54.695202    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:05:54.695202    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100-m03 ).networkadapters[0]).ipaddresses[0]
	I0210 11:05:57.135405    8716 main.go:141] libmachine: [stdout =====>] : 172.29.143.243
	
	I0210 11:05:57.135405    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:05:57.135405    8716 sshutil.go:53] new ssh client: &{IP:172.29.143.243 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-335100-m03\id_rsa Username:docker}
	I0210 11:05:57.159899    8716 main.go:141] libmachine: [stdout =====>] : 172.29.143.243
	
	I0210 11:05:57.160900    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:05:57.161310    8716 sshutil.go:53] new ssh client: &{IP:172.29.143.243 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-335100-m03\id_rsa Username:docker}
	I0210 11:05:57.227139    8716 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.5769489s)
	W0210 11:05:57.227240    8716 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0210 11:05:57.237731    8716 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0210 11:05:57.242387    8716 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.5985167s)
	W0210 11:05:57.242387    8716 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0210 11:05:57.268555    8716 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0210 11:05:57.268555    8716 start.go:495] detecting cgroup driver to use...
	I0210 11:05:57.268873    8716 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0210 11:05:57.311690    8716 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	W0210 11:05:57.335273    8716 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0210 11:05:57.335273    8716 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0210 11:05:57.341607    8716 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0210 11:05:57.360753    8716 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0210 11:05:57.369626    8716 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0210 11:05:57.398873    8716 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0210 11:05:57.430454    8716 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0210 11:05:57.458700    8716 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0210 11:05:57.488502    8716 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0210 11:05:57.518244    8716 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0210 11:05:57.547695    8716 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0210 11:05:57.576022    8716 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0210 11:05:57.604557    8716 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0210 11:05:57.623307    8716 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0210 11:05:57.631729    8716 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0210 11:05:57.662800    8716 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0210 11:05:57.686758    8716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 11:05:57.886483    8716 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0210 11:05:57.920422    8716 start.go:495] detecting cgroup driver to use...
	I0210 11:05:57.928978    8716 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0210 11:05:57.959794    8716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0210 11:05:57.992787    8716 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0210 11:05:58.027475    8716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0210 11:05:58.060140    8716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0210 11:05:58.095142    8716 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0210 11:05:58.154828    8716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0210 11:05:58.179253    8716 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0210 11:05:58.222703    8716 ssh_runner.go:195] Run: which cri-dockerd
	I0210 11:05:58.236504    8716 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0210 11:05:58.254815    8716 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0210 11:05:58.294230    8716 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0210 11:05:58.484956    8716 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0210 11:05:58.667686    8716 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0210 11:05:58.667795    8716 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0210 11:05:58.707811    8716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 11:05:58.892338    8716 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0210 11:06:01.499123    8716 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.6067548s)
	I0210 11:06:01.508793    8716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0210 11:06:01.544137    8716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0210 11:06:01.579742    8716 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0210 11:06:01.770693    8716 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0210 11:06:01.954692    8716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 11:06:02.155743    8716 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0210 11:06:02.194749    8716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0210 11:06:02.231375    8716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 11:06:02.428207    8716 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0210 11:06:02.537905    8716 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0210 11:06:02.546560    8716 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0210 11:06:02.555326    8716 start.go:563] Will wait 60s for crictl version
	I0210 11:06:02.563467    8716 ssh_runner.go:195] Run: which crictl
	I0210 11:06:02.578158    8716 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0210 11:06:02.632843    8716 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.4.0
	RuntimeApiVersion:  v1
	I0210 11:06:02.640406    8716 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0210 11:06:02.682318    8716 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0210 11:06:02.721023    8716 out.go:235] * Preparing Kubernetes v1.32.1 on Docker 27.4.0 ...
	I0210 11:06:02.725667    8716 out.go:177]   - env NO_PROXY=172.29.136.99
	I0210 11:06:02.728570    8716 out.go:177]   - env NO_PROXY=172.29.136.99,172.29.139.212
	I0210 11:06:02.730515    8716 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0210 11:06:02.735111    8716 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0210 11:06:02.735111    8716 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0210 11:06:02.735111    8716 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0210 11:06:02.735111    8716 ip.go:211] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:8b:81:3d Flags:up|broadcast|multicast|running}
	I0210 11:06:02.737429    8716 ip.go:214] interface addr: fe80::b840:883f:e0df:bdfd/64
	I0210 11:06:02.737429    8716 ip.go:214] interface addr: 172.29.128.1/20
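
The ip.go lines above locate the host-side address of the Hyper-V switch by scanning interfaces for the "vEthernet (Default Switch)" name prefix, skipping non-matching NICs, and taking the match's IPv4 address (172.29.128.1), which is then written into the guest's /etc/hosts as host.minikube.internal. A sketch with net.Interfaces; hostIPForSwitch is an assumed name.

    package main

    import (
    	"fmt"
    	"net"
    	"strings"
    )

    // hostIPForSwitch returns the first IPv4 address on the first interface
    // whose name starts with the given Hyper-V switch prefix.
    func hostIPForSwitch(prefix string) (net.IP, error) {
    	ifaces, err := net.Interfaces()
    	if err != nil {
    		return nil, err
    	}
    	for _, ifc := range ifaces {
    		if !strings.HasPrefix(ifc.Name, prefix) {
    			continue // e.g. "Ethernet 2", "Loopback Pseudo-Interface 1" in the log
    		}
    		addrs, err := ifc.Addrs()
    		if err != nil {
    			return nil, err
    		}
    		for _, a := range addrs {
    			if ipnet, ok := a.(*net.IPNet); ok && ipnet.IP.To4() != nil {
    				return ipnet.IP, nil // the IPv4 addr wins over fe80:: link-local
    			}
    		}
    	}
    	return nil, fmt.Errorf("no interface matches prefix %q", prefix)
    }

    func main() {
    	ip, err := hostIPForSwitch("vEthernet (Default Switch)")
    	fmt.Println(ip, err)
    }
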
	I0210 11:06:02.745849    8716 ssh_runner.go:195] Run: grep 172.29.128.1	host.minikube.internal$ /etc/hosts
	I0210 11:06:02.753148    8716 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.29.128.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0210 11:06:02.776320    8716 mustload.go:65] Loading cluster: ha-335100
	I0210 11:06:02.777162    8716 config.go:182] Loaded profile config "ha-335100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0210 11:06:02.777829    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100 ).state
	I0210 11:06:04.764023    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:06:04.765029    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:06:04.765029    8716 host.go:66] Checking if "ha-335100" exists ...
	I0210 11:06:04.765632    8716 certs.go:68] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100 for IP: 172.29.143.243
	I0210 11:06:04.765632    8716 certs.go:194] generating shared ca certs ...
	I0210 11:06:04.765707    8716 certs.go:226] acquiring lock for ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 11:06:04.765707    8716 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0210 11:06:04.766647    8716 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0210 11:06:04.766702    8716 certs.go:256] generating profile certs ...
	I0210 11:06:04.766702    8716 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\client.key
	I0210 11:06:04.767225    8716 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\apiserver.key.cdd0df9c
	I0210 11:06:04.767361    8716 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\apiserver.crt.cdd0df9c with IPs: [10.96.0.1 127.0.0.1 10.0.0.1 172.29.136.99 172.29.139.212 172.29.143.243 172.29.143.254]
	I0210 11:06:04.976664    8716 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\apiserver.crt.cdd0df9c ...
	I0210 11:06:04.976664    8716 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\apiserver.crt.cdd0df9c: {Name:mk9ba5b24f65192acbccdfb2285fadb10bd76c34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 11:06:04.978001    8716 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\apiserver.key.cdd0df9c ...
	I0210 11:06:04.978001    8716 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\apiserver.key.cdd0df9c: {Name:mkb5491b0832431dace075b26866783b7e681dab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 11:06:04.979517    8716 certs.go:381] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\apiserver.crt.cdd0df9c -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\apiserver.crt
	I0210 11:06:04.997446    8716 certs.go:385] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\apiserver.key.cdd0df9c -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\apiserver.key
	I0210 11:06:04.998447    8716 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\proxy-client.key
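Note why the apiserver serving certificate is regenerated at 11:06:04 while every other cert is reused: its SAN list must contain every address a client may dial, so adding node m03 forces a reissue that includes the new node IP 172.29.143.243 alongside the existing control-plane IPs and the HA VIP 172.29.143.254. A minimal crypto/x509 sketch of issuing such a cert (the throwaway CA here stands in for minikube's persisted ca.key/ca.crt; error handling is elided for brevity):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Illustrative only: a fresh CA; minikube signs with its existing ca.key.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Every IP a client may dial must be a SAN, including the kube-vip VIP
	// and each control-plane node IP, matching the list in the log above.
	sans := []net.IP{
		net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
		net.ParseIP("172.29.136.99"), net.ParseIP("172.29.139.212"),
		net.ParseIP("172.29.143.243"), net.ParseIP("172.29.143.254"),
	}
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		IPAddresses:  sans,
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}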
	I0210 11:06:04.998447    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0210 11:06:04.998770    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0210 11:06:04.998770    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0210 11:06:04.998770    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0210 11:06:04.998770    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0210 11:06:04.998770    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0210 11:06:04.999880    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0210 11:06:05.000134    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0210 11:06:05.000134    8716 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\11764.pem (1338 bytes)
	W0210 11:06:05.000755    8716 certs.go:480] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\11764_empty.pem, impossibly tiny 0 bytes
	I0210 11:06:05.000866    8716 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0210 11:06:05.000920    8716 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0210 11:06:05.000920    8716 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0210 11:06:05.001544    8716 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0210 11:06:05.001544    8716 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\117642.pem (1708 bytes)
	I0210 11:06:05.002203    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0210 11:06:05.002203    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\11764.pem -> /usr/share/ca-certificates/11764.pem
	I0210 11:06:05.002203    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\117642.pem -> /usr/share/ca-certificates/117642.pem
	I0210 11:06:05.002203    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100 ).state
	I0210 11:06:07.062589    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:06:07.062873    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:06:07.062954    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100 ).networkadapters[0]).ipaddresses[0]
	I0210 11:06:09.479798    8716 main.go:141] libmachine: [stdout =====>] : 172.29.136.99
	
	I0210 11:06:09.480976    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:06:09.481217    8716 sshutil.go:53] new ssh client: &{IP:172.29.136.99 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-335100\id_rsa Username:docker}
	I0210 11:06:09.593251    8716 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0210 11:06:09.601538    8716 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0210 11:06:09.633745    8716 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0210 11:06:09.643962    8716 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0210 11:06:09.673831    8716 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0210 11:06:09.684671    8716 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0210 11:06:09.717023    8716 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0210 11:06:09.724801    8716 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0210 11:06:09.757569    8716 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0210 11:06:09.765160    8716 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0210 11:06:09.793987    8716 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0210 11:06:09.801900    8716 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0210 11:06:09.822876    8716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0210 11:06:09.869554    8716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0210 11:06:09.915239    8716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0210 11:06:09.962387    8716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0210 11:06:10.007395    8716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0210 11:06:10.057322    8716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0210 11:06:10.106649    8716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0210 11:06:10.153994    8716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-335100\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0210 11:06:10.202362    8716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0210 11:06:10.250620    8716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\11764.pem --> /usr/share/ca-certificates/11764.pem (1338 bytes)
	I0210 11:06:10.295441    8716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\117642.pem --> /usr/share/ca-certificates/117642.pem (1708 bytes)
	I0210 11:06:10.344584    8716 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0210 11:06:10.377868    8716 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0210 11:06:10.408522    8716 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0210 11:06:10.439058    8716 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0210 11:06:10.471521    8716 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0210 11:06:10.502174    8716 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0210 11:06:10.534428    8716 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0210 11:06:10.578707    8716 ssh_runner.go:195] Run: openssl version
	I0210 11:06:10.596105    8716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0210 11:06:10.626948    8716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0210 11:06:10.634388    8716 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb 10 10:24 /usr/share/ca-certificates/minikubeCA.pem
	I0210 11:06:10.643570    8716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0210 11:06:10.660544    8716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0210 11:06:10.689750    8716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11764.pem && ln -fs /usr/share/ca-certificates/11764.pem /etc/ssl/certs/11764.pem"
	I0210 11:06:10.719352    8716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11764.pem
	I0210 11:06:10.727191    8716 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb 10 10:40 /usr/share/ca-certificates/11764.pem
	I0210 11:06:10.735632    8716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11764.pem
	I0210 11:06:10.757665    8716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11764.pem /etc/ssl/certs/51391683.0"
	I0210 11:06:10.786268    8716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/117642.pem && ln -fs /usr/share/ca-certificates/117642.pem /etc/ssl/certs/117642.pem"
	I0210 11:06:10.817001    8716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/117642.pem
	I0210 11:06:10.825441    8716 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb 10 10:40 /usr/share/ca-certificates/117642.pem
	I0210 11:06:10.834179    8716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/117642.pem
	I0210 11:06:10.852526    8716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/117642.pem /etc/ssl/certs/3ec20f2e.0"
	I0210 11:06:10.881393    8716 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0210 11:06:10.888118    8716 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0210 11:06:10.888884    8716 kubeadm.go:934] updating node {m03 172.29.143.243 8443 v1.32.1 docker true true} ...
	I0210 11:06:10.888884    8716 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-335100-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.29.143.243
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:ha-335100 Namespace:default APIServerHAVIP:172.29.143.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0210 11:06:10.888884    8716 kube-vip.go:115] generating kube-vip config ...
	I0210 11:06:10.896542    8716 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0210 11:06:10.925809    8716 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0210 11:06:10.925907    8716 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.29.143.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.9
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
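This manifest is a static pod, later dropped into /etc/kubernetes/manifests/kube-vip.yaml so the kubelet runs it without needing the API server: each control-plane node's kube-vip competes for the plndr-cp-lock lease, and the leader answers ARP for the VIP 172.29.143.254 while lb_enable spreads apiserver traffic across all control planes on 8443. kube-vip.go:115 renders this text from a template; a minimal text/template sketch of that style of rendering (field names are ours and cover only a subset of the manifest above):

package main

import (
	"os"
	"text/template"
)

// Illustrative subset of the kube-vip static-pod manifest.
const manifest = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - args: ["manager"]
    image: {{.Image}}
    name: kube-vip
    env:
    - name: address
      value: "{{.VIP}}"
    - name: port
      value: "{{.Port}}"
    - name: lb_enable
      value: "true"
  hostNetwork: true
`

type vipConfig struct {
	Image string
	VIP   string
	Port  string
}

func main() {
	t := template.Must(template.New("kube-vip").Parse(manifest))
	t.Execute(os.Stdout, vipConfig{
		Image: "ghcr.io/kube-vip/kube-vip:v0.8.9",
		VIP:   "172.29.143.254",
		Port:  "8443",
	})
}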
	I0210 11:06:10.934552    8716 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0210 11:06:10.950456    8716 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.32.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.32.1': No such file or directory
	
	Initiating transfer...
	I0210 11:06:10.958757    8716 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.32.1
	I0210 11:06:10.979710    8716 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubectl.sha256
	I0210 11:06:10.979753    8716 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubeadm.sha256
	I0210 11:06:10.979753    8716 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubelet.sha256
	I0210 11:06:10.979753    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.32.1/kubectl -> /var/lib/minikube/binaries/v1.32.1/kubectl
	I0210 11:06:10.979753    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.32.1/kubeadm -> /var/lib/minikube/binaries/v1.32.1/kubeadm
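binary.go:74 skips the local cache and streams each binary straight from dl.k8s.io, with a checksum=file: suffix pointing at the published .sha256 (go-getter style checksum syntax). The essential pattern is download-while-hashing, then compare against the published digest; a self-contained sketch of that pattern, assuming plain net/http rather than minikube's downloader:

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// fetch downloads url into path and returns the SHA-256 of what was written.
func fetch(url, path string) (string, error) {
	resp, err := http.Get(url)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	out, err := os.Create(path)
	if err != nil {
		return "", err
	}
	defer out.Close()
	h := sha256.New()
	// Hash the stream while writing it, so no second read pass is needed.
	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
		return "", err
	}
	return hex.EncodeToString(h.Sum(nil)), nil
}

func main() {
	base := "https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubelet"
	got, err := fetch(base, "kubelet")
	if err != nil {
		panic(err)
	}
	resp, err := http.Get(base + ".sha256")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	want, _ := io.ReadAll(resp.Body)
	// The .sha256 file holds the hex digest (possibly followed by a filename).
	if got != strings.Fields(string(want))[0] {
		panic(fmt.Sprintf("checksum mismatch: got %s", got))
	}
	fmt.Println("kubelet verified:", got)
}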
	I0210 11:06:10.991260    8716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0210 11:06:10.991446    8716 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.1/kubeadm
	I0210 11:06:10.991446    8716 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.1/kubectl
	I0210 11:06:11.014019    8716 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.1/kubeadm': No such file or directory
	I0210 11:06:11.014019    8716 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.1/kubectl': No such file or directory
	I0210 11:06:11.014019    8716 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.32.1/kubelet -> /var/lib/minikube/binaries/v1.32.1/kubelet
	I0210 11:06:11.014019    8716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.32.1/kubeadm --> /var/lib/minikube/binaries/v1.32.1/kubeadm (70942872 bytes)
	I0210 11:06:11.014019    8716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.32.1/kubectl --> /var/lib/minikube/binaries/v1.32.1/kubectl (57323672 bytes)
	I0210 11:06:11.022380    8716 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.1/kubelet
	I0210 11:06:11.074693    8716 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.1/kubelet': No such file or directory
	I0210 11:06:11.075271    8716 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.32.1/kubelet --> /var/lib/minikube/binaries/v1.32.1/kubelet (77398276 bytes)
	I0210 11:06:12.178493    8716 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0210 11:06:12.197347    8716 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0210 11:06:12.233885    8716 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0210 11:06:12.264780    8716 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0210 11:06:12.305502    8716 ssh_runner.go:195] Run: grep 172.29.143.254	control-plane.minikube.internal$ /etc/hosts
	I0210 11:06:12.312654    8716 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.29.143.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0210 11:06:12.343296    8716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 11:06:12.541037    8716 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0210 11:06:12.574076    8716 host.go:66] Checking if "ha-335100" exists ...
	I0210 11:06:12.574783    8716 start.go:317] joinCluster: &{Name:ha-335100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:ha-335100 Namespace:default APIServerHAVIP:172.29.143.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.29.136.99 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.29.139.212 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:172.29.143.243 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 11:06:12.574783    8716 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0210 11:06:12.574783    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100 ).state
	I0210 11:06:14.606327    8716 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:06:14.606327    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:06:14.606327    8716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100 ).networkadapters[0]).ipaddresses[0]
	I0210 11:06:17.087838    8716 main.go:141] libmachine: [stdout =====>] : 172.29.136.99
	
	I0210 11:06:17.088198    8716 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:06:17.088262    8716 sshutil.go:53] new ssh client: &{IP:172.29.136.99 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-335100\id_rsa Username:docker}
	I0210 11:06:17.288068    8716 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm token create --print-join-command --ttl=0": (4.7129136s)
	I0210 11:06:17.288160    8716 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:172.29.143.243 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0210 11:06:17.288252    8716 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 2z2zrn.nkcfcdek82976009 --discovery-token-ca-cert-hash sha256:1b8cb99781302ec691f777094c36ac43bdecc74e7ca118c5fbf40794f47c93c3 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-335100-m03 --control-plane --apiserver-advertise-address=172.29.143.243 --apiserver-bind-port=8443"
	I0210 11:06:58.578334    8716 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 2z2zrn.nkcfcdek82976009 --discovery-token-ca-cert-hash sha256:1b8cb99781302ec691f777094c36ac43bdecc74e7ca118c5fbf40794f47c93c3 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-335100-m03 --control-plane --apiserver-advertise-address=172.29.143.243 --apiserver-bind-port=8443": (41.2896023s)
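The join command that just completed authenticates in both directions: the bootstrap token proves the joining node to the cluster, while --discovery-token-ca-cert-hash lets the node verify the control plane before trusting it, since kubeadm defines that value as sha256 over the DER-encoded Subject Public Key Info of the cluster CA certificate. A sketch of computing it from ca.crt (the path here is illustrative; inside the VM the file is /var/lib/minikube/certs/ca.crt):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	data, err := os.ReadFile("ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm hashes the raw DER SubjectPublicKeyInfo of the CA cert,
	// yielding the sha256:... value seen in the kubeadm join line above.
	fmt.Printf("sha256:%x\n", sha256.Sum256(cert.RawSubjectPublicKeyInfo))
}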
	I0210 11:06:58.579779    8716 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0210 11:06:59.264069    8716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-335100-m03 minikube.k8s.io/updated_at=2025_02_10T11_06_59_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=a597502568cd649748018b4cfeb698a4b8b36160 minikube.k8s.io/name=ha-335100 minikube.k8s.io/primary=false
	I0210 11:06:59.416137    8716 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-335100-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0210 11:06:59.597705    8716 start.go:319] duration metric: took 47.0223761s to joinCluster
	I0210 11:06:59.598136    8716 start.go:235] Will wait 6m0s for node &{Name:m03 IP:172.29.143.243 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0210 11:06:59.598687    8716 config.go:182] Loaded profile config "ha-335100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0210 11:06:59.622819    8716 out.go:177] * Verifying Kubernetes components...
	I0210 11:06:59.638372    8716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 11:06:59.964902    8716 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0210 11:06:59.993297    8716 loader.go:402] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0210 11:06:59.993879    8716 kapi.go:59] client config for ha-335100: &rest.Config{Host:"https://172.29.143.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\ha-335100\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\ha-335100\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2a2d300), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0210 11:06:59.993879    8716 kubeadm.go:483] Overriding stale ClientConfig host https://172.29.143.254:8443 with https://172.29.136.99:8443
	I0210 11:06:59.994766    8716 node_ready.go:35] waiting up to 6m0s for node "ha-335100-m03" to be "Ready" ...
	I0210 11:06:59.994991    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m03
	I0210 11:06:59.994991    8716 round_trippers.go:476] Request Headers:
	I0210 11:06:59.994991    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:06:59.995073    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:00.010316    8716 round_trippers.go:581] Response Status: 200 OK in 15 milliseconds
	I0210 11:07:00.495925    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m03
	I0210 11:07:00.495925    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:00.495925    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:00.495925    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:00.501392    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:07:00.995607    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m03
	I0210 11:07:00.995607    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:00.995607    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:00.995607    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:01.009859    8716 round_trippers.go:581] Response Status: 200 OK in 14 milliseconds
	I0210 11:07:01.495982    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m03
	I0210 11:07:01.495982    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:01.495982    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:01.495982    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:01.501850    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:07:01.995551    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m03
	I0210 11:07:01.995551    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:01.995551    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:01.995551    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:02.008887    8716 round_trippers.go:581] Response Status: 200 OK in 13 milliseconds
	I0210 11:07:02.009138    8716 node_ready.go:53] node "ha-335100-m03" has status "Ready":"False"
	I0210 11:07:02.495647    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m03
	I0210 11:07:02.495647    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:02.495647    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:02.495647    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:02.500655    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:07:02.995202    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m03
	I0210 11:07:02.995202    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:02.995202    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:02.995202    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:03.026092    8716 round_trippers.go:581] Response Status: 200 OK in 30 milliseconds
	I0210 11:07:03.496347    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m03
	I0210 11:07:03.496347    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:03.496347    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:03.496347    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:03.501974    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:07:03.995145    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m03
	I0210 11:07:03.995145    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:03.995145    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:03.995145    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:04.005153    8716 round_trippers.go:581] Response Status: 200 OK in 10 milliseconds
	I0210 11:07:04.495809    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m03
	I0210 11:07:04.495809    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:04.495877    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:04.495877    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:04.501532    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:07:04.501532    8716 node_ready.go:53] node "ha-335100-m03" has status "Ready":"False"
	I0210 11:07:04.996633    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m03
	I0210 11:07:04.996751    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:04.996751    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:04.996751    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:05.002269    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:07:05.495489    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m03
	I0210 11:07:05.495489    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:05.495489    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:05.495489    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:05.610530    8716 round_trippers.go:581] Response Status: 200 OK in 115 milliseconds
	I0210 11:07:05.995197    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m03
	I0210 11:07:05.995197    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:05.995197    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:05.995197    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:06.000705    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:07:06.495338    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m03
	I0210 11:07:06.495338    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:06.495338    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:06.495338    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:06.500646    8716 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 11:07:06.995985    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m03
	I0210 11:07:06.996047    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:06.996107    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:06.996107    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:07.002234    8716 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0210 11:07:07.002596    8716 node_ready.go:53] node "ha-335100-m03" has status "Ready":"False"
	I0210 11:07:07.496304    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m03
	I0210 11:07:07.496304    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:07.496304    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:07.496304    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:07.500878    8716 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 11:07:07.996077    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m03
	I0210 11:07:07.996157    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:07.996157    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:07.996157    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:08.001017    8716 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 11:07:08.495926    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m03
	I0210 11:07:08.495926    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:08.495926    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:08.495926    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:08.501081    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:07:08.995324    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m03
	I0210 11:07:08.995324    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:08.995324    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:08.995324    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:09.000292    8716 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 11:07:09.495596    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m03
	I0210 11:07:09.496014    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:09.496014    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:09.496014    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:09.501585    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:07:09.501981    8716 node_ready.go:53] node "ha-335100-m03" has status "Ready":"False"
	I0210 11:07:09.996018    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m03
	I0210 11:07:09.996018    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:09.996018    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:09.996018    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:10.001430    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:07:10.495766    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m03
	I0210 11:07:10.495766    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:10.495766    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:10.495766    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:10.501091    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:07:10.995022    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m03
	I0210 11:07:10.995022    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:10.995022    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:10.995022    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:11.000277    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:07:11.495511    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m03
	I0210 11:07:11.495511    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:11.495511    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:11.495511    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:11.501446    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:07:11.997309    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m03
	I0210 11:07:11.997309    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:11.997309    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:11.997309    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:12.003132    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:07:12.003433    8716 node_ready.go:53] node "ha-335100-m03" has status "Ready":"False"
	I0210 11:07:12.496399    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m03
	I0210 11:07:12.496399    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:12.496470    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:12.496470    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:12.501424    8716 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 11:07:12.995715    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m03
	I0210 11:07:12.995715    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:12.995715    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:12.995715    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:13.001276    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:07:13.495516    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m03
	I0210 11:07:13.495516    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:13.495586    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:13.495586    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:13.506956    8716 round_trippers.go:581] Response Status: 200 OK in 11 milliseconds
	I0210 11:07:13.995458    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m03
	I0210 11:07:13.995458    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:13.995458    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:13.995458    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:14.001513    8716 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0210 11:07:14.496067    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m03
	I0210 11:07:14.496067    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:14.496067    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:14.496067    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:14.501677    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:07:14.501677    8716 node_ready.go:53] node "ha-335100-m03" has status "Ready":"False"
	I0210 11:07:14.996471    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m03
	I0210 11:07:14.996471    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:14.996471    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:14.996471    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:15.001790    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:07:15.496341    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m03
	I0210 11:07:15.496341    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:15.496416    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:15.496416    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:15.501201    8716 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 11:07:15.996278    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m03
	I0210 11:07:15.996354    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:15.996354    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:15.996354    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:16.002237    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:07:16.496696    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m03
	I0210 11:07:16.496696    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:16.496696    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:16.496696    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:16.505315    8716 round_trippers.go:581] Response Status: 200 OK in 8 milliseconds
	I0210 11:07:16.505315    8716 node_ready.go:53] node "ha-335100-m03" has status "Ready":"False"
	I0210 11:07:16.996001    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m03
	I0210 11:07:16.996001    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:16.996001    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:16.996001    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:17.001502    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:07:17.496005    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m03
	I0210 11:07:17.496005    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:17.496005    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:17.496005    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:17.501103    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:07:17.995645    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m03
	I0210 11:07:17.995645    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:17.995645    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:17.995645    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:18.000757    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:07:18.496080    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m03
	I0210 11:07:18.496482    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:18.496482    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:18.496553    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:18.501160    8716 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 11:07:18.996617    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m03
	I0210 11:07:18.996617    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:18.996617    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:18.996617    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:19.000935    8716 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 11:07:19.000935    8716 node_ready.go:53] node "ha-335100-m03" has status "Ready":"False"
	I0210 11:07:19.497592    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m03
	I0210 11:07:19.497592    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:19.497592    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:19.497592    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:19.503678    8716 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0210 11:07:19.996355    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m03
	I0210 11:07:19.996355    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:19.996424    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:19.996424    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:20.003488    8716 round_trippers.go:581] Response Status: 200 OK in 7 milliseconds
	I0210 11:07:20.496007    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m03
	I0210 11:07:20.496007    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:20.496007    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:20.496007    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:20.501352    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:07:20.995396    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m03
	I0210 11:07:20.995826    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:20.995826    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:20.995826    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:21.001056    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:07:21.006143    8716 node_ready.go:53] node "ha-335100-m03" has status "Ready":"False"
	I0210 11:07:21.496840    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m03
	I0210 11:07:21.496840    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:21.496924    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:21.496924    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:21.503078    8716 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0210 11:07:21.996375    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m03
	I0210 11:07:21.996375    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:21.996375    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:21.996375    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:22.002139    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:07:22.496215    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m03
	I0210 11:07:22.496302    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:22.496302    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:22.496302    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:22.501730    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:07:22.995803    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m03
	I0210 11:07:22.995803    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:22.995803    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:22.995803    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:23.000911    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:07:23.001794    8716 node_ready.go:49] node "ha-335100-m03" has status "Ready":"True"
	I0210 11:07:23.001861    8716 node_ready.go:38] duration metric: took 23.0067619s for node "ha-335100-m03" to be "Ready" ...
	I0210 11:07:23.001861    8716 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0210 11:07:23.001944    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods
	I0210 11:07:23.001944    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:23.001944    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:23.001944    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:23.008059    8716 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0210 11:07:23.010098    8716 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-gc5gf" in "kube-system" namespace to be "Ready" ...
	I0210 11:07:23.010274    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-gc5gf
	I0210 11:07:23.010328    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:23.010328    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:23.010328    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:23.015070    8716 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 11:07:23.015691    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100
	I0210 11:07:23.015721    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:23.015721    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:23.015721    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:23.019447    8716 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 11:07:23.020298    8716 pod_ready.go:93] pod "coredns-668d6bf9bc-gc5gf" in "kube-system" namespace has status "Ready":"True"
	I0210 11:07:23.020325    8716 pod_ready.go:82] duration metric: took 10.1675ms for pod "coredns-668d6bf9bc-gc5gf" in "kube-system" namespace to be "Ready" ...
	I0210 11:07:23.020325    8716 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-s44gp" in "kube-system" namespace to be "Ready" ...
	I0210 11:07:23.020325    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-s44gp
	I0210 11:07:23.020325    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:23.020325    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:23.020325    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:23.023967    8716 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 11:07:23.024790    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100
	I0210 11:07:23.024834    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:23.024834    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:23.024864    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:23.028619    8716 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 11:07:23.028619    8716 pod_ready.go:93] pod "coredns-668d6bf9bc-s44gp" in "kube-system" namespace has status "Ready":"True"
	I0210 11:07:23.028619    8716 pod_ready.go:82] duration metric: took 8.2944ms for pod "coredns-668d6bf9bc-s44gp" in "kube-system" namespace to be "Ready" ...
	I0210 11:07:23.028619    8716 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-335100" in "kube-system" namespace to be "Ready" ...
	I0210 11:07:23.028619    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods/etcd-ha-335100
	I0210 11:07:23.028619    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:23.028619    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:23.028619    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:23.032713    8716 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 11:07:23.034447    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100
	I0210 11:07:23.034541    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:23.034541    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:23.034541    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:23.039204    8716 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 11:07:23.039827    8716 pod_ready.go:93] pod "etcd-ha-335100" in "kube-system" namespace has status "Ready":"True"
	I0210 11:07:23.039827    8716 pod_ready.go:82] duration metric: took 11.2076ms for pod "etcd-ha-335100" in "kube-system" namespace to be "Ready" ...
	I0210 11:07:23.039827    8716 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-335100-m02" in "kube-system" namespace to be "Ready" ...
	I0210 11:07:23.039827    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods/etcd-ha-335100-m02
	I0210 11:07:23.039827    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:23.039827    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:23.039827    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:23.065385    8716 round_trippers.go:581] Response Status: 200 OK in 25 milliseconds
	I0210 11:07:23.065547    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:07:23.065547    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:23.065547    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:23.065547    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:23.071463    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:07:23.072305    8716 pod_ready.go:93] pod "etcd-ha-335100-m02" in "kube-system" namespace has status "Ready":"True"
	I0210 11:07:23.072378    8716 pod_ready.go:82] duration metric: took 32.5504ms for pod "etcd-ha-335100-m02" in "kube-system" namespace to be "Ready" ...
	I0210 11:07:23.072378    8716 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-335100-m03" in "kube-system" namespace to be "Ready" ...
	I0210 11:07:23.196129    8716 request.go:661] Waited for 123.6666ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods/etcd-ha-335100-m03
	I0210 11:07:23.196129    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods/etcd-ha-335100-m03
	I0210 11:07:23.196129    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:23.196129    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:23.196129    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:23.201858    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:07:23.396650    8716 request.go:661] Waited for 193.7727ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.136.99:8443/api/v1/nodes/ha-335100-m03
	I0210 11:07:23.396650    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m03
	I0210 11:07:23.396650    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:23.396650    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:23.396650    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:23.402347    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:07:23.402911    8716 pod_ready.go:93] pod "etcd-ha-335100-m03" in "kube-system" namespace has status "Ready":"True"
	I0210 11:07:23.402982    8716 pod_ready.go:82] duration metric: took 330.6006ms for pod "etcd-ha-335100-m03" in "kube-system" namespace to be "Ready" ...
	I0210 11:07:23.402982    8716 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-335100" in "kube-system" namespace to be "Ready" ...
	I0210 11:07:23.596531    8716 request.go:661] Waited for 193.4118ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-335100
	I0210 11:07:23.596981    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-335100
	I0210 11:07:23.597052    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:23.597052    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:23.597052    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:23.602918    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:07:23.796109    8716 request.go:661] Waited for 192.27ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.136.99:8443/api/v1/nodes/ha-335100
	I0210 11:07:23.796575    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100
	I0210 11:07:23.796654    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:23.796654    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:23.796654    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:23.802257    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:07:23.803008    8716 pod_ready.go:93] pod "kube-apiserver-ha-335100" in "kube-system" namespace has status "Ready":"True"
	I0210 11:07:23.803008    8716 pod_ready.go:82] duration metric: took 400.0211ms for pod "kube-apiserver-ha-335100" in "kube-system" namespace to be "Ready" ...
	I0210 11:07:23.803008    8716 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-335100-m02" in "kube-system" namespace to be "Ready" ...
	I0210 11:07:23.995899    8716 request.go:661] Waited for 192.7914ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-335100-m02
	I0210 11:07:23.995899    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-335100-m02
	I0210 11:07:23.995899    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:23.995899    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:23.995899    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:24.004105    8716 round_trippers.go:581] Response Status: 200 OK in 8 milliseconds
	I0210 11:07:24.196286    8716 request.go:661] Waited for 191.756ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:07:24.196286    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:07:24.196286    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:24.196286    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:24.196286    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:24.202219    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:07:24.202522    8716 pod_ready.go:93] pod "kube-apiserver-ha-335100-m02" in "kube-system" namespace has status "Ready":"True"
	I0210 11:07:24.202522    8716 pod_ready.go:82] duration metric: took 399.4114ms for pod "kube-apiserver-ha-335100-m02" in "kube-system" namespace to be "Ready" ...
	I0210 11:07:24.202522    8716 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-335100-m03" in "kube-system" namespace to be "Ready" ...
	I0210 11:07:24.396676    8716 request.go:661] Waited for 194.05ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-335100-m03
	I0210 11:07:24.397153    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-335100-m03
	I0210 11:07:24.397153    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:24.397153    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:24.397153    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:24.414380    8716 round_trippers.go:581] Response Status: 200 OK in 16 milliseconds
	I0210 11:07:24.596715    8716 request.go:661] Waited for 182.1205ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.136.99:8443/api/v1/nodes/ha-335100-m03
	I0210 11:07:24.596715    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m03
	I0210 11:07:24.596715    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:24.596715    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:24.596715    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:24.602619    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:07:24.602938    8716 pod_ready.go:93] pod "kube-apiserver-ha-335100-m03" in "kube-system" namespace has status "Ready":"True"
	I0210 11:07:24.603052    8716 pod_ready.go:82] duration metric: took 400.5261ms for pod "kube-apiserver-ha-335100-m03" in "kube-system" namespace to be "Ready" ...
	I0210 11:07:24.603052    8716 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-335100" in "kube-system" namespace to be "Ready" ...
	I0210 11:07:24.796423    8716 request.go:661] Waited for 193.2651ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-335100
	I0210 11:07:24.796423    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-335100
	I0210 11:07:24.796423    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:24.796423    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:24.796423    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:24.802443    8716 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0210 11:07:24.996691    8716 request.go:661] Waited for 193.2589ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.136.99:8443/api/v1/nodes/ha-335100
	I0210 11:07:24.996691    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100
	I0210 11:07:24.996691    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:24.996691    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:24.996691    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:25.002489    8716 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 11:07:25.002489    8716 pod_ready.go:93] pod "kube-controller-manager-ha-335100" in "kube-system" namespace has status "Ready":"True"
	I0210 11:07:25.002489    8716 pod_ready.go:82] duration metric: took 399.432ms for pod "kube-controller-manager-ha-335100" in "kube-system" namespace to be "Ready" ...
	I0210 11:07:25.002489    8716 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-335100-m02" in "kube-system" namespace to be "Ready" ...
	I0210 11:07:25.195910    8716 request.go:661] Waited for 193.4189ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-335100-m02
	I0210 11:07:25.196250    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-335100-m02
	I0210 11:07:25.196393    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:25.196420    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:25.196420    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:25.202596    8716 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0210 11:07:25.396083    8716 request.go:661] Waited for 192.3614ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:07:25.396404    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:07:25.396404    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:25.396404    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:25.396404    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:25.402058    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:07:25.402458    8716 pod_ready.go:93] pod "kube-controller-manager-ha-335100-m02" in "kube-system" namespace has status "Ready":"True"
	I0210 11:07:25.402522    8716 pod_ready.go:82] duration metric: took 399.9644ms for pod "kube-controller-manager-ha-335100-m02" in "kube-system" namespace to be "Ready" ...
	I0210 11:07:25.402522    8716 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-335100-m03" in "kube-system" namespace to be "Ready" ...
	I0210 11:07:25.595984    8716 request.go:661] Waited for 193.3413ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-335100-m03
	I0210 11:07:25.596427    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-335100-m03
	I0210 11:07:25.596493    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:25.596493    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:25.596493    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:25.601751    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:07:25.796976    8716 request.go:661] Waited for 195.2228ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.136.99:8443/api/v1/nodes/ha-335100-m03
	I0210 11:07:25.796976    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m03
	I0210 11:07:25.796976    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:25.796976    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:25.796976    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:25.804074    8716 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0210 11:07:25.804877    8716 pod_ready.go:93] pod "kube-controller-manager-ha-335100-m03" in "kube-system" namespace has status "Ready":"True"
	I0210 11:07:25.804979    8716 pod_ready.go:82] duration metric: took 402.4533ms for pod "kube-controller-manager-ha-335100-m03" in "kube-system" namespace to be "Ready" ...
	I0210 11:07:25.804979    8716 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-b5xnq" in "kube-system" namespace to be "Ready" ...
	I0210 11:07:25.996125    8716 request.go:661] Waited for 191.0305ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods/kube-proxy-b5xnq
	I0210 11:07:25.996125    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods/kube-proxy-b5xnq
	I0210 11:07:25.996125    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:25.996125    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:25.996125    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:26.001647    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:07:26.195951    8716 request.go:661] Waited for 193.482ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:07:26.196400    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:07:26.196400    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:26.196400    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:26.196400    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:26.201307    8716 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 11:07:26.201612    8716 pod_ready.go:93] pod "kube-proxy-b5xnq" in "kube-system" namespace has status "Ready":"True"
	I0210 11:07:26.201770    8716 pod_ready.go:82] duration metric: took 396.7155ms for pod "kube-proxy-b5xnq" in "kube-system" namespace to be "Ready" ...
	I0210 11:07:26.201770    8716 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-b9g27" in "kube-system" namespace to be "Ready" ...
	I0210 11:07:26.396766    8716 request.go:661] Waited for 194.7896ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods/kube-proxy-b9g27
	I0210 11:07:26.396766    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods/kube-proxy-b9g27
	I0210 11:07:26.396766    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:26.396766    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:26.396766    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:26.403095    8716 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0210 11:07:26.597069    8716 request.go:661] Waited for 193.8082ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.136.99:8443/api/v1/nodes/ha-335100-m03
	I0210 11:07:26.597349    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m03
	I0210 11:07:26.597349    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:26.597349    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:26.597349    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:26.605574    8716 round_trippers.go:581] Response Status: 200 OK in 8 milliseconds
	I0210 11:07:26.605911    8716 pod_ready.go:93] pod "kube-proxy-b9g27" in "kube-system" namespace has status "Ready":"True"
	I0210 11:07:26.606064    8716 pod_ready.go:82] duration metric: took 404.2891ms for pod "kube-proxy-b9g27" in "kube-system" namespace to be "Ready" ...
	I0210 11:07:26.606064    8716 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-xzs7w" in "kube-system" namespace to be "Ready" ...
	I0210 11:07:26.795958    8716 request.go:661] Waited for 189.676ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xzs7w
	I0210 11:07:26.796192    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xzs7w
	I0210 11:07:26.796192    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:26.796192    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:26.796192    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:26.804013    8716 round_trippers.go:581] Response Status: 200 OK in 7 milliseconds
	I0210 11:07:26.996512    8716 request.go:661] Waited for 191.5085ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.136.99:8443/api/v1/nodes/ha-335100
	I0210 11:07:26.996859    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100
	I0210 11:07:26.996859    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:26.996859    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:26.996859    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:27.002467    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:07:27.003128    8716 pod_ready.go:93] pod "kube-proxy-xzs7w" in "kube-system" namespace has status "Ready":"True"
	I0210 11:07:27.003128    8716 pod_ready.go:82] duration metric: took 397.0596ms for pod "kube-proxy-xzs7w" in "kube-system" namespace to be "Ready" ...
	I0210 11:07:27.003128    8716 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-335100" in "kube-system" namespace to be "Ready" ...
	I0210 11:07:27.196642    8716 request.go:661] Waited for 193.3443ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-335100
	I0210 11:07:27.196642    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-335100
	I0210 11:07:27.196642    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:27.196642    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:27.196642    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:27.203093    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:07:27.397011    8716 request.go:661] Waited for 193.5149ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.136.99:8443/api/v1/nodes/ha-335100
	I0210 11:07:27.397236    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100
	I0210 11:07:27.397236    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:27.397236    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:27.397236    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:27.402240    8716 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:07:27.402240    8716 pod_ready.go:93] pod "kube-scheduler-ha-335100" in "kube-system" namespace has status "Ready":"True"
	I0210 11:07:27.402240    8716 pod_ready.go:82] duration metric: took 399.0114ms for pod "kube-scheduler-ha-335100" in "kube-system" namespace to be "Ready" ...
	I0210 11:07:27.402240    8716 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-335100-m02" in "kube-system" namespace to be "Ready" ...
	I0210 11:07:27.596401    8716 request.go:661] Waited for 193.6151ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-335100-m02
	I0210 11:07:27.596401    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-335100-m02
	I0210 11:07:27.596401    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:27.596401    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:27.596401    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:27.601772    8716 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 11:07:27.797172    8716 request.go:661] Waited for 195.334ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:07:27.797172    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m02
	I0210 11:07:27.797172    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:27.797172    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:27.797172    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:27.803289    8716 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0210 11:07:27.803612    8716 pod_ready.go:93] pod "kube-scheduler-ha-335100-m02" in "kube-system" namespace has status "Ready":"True"
	I0210 11:07:27.803612    8716 pod_ready.go:82] duration metric: took 401.3673ms for pod "kube-scheduler-ha-335100-m02" in "kube-system" namespace to be "Ready" ...
	I0210 11:07:27.803762    8716 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-335100-m03" in "kube-system" namespace to be "Ready" ...
	I0210 11:07:27.996417    8716 request.go:661] Waited for 192.653ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-335100-m03
	I0210 11:07:27.996417    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-335100-m03
	I0210 11:07:27.996417    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:27.996417    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:27.996417    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:28.002489    8716 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0210 11:07:28.195920    8716 request.go:661] Waited for 192.5439ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.136.99:8443/api/v1/nodes/ha-335100-m03
	I0210 11:07:28.195920    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes/ha-335100-m03
	I0210 11:07:28.195920    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:28.196264    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:28.196264    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:28.201136    8716 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 11:07:28.202162    8716 pod_ready.go:93] pod "kube-scheduler-ha-335100-m03" in "kube-system" namespace has status "Ready":"True"
	I0210 11:07:28.202245    8716 pod_ready.go:82] duration metric: took 398.4788ms for pod "kube-scheduler-ha-335100-m03" in "kube-system" namespace to be "Ready" ...
	I0210 11:07:28.202245    8716 pod_ready.go:39] duration metric: took 5.2003237s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0210 11:07:28.202319    8716 api_server.go:52] waiting for apiserver process to appear ...
	I0210 11:07:28.210177    8716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:07:28.235447    8716 api_server.go:72] duration metric: took 28.6369309s to wait for apiserver process to appear ...
	I0210 11:07:28.235447    8716 api_server.go:88] waiting for apiserver healthz status ...
	I0210 11:07:28.235566    8716 api_server.go:253] Checking apiserver healthz at https://172.29.136.99:8443/healthz ...
	I0210 11:07:28.246305    8716 api_server.go:279] https://172.29.136.99:8443/healthz returned 200:
	ok
	I0210 11:07:28.246305    8716 round_trippers.go:470] GET https://172.29.136.99:8443/version
	I0210 11:07:28.246305    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:28.246305    8716 round_trippers.go:480]     Accept: application/json, */*
	I0210 11:07:28.246305    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:28.247969    8716 round_trippers.go:581] Response Status: 200 OK in 1 milliseconds
	I0210 11:07:28.247969    8716 api_server.go:141] control plane version: v1.32.1
	I0210 11:07:28.247969    8716 api_server.go:131] duration metric: took 12.4024ms to wait for apiserver health ...
	I0210 11:07:28.247969    8716 system_pods.go:43] waiting for kube-system pods to appear ...
	I0210 11:07:28.396724    8716 request.go:661] Waited for 148.7531ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods
	I0210 11:07:28.397026    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods
	I0210 11:07:28.397026    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:28.397026    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:28.397026    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:28.403068    8716 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0210 11:07:28.405177    8716 system_pods.go:59] 24 kube-system pods found
	I0210 11:07:28.405177    8716 system_pods.go:61] "coredns-668d6bf9bc-gc5gf" [1a78ccff-3a66-49ce-9c52-79a7799a56e2] Running
	I0210 11:07:28.405398    8716 system_pods.go:61] "coredns-668d6bf9bc-s44gp" [1a2504f9-357f-418a-968c-274b6ab1bbe3] Running
	I0210 11:07:28.405398    8716 system_pods.go:61] "etcd-ha-335100" [da325f74-680c-43cc-9b62-9c660938b7f5] Running
	I0210 11:07:28.405398    8716 system_pods.go:61] "etcd-ha-335100-m02" [96814d40-8c96-4117-b028-2417473031c2] Running
	I0210 11:07:28.405398    8716 system_pods.go:61] "etcd-ha-335100-m03" [86de14e3-89f9-4408-94b1-3881bddea6d4] Running
	I0210 11:07:28.405398    8716 system_pods.go:61] "kindnet-hpmm5" [74943229-cd2a-4365-9da3-f2be8a2e2663] Running
	I0210 11:07:28.405398    8716 system_pods.go:61] "kindnet-lc7hv" [499e3fe2-6d2a-4e55-bc84-153216c1896b] Running
	I0210 11:07:28.405398    8716 system_pods.go:61] "kindnet-slpqn" [6d170b01-e6f2-451d-8427-2b4ef3923739] Running
	I0210 11:07:28.405398    8716 system_pods.go:61] "kube-apiserver-ha-335100" [f37d02ff-6c3f-47d5-b98e-73ec57c5b54d] Running
	I0210 11:07:28.405398    8716 system_pods.go:61] "kube-apiserver-ha-335100-m02" [eac020f2-732f-4fba-bde6-266a2729c5b9] Running
	I0210 11:07:28.405398    8716 system_pods.go:61] "kube-apiserver-ha-335100-m03" [61432db2-f474-42cb-b1a2-fd460d25d68d] Running
	I0210 11:07:28.405398    8716 system_pods.go:61] "kube-controller-manager-ha-335100" [3e128f0d-49f1-42ca-b8da-5ff50b9b35b6] Running
	I0210 11:07:28.405482    8716 system_pods.go:61] "kube-controller-manager-ha-335100-m02" [b5bf933c-7312-433b-aeb7-b299753ff1a6] Running
	I0210 11:07:28.405482    8716 system_pods.go:61] "kube-controller-manager-ha-335100-m03" [7d4b5c47-5a71-44e3-9c45-aec1c1884fd3] Running
	I0210 11:07:28.405482    8716 system_pods.go:61] "kube-proxy-b5xnq" [27ffc58d-1979-40e7-977e-1c5dc46f735a] Running
	I0210 11:07:28.405482    8716 system_pods.go:61] "kube-proxy-b9g27" [b7e5d47d-6677-4d8c-ae0c-b1659c589609] Running
	I0210 11:07:28.405519    8716 system_pods.go:61] "kube-proxy-xzs7w" [5c66a9a8-138b-4b0f-a112-05946040c18b] Running
	I0210 11:07:28.405519    8716 system_pods.go:61] "kube-scheduler-ha-335100" [8daa77b3-ed4d-4905-b83f-d7db23bef734] Running
	I0210 11:07:28.405519    8716 system_pods.go:61] "kube-scheduler-ha-335100-m02" [cb414823-0dd1-4f0f-8b0a-9314acb1324d] Running
	I0210 11:07:28.405519    8716 system_pods.go:61] "kube-scheduler-ha-335100-m03" [92efc5a4-0a3e-48db-95fb-ec22c16729f3] Running
	I0210 11:07:28.405519    8716 system_pods.go:61] "kube-vip-ha-335100" [89e60425-f699-41e3-8034-d871650ab57c] Running
	I0210 11:07:28.405519    8716 system_pods.go:61] "kube-vip-ha-335100-m02" [e21ce143-8caa-4155-ade4-b9b428e82d2b] Running
	I0210 11:07:28.405519    8716 system_pods.go:61] "kube-vip-ha-335100-m03" [0bf9308c-c321-45b3-930b-0129922cc7a5] Running
	I0210 11:07:28.405519    8716 system_pods.go:61] "storage-provisioner" [91b7ca8c-ef3a-4328-83f9-63411c7672cb] Running
	I0210 11:07:28.405519    8716 system_pods.go:74] duration metric: took 157.5481ms to wait for pod list to return data ...
	I0210 11:07:28.405584    8716 default_sa.go:34] waiting for default service account to be created ...
	I0210 11:07:28.595940    8716 request.go:661] Waited for 190.2805ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.136.99:8443/api/v1/namespaces/default/serviceaccounts
	I0210 11:07:28.596274    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/namespaces/default/serviceaccounts
	I0210 11:07:28.596274    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:28.596274    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:28.596274    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:28.602598    8716 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0210 11:07:28.602757    8716 default_sa.go:45] found service account: "default"
	I0210 11:07:28.602757    8716 default_sa.go:55] duration metric: took 197.171ms for default service account to be created ...
	I0210 11:07:28.602825    8716 system_pods.go:116] waiting for k8s-apps to be running ...
	I0210 11:07:28.796839    8716 request.go:661] Waited for 194.0112ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods
	I0210 11:07:28.797151    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/namespaces/kube-system/pods
	I0210 11:07:28.797151    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:28.797151    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:28.797151    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:28.803742    8716 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0210 11:07:28.807080    8716 system_pods.go:86] 24 kube-system pods found
	I0210 11:07:28.807080    8716 system_pods.go:89] "coredns-668d6bf9bc-gc5gf" [1a78ccff-3a66-49ce-9c52-79a7799a56e2] Running
	I0210 11:07:28.807080    8716 system_pods.go:89] "coredns-668d6bf9bc-s44gp" [1a2504f9-357f-418a-968c-274b6ab1bbe3] Running
	I0210 11:07:28.807080    8716 system_pods.go:89] "etcd-ha-335100" [da325f74-680c-43cc-9b62-9c660938b7f5] Running
	I0210 11:07:28.807080    8716 system_pods.go:89] "etcd-ha-335100-m02" [96814d40-8c96-4117-b028-2417473031c2] Running
	I0210 11:07:28.807080    8716 system_pods.go:89] "etcd-ha-335100-m03" [86de14e3-89f9-4408-94b1-3881bddea6d4] Running
	I0210 11:07:28.807183    8716 system_pods.go:89] "kindnet-hpmm5" [74943229-cd2a-4365-9da3-f2be8a2e2663] Running
	I0210 11:07:28.807183    8716 system_pods.go:89] "kindnet-lc7hv" [499e3fe2-6d2a-4e55-bc84-153216c1896b] Running
	I0210 11:07:28.807183    8716 system_pods.go:89] "kindnet-slpqn" [6d170b01-e6f2-451d-8427-2b4ef3923739] Running
	I0210 11:07:28.807183    8716 system_pods.go:89] "kube-apiserver-ha-335100" [f37d02ff-6c3f-47d5-b98e-73ec57c5b54d] Running
	I0210 11:07:28.807183    8716 system_pods.go:89] "kube-apiserver-ha-335100-m02" [eac020f2-732f-4fba-bde6-266a2729c5b9] Running
	I0210 11:07:28.807237    8716 system_pods.go:89] "kube-apiserver-ha-335100-m03" [61432db2-f474-42cb-b1a2-fd460d25d68d] Running
	I0210 11:07:28.807237    8716 system_pods.go:89] "kube-controller-manager-ha-335100" [3e128f0d-49f1-42ca-b8da-5ff50b9b35b6] Running
	I0210 11:07:28.807237    8716 system_pods.go:89] "kube-controller-manager-ha-335100-m02" [b5bf933c-7312-433b-aeb7-b299753ff1a6] Running
	I0210 11:07:28.807237    8716 system_pods.go:89] "kube-controller-manager-ha-335100-m03" [7d4b5c47-5a71-44e3-9c45-aec1c1884fd3] Running
	I0210 11:07:28.807237    8716 system_pods.go:89] "kube-proxy-b5xnq" [27ffc58d-1979-40e7-977e-1c5dc46f735a] Running
	I0210 11:07:28.807237    8716 system_pods.go:89] "kube-proxy-b9g27" [b7e5d47d-6677-4d8c-ae0c-b1659c589609] Running
	I0210 11:07:28.807237    8716 system_pods.go:89] "kube-proxy-xzs7w" [5c66a9a8-138b-4b0f-a112-05946040c18b] Running
	I0210 11:07:28.807237    8716 system_pods.go:89] "kube-scheduler-ha-335100" [8daa77b3-ed4d-4905-b83f-d7db23bef734] Running
	I0210 11:07:28.807237    8716 system_pods.go:89] "kube-scheduler-ha-335100-m02" [cb414823-0dd1-4f0f-8b0a-9314acb1324d] Running
	I0210 11:07:28.807237    8716 system_pods.go:89] "kube-scheduler-ha-335100-m03" [92efc5a4-0a3e-48db-95fb-ec22c16729f3] Running
	I0210 11:07:28.807237    8716 system_pods.go:89] "kube-vip-ha-335100" [89e60425-f699-41e3-8034-d871650ab57c] Running
	I0210 11:07:28.807237    8716 system_pods.go:89] "kube-vip-ha-335100-m02" [e21ce143-8caa-4155-ade4-b9b428e82d2b] Running
	I0210 11:07:28.807317    8716 system_pods.go:89] "kube-vip-ha-335100-m03" [0bf9308c-c321-45b3-930b-0129922cc7a5] Running
	I0210 11:07:28.807317    8716 system_pods.go:89] "storage-provisioner" [91b7ca8c-ef3a-4328-83f9-63411c7672cb] Running
	I0210 11:07:28.807317    8716 system_pods.go:126] duration metric: took 204.4894ms to wait for k8s-apps to be running ...
	I0210 11:07:28.807317    8716 system_svc.go:44] waiting for kubelet service to be running ....
	I0210 11:07:28.814985    8716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0210 11:07:28.841445    8716 system_svc.go:56] duration metric: took 34.1276ms WaitForService to wait for kubelet
	I0210 11:07:28.841445    8716 kubeadm.go:582] duration metric: took 29.2429222s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0210 11:07:28.841445    8716 node_conditions.go:102] verifying NodePressure condition ...
	I0210 11:07:28.996882    8716 request.go:661] Waited for 155.4357ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.136.99:8443/api/v1/nodes
	I0210 11:07:28.996882    8716 round_trippers.go:470] GET https://172.29.136.99:8443/api/v1/nodes
	I0210 11:07:28.996882    8716 round_trippers.go:476] Request Headers:
	I0210 11:07:28.996882    8716 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:07:28.996882    8716 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:07:29.003974    8716 round_trippers.go:581] Response Status: 200 OK in 7 milliseconds
	I0210 11:07:29.004474    8716 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0210 11:07:29.004533    8716 node_conditions.go:123] node cpu capacity is 2
	I0210 11:07:29.004533    8716 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0210 11:07:29.004533    8716 node_conditions.go:123] node cpu capacity is 2
	I0210 11:07:29.004533    8716 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0210 11:07:29.004597    8716 node_conditions.go:123] node cpu capacity is 2
	I0210 11:07:29.004597    8716 node_conditions.go:105] duration metric: took 163.1497ms to run NodePressure ...
	I0210 11:07:29.004597    8716 start.go:241] waiting for startup goroutines ...
	I0210 11:07:29.004663    8716 start.go:255] writing updated cluster config ...
	I0210 11:07:29.013641    8716 ssh_runner.go:195] Run: rm -f paused
	I0210 11:07:29.150620    8716 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
	I0210 11:07:29.157621    8716 out.go:177] * Done! kubectl is now configured to use "ha-335100" cluster and "default" namespace by default
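	
	Note on the repeated "request.go:661] Waited for ... due to client-side throttling, not priority and fairness" entries in the trace above: as the message itself states, these waits are imposed by the Kubernetes client's own token-bucket rate limiter (client-go's defaults are on the order of QPS=5, burst=10), not by server-side API Priority and Fairness. As a rough illustration only, the stand-alone Go sketch below reproduces that mechanism with golang.org/x/time/rate; the limiter values and the request loop are assumptions chosen for demonstration, not minikube's actual code.
	
	    // throttle_sketch.go - minimal token-bucket throttling demo (assumed
	    // values: 5 requests/s, burst of 10; not taken from minikube source).
	    package main
	
	    import (
	    	"context"
	    	"fmt"
	    	"time"
	
	    	"golang.org/x/time/rate"
	    )
	
	    func main() {
	    	// Token bucket: refills at 5 tokens per second, holds at most 10.
	    	limiter := rate.NewLimiter(rate.Limit(5), 10)
	    	ctx := context.Background()
	
	    	for i := 0; i < 15; i++ {
	    		start := time.Now()
	    		// Wait blocks until a token is available; this pause is what
	    		// surfaces as the "Waited for ...ms" lines in the client log.
	    		if err := limiter.Wait(ctx); err != nil {
	    			fmt.Println("wait aborted:", err)
	    			return
	    		}
	    		if d := time.Since(start); d > time.Millisecond {
	    			fmt.Printf("request %d throttled for %v\n", i, d)
	    		}
	    	}
	    }
	
	The first ten requests in the sketch drain the burst and go through immediately; subsequent ones block for roughly 200ms each, matching the ~190ms waits recorded between the paired pod/node GETs above.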
	
	
	==> Docker <==
	Feb 10 11:00:16 ha-335100 dockerd[1449]: time="2025-02-10T11:00:16.721004486Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 10 11:00:16 ha-335100 dockerd[1449]: time="2025-02-10T11:00:16.721083387Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 10 11:00:16 ha-335100 dockerd[1449]: time="2025-02-10T11:00:16.721468791Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 10 11:00:16 ha-335100 dockerd[1449]: time="2025-02-10T11:00:16.730166580Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 10 11:00:16 ha-335100 dockerd[1449]: time="2025-02-10T11:00:16.730603384Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 10 11:00:16 ha-335100 dockerd[1449]: time="2025-02-10T11:00:16.730644485Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 10 11:00:16 ha-335100 dockerd[1449]: time="2025-02-10T11:00:16.731087789Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 10 11:08:04 ha-335100 dockerd[1449]: time="2025-02-10T11:08:04.859662395Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 10 11:08:04 ha-335100 dockerd[1449]: time="2025-02-10T11:08:04.859744397Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 10 11:08:04 ha-335100 dockerd[1449]: time="2025-02-10T11:08:04.859757797Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 10 11:08:04 ha-335100 dockerd[1449]: time="2025-02-10T11:08:04.859908100Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 10 11:08:05 ha-335100 cri-dockerd[1342]: time="2025-02-10T11:08:05Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f44cef7fce909445d3740b68b1d8a594c199ae7ab48880497e2640bc09f9ede6/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Feb 10 11:08:06 ha-335100 cri-dockerd[1342]: time="2025-02-10T11:08:06Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Feb 10 11:08:06 ha-335100 dockerd[1449]: time="2025-02-10T11:08:06.735999365Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 10 11:08:06 ha-335100 dockerd[1449]: time="2025-02-10T11:08:06.736067565Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 10 11:08:06 ha-335100 dockerd[1449]: time="2025-02-10T11:08:06.736081465Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 10 11:08:06 ha-335100 dockerd[1449]: time="2025-02-10T11:08:06.736183966Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 10 11:21:41 ha-335100 dockerd[1442]: time="2025-02-10T11:21:41.860508897Z" level=info msg="ignoring event" container=826b316789d5d6be25b6c63a9712e2baad8151a5881be36f949d87743db968bf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 10 11:21:41 ha-335100 dockerd[1449]: time="2025-02-10T11:21:41.860989602Z" level=info msg="shim disconnected" id=826b316789d5d6be25b6c63a9712e2baad8151a5881be36f949d87743db968bf namespace=moby
	Feb 10 11:21:41 ha-335100 dockerd[1449]: time="2025-02-10T11:21:41.861078603Z" level=warning msg="cleaning up after shim disconnected" id=826b316789d5d6be25b6c63a9712e2baad8151a5881be36f949d87743db968bf namespace=moby
	Feb 10 11:21:41 ha-335100 dockerd[1449]: time="2025-02-10T11:21:41.861091403Z" level=info msg="cleaning up dead shim" namespace=moby
	Feb 10 11:21:43 ha-335100 dockerd[1449]: time="2025-02-10T11:21:43.499741821Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 10 11:21:43 ha-335100 dockerd[1449]: time="2025-02-10T11:21:43.500172126Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 10 11:21:43 ha-335100 dockerd[1449]: time="2025-02-10T11:21:43.500252026Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 10 11:21:43 ha-335100 dockerd[1449]: time="2025-02-10T11:21:43.501589741Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a5d8c31da72f6       22f88dde2caa4                                                                                         6 minutes ago       Running             kube-vip                  1                   2767bce183d0e       kube-vip-ha-335100
	dd08a9f3cc944       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   19 minutes ago      Running             busybox                   0                   f44cef7fce909       busybox-58667487b6-5px7z
	e7ca26bd041b3       c69fa2e9cbf5f                                                                                         27 minutes ago      Running             coredns                   0                   df6e532050a6d       coredns-668d6bf9bc-s44gp
	4fd9a115fcdaa       c69fa2e9cbf5f                                                                                         27 minutes ago      Running             coredns                   0                   175d7cc8c7a0e       coredns-668d6bf9bc-gc5gf
	0932284881cdb       6e38f40d628db                                                                                         27 minutes ago      Running             storage-provisioner       0                   5f14e7cec489a       storage-provisioner
	22d0df1da0c61       kindest/kindnetd@sha256:56ea59f77258052c4506076525318ffa66817500f68e94a50fdf7d600a280d26              28 minutes ago      Running             kindnet-cni               0                   c77ff26256f03       kindnet-hpmm5
	f1c5561320957       e29f9c7391fd9                                                                                         28 minutes ago      Running             kube-proxy                0                   32145bbdfaf77       kube-proxy-xzs7w
	826b316789d5d       ghcr.io/kube-vip/kube-vip@sha256:717b8bef2758c10042d64ae7949201ef7f243d928fce265b04e488e844bf9528     28 minutes ago      Exited              kube-vip                  0                   2767bce183d0e       kube-vip-ha-335100
	5228f69640c2d       019ee182b58e2                                                                                         28 minutes ago      Running             kube-controller-manager   0                   517f5b55c25ac       kube-controller-manager-ha-335100
	25b39e8ce1a49       2b0d6572d062c                                                                                         28 minutes ago      Running             kube-scheduler            0                   b0b115a752128       kube-scheduler-ha-335100
	256becfc62338       95c0bda56fc4d                                                                                         28 minutes ago      Running             kube-apiserver            0                   c99f6d2953c5b       kube-apiserver-ha-335100
	22c1f77dda7a3       a9e7e6b294baf                                                                                         28 minutes ago      Running             etcd                      0                   dcea40235e346       etcd-ha-335100
	
	
	==> coredns [4fd9a115fcda] <==
	[INFO] 10.244.1.2:42435 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000330903s
	[INFO] 10.244.1.2:42112 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000180901s
	[INFO] 10.244.1.2:35959 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000287602s
	[INFO] 10.244.2.2:36308 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000181401s
	[INFO] 10.244.2.2:49545 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000150601s
	[INFO] 10.244.2.2:37477 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000119001s
	[INFO] 10.244.2.2:49576 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000161102s
	[INFO] 10.244.2.2:53438 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000148201s
	[INFO] 10.244.0.4:39119 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000233802s
	[INFO] 10.244.0.4:33066 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000291503s
	[INFO] 10.244.0.4:42248 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000195401s
	[INFO] 10.244.1.2:42786 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000441003s
	[INFO] 10.244.1.2:52892 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000269702s
	[INFO] 10.244.1.2:36279 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000129801s
	[INFO] 10.244.1.2:37975 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000147601s
	[INFO] 10.244.0.4:42121 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000218902s
	[INFO] 10.244.0.4:55956 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000315903s
	[INFO] 10.244.0.4:39006 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000132201s
	[INFO] 10.244.0.4:53772 - 5 "PTR IN 1.128.29.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000151802s
	[INFO] 10.244.1.2:60679 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000204102s
	[INFO] 10.244.1.2:45025 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000127801s
	[INFO] 10.244.1.2:42238 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000158902s
	[INFO] 10.244.1.2:53719 - 5 "PTR IN 1.128.29.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000171801s
	[INFO] 10.244.2.2:55195 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000357103s
	[INFO] 10.244.2.2:40415 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000187202s
	
	
	==> coredns [e7ca26bd041b] <==
	[INFO] 10.244.0.4:59785 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000444503s
	[INFO] 10.244.0.4:56785 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.322386012s
	[INFO] 10.244.0.4:49790 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.060760012s
	[INFO] 10.244.0.4:37219 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.054123172s
	[INFO] 10.244.1.2:44423 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000299503s
	[INFO] 10.244.1.2:58000 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.000173802s
	[INFO] 10.244.2.2:46499 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000212201s
	[INFO] 10.244.2.2:46942 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.000139401s
	[INFO] 10.244.0.4:51006 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000189902s
	[INFO] 10.244.0.4:53888 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000134701s
	[INFO] 10.244.0.4:60890 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000251802s
	[INFO] 10.244.1.2:52016 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.017914651s
	[INFO] 10.244.1.2:42358 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000244602s
	[INFO] 10.244.1.2:35935 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000190502s
	[INFO] 10.244.1.2:60194 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000148801s
	[INFO] 10.244.2.2:36357 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000167801s
	[INFO] 10.244.2.2:56554 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.110643534s
	[INFO] 10.244.2.2:53676 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000097501s
	[INFO] 10.244.0.4:54207 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000265102s
	[INFO] 10.244.2.2:52348 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000132401s
	[INFO] 10.244.2.2:55759 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000255302s
	[INFO] 10.244.2.2:33661 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000446004s
	[INFO] 10.244.2.2:58546 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000851s
	[INFO] 10.244.2.2:41756 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000147702s
	[INFO] 10.244.2.2:57463 - 5 "PTR IN 1.128.29.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000312502s
	
	
	==> describe nodes <==
	Name:               ha-335100
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-335100
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a597502568cd649748018b4cfeb698a4b8b36160
	                    minikube.k8s.io/name=ha-335100
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_02_10T10_59_46_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 10 Feb 2025 10:59:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-335100
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 10 Feb 2025 11:27:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 10 Feb 2025 11:27:59 +0000   Mon, 10 Feb 2025 10:59:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 10 Feb 2025 11:27:59 +0000   Mon, 10 Feb 2025 10:59:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 10 Feb 2025 11:27:59 +0000   Mon, 10 Feb 2025 10:59:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 10 Feb 2025 11:27:59 +0000   Mon, 10 Feb 2025 11:00:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.29.136.99
	  Hostname:    ha-335100
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 b6b0e4336344490cabbe3838ec21fcfa
	  System UUID:                880d7589-4827-264e-a5a8-fd64393ef394
	  Boot ID:                    4de3dd87-b349-4fcc-a75e-64fd8a7b6e07
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.4.0
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-58667487b6-5px7z             0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 coredns-668d6bf9bc-gc5gf             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                 coredns-668d6bf9bc-s44gp             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                 etcd-ha-335100                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         28m
	  kube-system                 kindnet-hpmm5                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      28m
	  kube-system                 kube-apiserver-ha-335100             250m (12%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-controller-manager-ha-335100    200m (10%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-proxy-xzs7w                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-scheduler-ha-335100             100m (5%)     0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-vip-ha-335100                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 28m   kube-proxy       
	  Normal  Starting                 28m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  28m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  28m   kubelet          Node ha-335100 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28m   kubelet          Node ha-335100 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28m   kubelet          Node ha-335100 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           28m   node-controller  Node ha-335100 event: Registered Node ha-335100 in Controller
	  Normal  NodeReady                27m   kubelet          Node ha-335100 status is now: NodeReady
	  Normal  RegisteredNode           24m   node-controller  Node ha-335100 event: Registered Node ha-335100 in Controller
	  Normal  RegisteredNode           20m   node-controller  Node ha-335100 event: Registered Node ha-335100 in Controller
	
	
	Name:               ha-335100-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-335100-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a597502568cd649748018b4cfeb698a4b8b36160
	                    minikube.k8s.io/name=ha-335100
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_02_10T11_03_13_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 10 Feb 2025 11:03:09 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-335100-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 10 Feb 2025 11:23:59 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 10 Feb 2025 11:23:25 +0000   Mon, 10 Feb 2025 11:24:50 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 10 Feb 2025 11:23:25 +0000   Mon, 10 Feb 2025 11:24:50 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 10 Feb 2025 11:23:25 +0000   Mon, 10 Feb 2025 11:24:50 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 10 Feb 2025 11:23:25 +0000   Mon, 10 Feb 2025 11:24:50 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  172.29.139.212
	  Hostname:    ha-335100-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 360d1205d30e4b4489e68c8ddd033d40
	  System UUID:                021fcea2-6be6-324c-9cb2-94399cbeee0d
	  Boot ID:                    d76bcd42-d2fc-4cdc-92b9-b38a76650906
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.4.0
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-58667487b6-r8blr                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 etcd-ha-335100-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         24m
	  kube-system                 kindnet-slpqn                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      24m
	  kube-system                 kube-apiserver-ha-335100-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kube-controller-manager-ha-335100-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kube-proxy-b5xnq                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kube-scheduler-ha-335100-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kube-vip-ha-335100-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 24m                kube-proxy       
	  Normal  RegisteredNode           24m                node-controller  Node ha-335100-m02 event: Registered Node ha-335100-m02 in Controller
	  Normal  NodeHasSufficientMemory  24m (x8 over 24m)  kubelet          Node ha-335100-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24m (x8 over 24m)  kubelet          Node ha-335100-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     24m (x7 over 24m)  kubelet          Node ha-335100-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           24m                node-controller  Node ha-335100-m02 event: Registered Node ha-335100-m02 in Controller
	  Normal  RegisteredNode           20m                node-controller  Node ha-335100-m02 event: Registered Node ha-335100-m02 in Controller
	  Normal  NodeNotReady             3m9s               node-controller  Node ha-335100-m02 status is now: NodeNotReady
	
	
	Name:               ha-335100-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-335100-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a597502568cd649748018b4cfeb698a4b8b36160
	                    minikube.k8s.io/name=ha-335100
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_02_10T11_06_59_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 10 Feb 2025 11:06:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-335100-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 10 Feb 2025 11:27:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 10 Feb 2025 11:26:56 +0000   Mon, 10 Feb 2025 11:06:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 10 Feb 2025 11:26:56 +0000   Mon, 10 Feb 2025 11:06:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 10 Feb 2025 11:26:56 +0000   Mon, 10 Feb 2025 11:06:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 10 Feb 2025 11:26:56 +0000   Mon, 10 Feb 2025 11:07:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.29.143.243
	  Hostname:    ha-335100-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 3932c0d931034d08a153f36a4dde5a97
	  System UUID:                9c6bee06-3b2b-5b49-bb4d-c446daaf4d5e
	  Boot ID:                    bbfe75b5-4310-4a0c-8e67-52ceff178ebb
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.4.0
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-58667487b6-vq9s4                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 etcd-ha-335100-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         21m
	  kube-system                 kindnet-lc7hv                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      21m
	  kube-system                 kube-apiserver-ha-335100-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-ha-335100-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-b9g27                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-ha-335100-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-vip-ha-335100-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 21m                kube-proxy       
	  Normal  NodeHasSufficientMemory  21m (x8 over 21m)  kubelet          Node ha-335100-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m (x8 over 21m)  kubelet          Node ha-335100-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m (x7 over 21m)  kubelet          Node ha-335100-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           21m                node-controller  Node ha-335100-m03 event: Registered Node ha-335100-m03 in Controller
	  Normal  RegisteredNode           21m                node-controller  Node ha-335100-m03 event: Registered Node ha-335100-m03 in Controller
	  Normal  RegisteredNode           20m                node-controller  Node ha-335100-m03 event: Registered Node ha-335100-m03 in Controller
	
	
	Name:               ha-335100-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-335100-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a597502568cd649748018b4cfeb698a4b8b36160
	                    minikube.k8s.io/name=ha-335100
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_02_10T11_12_05_0700
	                    minikube.k8s.io/version=v1.35.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 10 Feb 2025 11:12:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-335100-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 10 Feb 2025 11:27:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 10 Feb 2025 11:24:08 +0000   Mon, 10 Feb 2025 11:12:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 10 Feb 2025 11:24:08 +0000   Mon, 10 Feb 2025 11:12:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 10 Feb 2025 11:24:08 +0000   Mon, 10 Feb 2025 11:12:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 10 Feb 2025 11:24:08 +0000   Mon, 10 Feb 2025 11:12:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.29.135.124
	  Hostname:    ha-335100-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 0afad1f5588c4632b675a776a4b91357
	  System UUID:                832d3f6c-d7bf-3f4e-8394-f0cd6007b41e
	  Boot ID:                    ea0a0f99-f9a5-452e-8e68-fed48b099237
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.4.0
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-gf49d       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-proxy-l6jlb    0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 15m                kube-proxy       
	  Normal  NodeHasSufficientMemory  15m (x2 over 15m)  kubelet          Node ha-335100-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x2 over 15m)  kubelet          Node ha-335100-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x2 over 15m)  kubelet          Node ha-335100-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           15m                node-controller  Node ha-335100-m04 event: Registered Node ha-335100-m04 in Controller
	  Normal  RegisteredNode           15m                node-controller  Node ha-335100-m04 event: Registered Node ha-335100-m04 in Controller
	  Normal  RegisteredNode           15m                node-controller  Node ha-335100-m04 event: Registered Node ha-335100-m04 in Controller
	  Normal  NodeReady                15m                kubelet          Node ha-335100-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +7.376728] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Feb10 10:58] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.170548] systemd-fstab-generator[650]: Ignoring "noauto" option for root device
	[Feb10 10:59] systemd-fstab-generator[1006]: Ignoring "noauto" option for root device
	[  +0.106671] kauditd_printk_skb: 69 callbacks suppressed
	[  +0.492197] systemd-fstab-generator[1046]: Ignoring "noauto" option for root device
	[  +0.194850] systemd-fstab-generator[1058]: Ignoring "noauto" option for root device
	[  +0.214117] systemd-fstab-generator[1072]: Ignoring "noauto" option for root device
	[  +2.870895] systemd-fstab-generator[1295]: Ignoring "noauto" option for root device
	[  +0.192284] systemd-fstab-generator[1307]: Ignoring "noauto" option for root device
	[  +0.221120] systemd-fstab-generator[1319]: Ignoring "noauto" option for root device
	[  +0.252867] systemd-fstab-generator[1334]: Ignoring "noauto" option for root device
	[ +10.665269] systemd-fstab-generator[1434]: Ignoring "noauto" option for root device
	[  +0.104168] kauditd_printk_skb: 206 callbacks suppressed
	[  +3.748464] systemd-fstab-generator[1699]: Ignoring "noauto" option for root device
	[  +7.461490] systemd-fstab-generator[1857]: Ignoring "noauto" option for root device
	[  +0.103821] kauditd_printk_skb: 74 callbacks suppressed
	[  +5.762085] kauditd_printk_skb: 67 callbacks suppressed
	[  +2.810542] systemd-fstab-generator[2368]: Ignoring "noauto" option for root device
	[  +7.325683] kauditd_printk_skb: 17 callbacks suppressed
	[Feb10 11:00] kauditd_printk_skb: 29 callbacks suppressed
	[Feb10 11:02] hrtimer: interrupt took 8772094 ns
	[Feb10 11:03] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [22c1f77dda7a] <==
	{"level":"warn","ts":"2025-02-10T11:27:59.751573Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"98d9005d4e04d5ff","from":"98d9005d4e04d5ff","remote-peer-id":"411f09409d6b0b30","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-02-10T11:27:59.815193Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"98d9005d4e04d5ff","from":"98d9005d4e04d5ff","remote-peer-id":"411f09409d6b0b30","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-02-10T11:27:59.825553Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"98d9005d4e04d5ff","from":"98d9005d4e04d5ff","remote-peer-id":"411f09409d6b0b30","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-02-10T11:27:59.830425Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"98d9005d4e04d5ff","from":"98d9005d4e04d5ff","remote-peer-id":"411f09409d6b0b30","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-02-10T11:27:59.839661Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"98d9005d4e04d5ff","from":"98d9005d4e04d5ff","remote-peer-id":"411f09409d6b0b30","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-02-10T11:27:59.850384Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"98d9005d4e04d5ff","from":"98d9005d4e04d5ff","remote-peer-id":"411f09409d6b0b30","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-02-10T11:27:59.851605Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"98d9005d4e04d5ff","from":"98d9005d4e04d5ff","remote-peer-id":"411f09409d6b0b30","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-02-10T11:27:59.858814Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"98d9005d4e04d5ff","from":"98d9005d4e04d5ff","remote-peer-id":"411f09409d6b0b30","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-02-10T11:27:59.863765Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"98d9005d4e04d5ff","from":"98d9005d4e04d5ff","remote-peer-id":"411f09409d6b0b30","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-02-10T11:27:59.867660Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"98d9005d4e04d5ff","from":"98d9005d4e04d5ff","remote-peer-id":"411f09409d6b0b30","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-02-10T11:27:59.873193Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"98d9005d4e04d5ff","from":"98d9005d4e04d5ff","remote-peer-id":"411f09409d6b0b30","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-02-10T11:27:59.880339Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"98d9005d4e04d5ff","from":"98d9005d4e04d5ff","remote-peer-id":"411f09409d6b0b30","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-02-10T11:27:59.882488Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"98d9005d4e04d5ff","from":"98d9005d4e04d5ff","remote-peer-id":"411f09409d6b0b30","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-02-10T11:27:59.889758Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"98d9005d4e04d5ff","from":"98d9005d4e04d5ff","remote-peer-id":"411f09409d6b0b30","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-02-10T11:27:59.894866Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"98d9005d4e04d5ff","from":"98d9005d4e04d5ff","remote-peer-id":"411f09409d6b0b30","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-02-10T11:27:59.902503Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"98d9005d4e04d5ff","from":"98d9005d4e04d5ff","remote-peer-id":"411f09409d6b0b30","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-02-10T11:27:59.910977Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"98d9005d4e04d5ff","from":"98d9005d4e04d5ff","remote-peer-id":"411f09409d6b0b30","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-02-10T11:27:59.919258Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"98d9005d4e04d5ff","from":"98d9005d4e04d5ff","remote-peer-id":"411f09409d6b0b30","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-02-10T11:27:59.927456Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"98d9005d4e04d5ff","from":"98d9005d4e04d5ff","remote-peer-id":"411f09409d6b0b30","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-02-10T11:27:59.933505Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"98d9005d4e04d5ff","from":"98d9005d4e04d5ff","remote-peer-id":"411f09409d6b0b30","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-02-10T11:27:59.938269Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"98d9005d4e04d5ff","from":"98d9005d4e04d5ff","remote-peer-id":"411f09409d6b0b30","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-02-10T11:27:59.943497Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"98d9005d4e04d5ff","from":"98d9005d4e04d5ff","remote-peer-id":"411f09409d6b0b30","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-02-10T11:27:59.951297Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"98d9005d4e04d5ff","from":"98d9005d4e04d5ff","remote-peer-id":"411f09409d6b0b30","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-02-10T11:27:59.951651Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"98d9005d4e04d5ff","from":"98d9005d4e04d5ff","remote-peer-id":"411f09409d6b0b30","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-02-10T11:27:59.959560Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"98d9005d4e04d5ff","from":"98d9005d4e04d5ff","remote-peer-id":"411f09409d6b0b30","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 11:28:00 up 30 min,  0 users,  load average: 0.17, 0.32, 0.35
	Linux ha-335100 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [22d0df1da0c6] <==
	I0210 11:27:21.716442       1 main.go:324] Node ha-335100-m04 has CIDR [10.244.3.0/24] 
	I0210 11:27:31.713843       1 main.go:297] Handling node with IPs: map[172.29.139.212:{}]
	I0210 11:27:31.713945       1 main.go:324] Node ha-335100-m02 has CIDR [10.244.1.0/24] 
	I0210 11:27:31.714212       1 main.go:297] Handling node with IPs: map[172.29.143.243:{}]
	I0210 11:27:31.714340       1 main.go:324] Node ha-335100-m03 has CIDR [10.244.2.0/24] 
	I0210 11:27:31.714686       1 main.go:297] Handling node with IPs: map[172.29.135.124:{}]
	I0210 11:27:31.714772       1 main.go:324] Node ha-335100-m04 has CIDR [10.244.3.0/24] 
	I0210 11:27:31.714910       1 main.go:297] Handling node with IPs: map[172.29.136.99:{}]
	I0210 11:27:31.714936       1 main.go:301] handling current node
	I0210 11:27:41.716524       1 main.go:297] Handling node with IPs: map[172.29.136.99:{}]
	I0210 11:27:41.716636       1 main.go:301] handling current node
	I0210 11:27:41.716657       1 main.go:297] Handling node with IPs: map[172.29.139.212:{}]
	I0210 11:27:41.716664       1 main.go:324] Node ha-335100-m02 has CIDR [10.244.1.0/24] 
	I0210 11:27:41.717140       1 main.go:297] Handling node with IPs: map[172.29.143.243:{}]
	I0210 11:27:41.717169       1 main.go:324] Node ha-335100-m03 has CIDR [10.244.2.0/24] 
	I0210 11:27:41.717345       1 main.go:297] Handling node with IPs: map[172.29.135.124:{}]
	I0210 11:27:41.717370       1 main.go:324] Node ha-335100-m04 has CIDR [10.244.3.0/24] 
	I0210 11:27:51.713697       1 main.go:297] Handling node with IPs: map[172.29.143.243:{}]
	I0210 11:27:51.713806       1 main.go:324] Node ha-335100-m03 has CIDR [10.244.2.0/24] 
	I0210 11:27:51.714488       1 main.go:297] Handling node with IPs: map[172.29.135.124:{}]
	I0210 11:27:51.714507       1 main.go:324] Node ha-335100-m04 has CIDR [10.244.3.0/24] 
	I0210 11:27:51.714630       1 main.go:297] Handling node with IPs: map[172.29.136.99:{}]
	I0210 11:27:51.714654       1 main.go:301] handling current node
	I0210 11:27:51.714668       1 main.go:297] Handling node with IPs: map[172.29.139.212:{}]
	I0210 11:27:51.714672       1 main.go:324] Node ha-335100-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [256becfc6233] <==
	E0210 11:06:53.245670       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 8.601µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E0210 11:06:53.247107       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E0210 11:06:53.248311       1 writers.go:136] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E0210 11:06:53.250712       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="5.296108ms" method="PATCH" path="/api/v1/namespaces/default/events/ha-335100-m03.1822d4198e00ff44" result=null
	E0210 11:08:11.617837       1 conn.go:339] Error on socket receive: read tcp 172.29.143.254:8443->172.29.128.1:56657: use of closed network connection
	E0210 11:08:12.157552       1 conn.go:339] Error on socket receive: read tcp 172.29.143.254:8443->172.29.128.1:56659: use of closed network connection
	E0210 11:08:13.832598       1 conn.go:339] Error on socket receive: read tcp 172.29.143.254:8443->172.29.128.1:56661: use of closed network connection
	E0210 11:08:14.825804       1 conn.go:339] Error on socket receive: read tcp 172.29.143.254:8443->172.29.128.1:56663: use of closed network connection
	E0210 11:08:15.314083       1 conn.go:339] Error on socket receive: read tcp 172.29.143.254:8443->172.29.128.1:56666: use of closed network connection
	E0210 11:08:15.921583       1 conn.go:339] Error on socket receive: read tcp 172.29.143.254:8443->172.29.128.1:56668: use of closed network connection
	E0210 11:08:16.415611       1 conn.go:339] Error on socket receive: read tcp 172.29.143.254:8443->172.29.128.1:56670: use of closed network connection
	E0210 11:08:16.890132       1 conn.go:339] Error on socket receive: read tcp 172.29.143.254:8443->172.29.128.1:56672: use of closed network connection
	E0210 11:08:17.347465       1 conn.go:339] Error on socket receive: read tcp 172.29.143.254:8443->172.29.128.1:56674: use of closed network connection
	E0210 11:08:18.182983       1 conn.go:339] Error on socket receive: read tcp 172.29.143.254:8443->172.29.128.1:56677: use of closed network connection
	E0210 11:08:28.641180       1 conn.go:339] Error on socket receive: read tcp 172.29.143.254:8443->172.29.128.1:56679: use of closed network connection
	E0210 11:08:29.104109       1 conn.go:339] Error on socket receive: read tcp 172.29.143.254:8443->172.29.128.1:56681: use of closed network connection
	E0210 11:08:39.574461       1 conn.go:339] Error on socket receive: read tcp 172.29.143.254:8443->172.29.128.1:56683: use of closed network connection
	E0210 11:08:40.038689       1 conn.go:339] Error on socket receive: read tcp 172.29.143.254:8443->172.29.128.1:56686: use of closed network connection
	E0210 11:08:50.510550       1 conn.go:339] Error on socket receive: read tcp 172.29.143.254:8443->172.29.128.1:56688: use of closed network connection
	E0210 11:21:39.791890       1 writers.go:123] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E0210 11:21:39.792048       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 6.3µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E0210 11:21:39.793476       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E0210 11:21:39.794966       1 writers.go:136] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E0210 11:21:39.796454       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="4.735453ms" method="PUT" path="/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/plndr-cp-lock" result=null
	W0210 11:24:14.420525       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.29.136.99 172.29.143.243]
	
	
	==> kube-controller-manager [5228f69640c2] <==
	I0210 11:12:15.076547       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-335100-m04"
	I0210 11:12:34.009760       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-335100-m04"
	I0210 11:12:34.013795       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-335100-m04"
	I0210 11:12:34.031289       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-335100-m04"
	I0210 11:12:34.110932       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-335100-m04"
	I0210 11:12:35.419044       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-335100-m04"
	I0210 11:12:42.963033       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-335100"
	I0210 11:13:12.491210       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-335100-m02"
	I0210 11:16:44.696457       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-335100-m03"
	I0210 11:17:49.289761       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-335100"
	I0210 11:18:18.151154       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-335100-m02"
	I0210 11:19:02.303839       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-335100-m04"
	I0210 11:21:50.789843       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-335100-m03"
	I0210 11:22:53.685135       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-335100"
	I0210 11:23:25.367923       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-335100-m02"
	I0210 11:24:08.726882       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-335100-m04"
	I0210 11:24:50.148750       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-335100-m02"
	I0210 11:24:50.150122       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-335100-m04"
	I0210 11:24:50.197363       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-335100-m02"
	I0210 11:24:50.391982       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="86.431185ms"
	I0210 11:24:50.392081       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="41.501µs"
	I0210 11:24:54.354631       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-335100-m02"
	I0210 11:24:55.512466       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-335100-m02"
	I0210 11:26:56.704636       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-335100-m03"
	I0210 11:27:59.658087       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-335100"
	
	
	==> kube-proxy [f1c556132095] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0210 10:59:52.961920       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0210 10:59:52.974540       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["172.29.136.99"]
	E0210 10:59:52.974683       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0210 10:59:53.036518       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0210 10:59:53.036649       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0210 10:59:53.036679       1 server_linux.go:170] "Using iptables Proxier"
	I0210 10:59:53.040803       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0210 10:59:53.041779       1 server.go:497] "Version info" version="v1.32.1"
	I0210 10:59:53.041810       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0210 10:59:53.043672       1 config.go:199] "Starting service config controller"
	I0210 10:59:53.043823       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0210 10:59:53.043872       1 config.go:105] "Starting endpoint slice config controller"
	I0210 10:59:53.043878       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0210 10:59:53.044673       1 config.go:329] "Starting node config controller"
	I0210 10:59:53.044706       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0210 10:59:53.145005       1 shared_informer.go:320] Caches are synced for node config
	I0210 10:59:53.145058       1 shared_informer.go:320] Caches are synced for service config
	I0210 10:59:53.145075       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [25b39e8ce1a4] <==
	W0210 10:59:43.166439       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	E0210 10:59:43.166532       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0210 10:59:43.185428       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0210 10:59:43.185484       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0210 10:59:43.306774       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0210 10:59:43.306820       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0210 10:59:43.395654       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0210 10:59:43.396103       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0210 10:59:43.425536       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0210 10:59:43.426275       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0210 10:59:43.464601       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0210 10:59:43.464900       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0210 10:59:43.546333       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0210 10:59:43.546379       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0210 10:59:43.571428       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0210 10:59:43.571487       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0210 10:59:43.633107       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0210 10:59:43.633211       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0210 10:59:43.661207       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0210 10:59:43.661369       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0210 10:59:45.218303       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0210 11:12:05.101338       1 framework.go:1316] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-wcctx\": pod kindnet-wcctx is already assigned to node \"ha-335100-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-wcctx" node="ha-335100-m04"
	E0210 11:12:05.108346       1 schedule_one.go:359] "scheduler cache ForgetPod failed" err="pod 3e5b3d5a-fe7f-4f9a-96d5-c1cff07cf028(kube-system/kindnet-wcctx) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-wcctx"
	E0210 11:12:05.108547       1 schedule_one.go:1058] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-wcctx\": pod kindnet-wcctx is already assigned to node \"ha-335100-m04\"" pod="kube-system/kindnet-wcctx"
	I0210 11:12:05.108665       1 schedule_one.go:1071] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-wcctx" node="ha-335100-m04"
	
	
	==> kubelet <==
	Feb 10 11:23:45 ha-335100 kubelet[2375]: E0210 11:23:45.621387    2375 iptables.go:577] "Could not set up iptables canary" err=<
	Feb 10 11:23:45 ha-335100 kubelet[2375]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Feb 10 11:23:45 ha-335100 kubelet[2375]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 10 11:23:45 ha-335100 kubelet[2375]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 10 11:23:45 ha-335100 kubelet[2375]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 10 11:24:45 ha-335100 kubelet[2375]: E0210 11:24:45.622069    2375 iptables.go:577] "Could not set up iptables canary" err=<
	Feb 10 11:24:45 ha-335100 kubelet[2375]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Feb 10 11:24:45 ha-335100 kubelet[2375]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 10 11:24:45 ha-335100 kubelet[2375]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 10 11:24:45 ha-335100 kubelet[2375]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 10 11:25:45 ha-335100 kubelet[2375]: E0210 11:25:45.620640    2375 iptables.go:577] "Could not set up iptables canary" err=<
	Feb 10 11:25:45 ha-335100 kubelet[2375]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Feb 10 11:25:45 ha-335100 kubelet[2375]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 10 11:25:45 ha-335100 kubelet[2375]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 10 11:25:45 ha-335100 kubelet[2375]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 10 11:26:45 ha-335100 kubelet[2375]: E0210 11:26:45.620993    2375 iptables.go:577] "Could not set up iptables canary" err=<
	Feb 10 11:26:45 ha-335100 kubelet[2375]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Feb 10 11:26:45 ha-335100 kubelet[2375]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 10 11:26:45 ha-335100 kubelet[2375]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 10 11:26:45 ha-335100 kubelet[2375]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 10 11:27:45 ha-335100 kubelet[2375]: E0210 11:27:45.620883    2375 iptables.go:577] "Could not set up iptables canary" err=<
	Feb 10 11:27:45 ha-335100 kubelet[2375]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Feb 10 11:27:45 ha-335100 kubelet[2375]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 10 11:27:45 ha-335100 kubelet[2375]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 10 11:27:45 ha-335100 kubelet[2375]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-335100 -n ha-335100
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-335100 -n ha-335100: (11.233558s)
helpers_test.go:261: (dbg) Run:  kubectl --context ha-335100 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (153.05s)
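
The post-mortem above mixes three recurring log patterns in with the actual failure, and it helps to separate them. The kube-scheduler "forbidden" reflector errors are startup noise: the scheduler began watching resources before the bootstrap RBAC roles were reconciled, and the subsequent "Caches are synced" line shows it recovered. The DefaultBinder "already assigned" error for kindnet-wcctx is the restarted scheduler racing an earlier binding of the same pod, which it then discards ("Abort adding it back to queue."). The kubelet iptables canary errors indicate the guest kernel cannot initialize an ip6tables nat table. If triaging by hand, one way to confirm each of these (illustrative commands, assuming the ha-335100 profile is still up):

	kubectl --context ha-335100 auth can-i list nodes --as=system:kube-scheduler
	kubectl --context ha-335100 -n kube-system get pod kindnet-wcctx -o jsonpath='{.spec.nodeName}'
	minikube -p ha-335100 ssh -- "sudo modprobe ip6table_nat; sudo ip6tables -t nat -L -n"

None of these patterns is specific to this run, so the RestartSecondaryNode failure most likely lies elsewhere in the post-mortem.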

TestMultiNode/serial/PingHostFrom2Pods (55.43s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-032400 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-032400 -- exec busybox-58667487b6-4g8jw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-032400 -- exec busybox-58667487b6-4g8jw -- sh -c "ping -c 1 172.29.128.1"
multinode_test.go:583: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-032400 -- exec busybox-58667487b6-4g8jw -- sh -c "ping -c 1 172.29.128.1": exit status 1 (10.4717603s)

-- stdout --
	PING 172.29.128.1 (172.29.128.1): 56 data bytes
	
	--- 172.29.128.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
multinode_test.go:584: Failed to ping host (172.29.128.1) from pod (busybox-58667487b6-4g8jw): exit status 1
multinode_test.go:572: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-032400 -- exec busybox-58667487b6-8shfg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-032400 -- exec busybox-58667487b6-8shfg -- sh -c "ping -c 1 172.29.128.1"
multinode_test.go:583: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-032400 -- exec busybox-58667487b6-8shfg -- sh -c "ping -c 1 172.29.128.1": exit status 1 (10.4456538s)

-- stdout --
	PING 172.29.128.1 (172.29.128.1): 56 data bytes
	
	--- 172.29.128.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
multinode_test.go:584: Failed to ping host (172.29.128.1) from pod (busybox-58667487b6-8shfg): exit status 1
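
Both busybox pods resolve host.minikube.internal but lose 100% of packets pinging 172.29.128.1. On Hyper-V's Default Switch that address belongs to the Windows host itself, and a frequent cause of this symptom is the host firewall silently dropping inbound ICMP echo requests from the NAT subnet rather than anything broken inside the cluster. A sketch of how to test that theory from an elevated PowerShell prompt (the rule name is illustrative, not taken from this run):

	New-NetFirewallRule -DisplayName "Allow ICMPv4 echo from minikube" -Protocol ICMPv4 -IcmpType 8 -Direction Inbound -Action Allow

If the ping succeeds once the rule is in place, the failure is environmental rather than a minikube networking regression.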
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-032400 -n multinode-032400
E0210 12:03:55.675223   11764 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-550800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-032400 -n multinode-032400: (11.1954492s)
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-032400 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-032400 logs -n 25: (8.3173852s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| ssh     | mount-start-2-183500 ssh -- ls                    | mount-start-2-183500 | minikube5\jenkins | v1.35.0 | 10 Feb 25 11:52 UTC | 10 Feb 25 11:52 UTC |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| delete  | -p mount-start-1-183500                           | mount-start-1-183500 | minikube5\jenkins | v1.35.0 | 10 Feb 25 11:52 UTC | 10 Feb 25 11:53 UTC |
	|         | --alsologtostderr -v=5                            |                      |                   |         |                     |                     |
	| ssh     | mount-start-2-183500 ssh -- ls                    | mount-start-2-183500 | minikube5\jenkins | v1.35.0 | 10 Feb 25 11:53 UTC | 10 Feb 25 11:53 UTC |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| stop    | -p mount-start-2-183500                           | mount-start-2-183500 | minikube5\jenkins | v1.35.0 | 10 Feb 25 11:53 UTC | 10 Feb 25 11:53 UTC |
	| start   | -p mount-start-2-183500                           | mount-start-2-183500 | minikube5\jenkins | v1.35.0 | 10 Feb 25 11:53 UTC | 10 Feb 25 11:55 UTC |
	| mount   | C:\Users\jenkins.minikube5:/minikube-host         | mount-start-2-183500 | minikube5\jenkins | v1.35.0 | 10 Feb 25 11:55 UTC |                     |
	|         | --profile mount-start-2-183500 --v 0              |                      |                   |         |                     |                     |
	|         | --9p-version 9p2000.L --gid 0 --ip                |                      |                   |         |                     |                     |
	|         | --msize 6543 --port 46465 --type 9p --uid         |                      |                   |         |                     |                     |
	|         |                                                 0 |                      |                   |         |                     |                     |
	| ssh     | mount-start-2-183500 ssh -- ls                    | mount-start-2-183500 | minikube5\jenkins | v1.35.0 | 10 Feb 25 11:55 UTC | 10 Feb 25 11:55 UTC |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| delete  | -p mount-start-2-183500                           | mount-start-2-183500 | minikube5\jenkins | v1.35.0 | 10 Feb 25 11:55 UTC | 10 Feb 25 11:56 UTC |
	| delete  | -p mount-start-1-183500                           | mount-start-1-183500 | minikube5\jenkins | v1.35.0 | 10 Feb 25 11:56 UTC | 10 Feb 25 11:56 UTC |
	| start   | -p multinode-032400                               | multinode-032400     | minikube5\jenkins | v1.35.0 | 10 Feb 25 11:56 UTC | 10 Feb 25 12:03 UTC |
	|         | --wait=true --memory=2200                         |                      |                   |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |                   |         |                     |                     |
	|         | --alsologtostderr                                 |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                   |                      |                   |         |                     |                     |
	| kubectl | -p multinode-032400 -- apply -f                   | multinode-032400     | minikube5\jenkins | v1.35.0 | 10 Feb 25 12:03 UTC | 10 Feb 25 12:03 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |                   |         |                     |                     |
	| kubectl | -p multinode-032400 -- rollout                    | multinode-032400     | minikube5\jenkins | v1.35.0 | 10 Feb 25 12:03 UTC | 10 Feb 25 12:03 UTC |
	|         | status deployment/busybox                         |                      |                   |         |                     |                     |
	| kubectl | -p multinode-032400 -- get pods -o                | multinode-032400     | minikube5\jenkins | v1.35.0 | 10 Feb 25 12:03 UTC | 10 Feb 25 12:03 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-032400 -- get pods -o                | multinode-032400     | minikube5\jenkins | v1.35.0 | 10 Feb 25 12:03 UTC | 10 Feb 25 12:03 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-032400 -- exec                       | multinode-032400     | minikube5\jenkins | v1.35.0 | 10 Feb 25 12:03 UTC | 10 Feb 25 12:03 UTC |
	|         | busybox-58667487b6-4g8jw --                       |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |         |                     |                     |
	| kubectl | -p multinode-032400 -- exec                       | multinode-032400     | minikube5\jenkins | v1.35.0 | 10 Feb 25 12:03 UTC | 10 Feb 25 12:03 UTC |
	|         | busybox-58667487b6-8shfg --                       |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |         |                     |                     |
	| kubectl | -p multinode-032400 -- exec                       | multinode-032400     | minikube5\jenkins | v1.35.0 | 10 Feb 25 12:03 UTC | 10 Feb 25 12:03 UTC |
	|         | busybox-58667487b6-4g8jw --                       |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |         |                     |                     |
	| kubectl | -p multinode-032400 -- exec                       | multinode-032400     | minikube5\jenkins | v1.35.0 | 10 Feb 25 12:03 UTC | 10 Feb 25 12:03 UTC |
	|         | busybox-58667487b6-8shfg --                       |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |         |                     |                     |
	| kubectl | -p multinode-032400 -- exec                       | multinode-032400     | minikube5\jenkins | v1.35.0 | 10 Feb 25 12:03 UTC | 10 Feb 25 12:03 UTC |
	|         | busybox-58667487b6-4g8jw -- nslookup              |                      |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-032400 -- exec                       | multinode-032400     | minikube5\jenkins | v1.35.0 | 10 Feb 25 12:03 UTC | 10 Feb 25 12:03 UTC |
	|         | busybox-58667487b6-8shfg -- nslookup              |                      |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-032400 -- get pods -o                | multinode-032400     | minikube5\jenkins | v1.35.0 | 10 Feb 25 12:03 UTC | 10 Feb 25 12:03 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-032400 -- exec                       | multinode-032400     | minikube5\jenkins | v1.35.0 | 10 Feb 25 12:03 UTC | 10 Feb 25 12:03 UTC |
	|         | busybox-58667487b6-4g8jw                          |                      |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |                   |         |                     |                     |
	| kubectl | -p multinode-032400 -- exec                       | multinode-032400     | minikube5\jenkins | v1.35.0 | 10 Feb 25 12:03 UTC |                     |
	|         | busybox-58667487b6-4g8jw -- sh                    |                      |                   |         |                     |                     |
	|         | -c ping -c 1 172.29.128.1                         |                      |                   |         |                     |                     |
	| kubectl | -p multinode-032400 -- exec                       | multinode-032400     | minikube5\jenkins | v1.35.0 | 10 Feb 25 12:03 UTC | 10 Feb 25 12:03 UTC |
	|         | busybox-58667487b6-8shfg                          |                      |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |                   |         |                     |                     |
	| kubectl | -p multinode-032400 -- exec                       | multinode-032400     | minikube5\jenkins | v1.35.0 | 10 Feb 25 12:03 UTC |                     |
	|         | busybox-58667487b6-8shfg -- sh                    |                      |                   |         |                     |                     |
	|         | -c ping -c 1 172.29.128.1                         |                      |                   |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/10 11:56:03
	Running on machine: minikube5
	Binary: Built with gc go1.23.4 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0210 11:56:03.267028   11096 out.go:345] Setting OutFile to fd 1156 ...
	I0210 11:56:03.319115   11096 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 11:56:03.319115   11096 out.go:358] Setting ErrFile to fd 1248...
	I0210 11:56:03.319115   11096 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 11:56:03.337772   11096 out.go:352] Setting JSON to false
	I0210 11:56:03.340736   11096 start.go:129] hostinfo: {"hostname":"minikube5","uptime":189902,"bootTime":1738998660,"procs":185,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.5440 Build 19045.5440","kernelVersion":"10.0.19045.5440 Build 19045.5440","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0210 11:56:03.341324   11096 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0210 11:56:03.346009   11096 out.go:177] * [multinode-032400] minikube v1.35.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5440 Build 19045.5440
	I0210 11:56:03.351589   11096 notify.go:220] Checking for updates...
	I0210 11:56:03.354286   11096 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0210 11:56:03.356673   11096 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0210 11:56:03.359510   11096 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0210 11:56:03.363068   11096 out.go:177]   - MINIKUBE_LOCATION=20385
	I0210 11:56:03.365371   11096 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0210 11:56:03.368607   11096 config.go:182] Loaded profile config "ha-335100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0210 11:56:03.368607   11096 driver.go:394] Setting default libvirt URI to qemu:///system
	I0210 11:56:08.325284   11096 out.go:177] * Using the hyperv driver based on user configuration
	I0210 11:56:08.327741   11096 start.go:297] selected driver: hyperv
	I0210 11:56:08.327741   11096 start.go:901] validating driver "hyperv" against <nil>
	I0210 11:56:08.327835   11096 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0210 11:56:08.369087   11096 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0210 11:56:08.370317   11096 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0210 11:56:08.370317   11096 cni.go:84] Creating CNI manager for ""
	I0210 11:56:08.370317   11096 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0210 11:56:08.370317   11096 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0210 11:56:08.370317   11096 start.go:340] cluster config:
	{Name:multinode-032400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:multinode-032400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 11:56:08.370845   11096 iso.go:125] acquiring lock: {Name:mkae681ee414e9275e9685c6bbf5080b17ead976 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0210 11:56:08.375949   11096 out.go:177] * Starting "multinode-032400" primary control-plane node in "multinode-032400" cluster
	I0210 11:56:08.379842   11096 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime docker
	I0210 11:56:08.380035   11096 preload.go:146] Found local preload: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4
	I0210 11:56:08.380065   11096 cache.go:56] Caching tarball of preloaded images
	I0210 11:56:08.380472   11096 preload.go:172] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0210 11:56:08.380602   11096 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on docker
	I0210 11:56:08.380819   11096 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-032400\config.json ...
	I0210 11:56:08.381045   11096 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-032400\config.json: {Name:mk28b1415a5eddb1a6ec74ca32fe8422ac983e55 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 11:56:08.381678   11096 start.go:360] acquireMachinesLock for multinode-032400: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0210 11:56:08.381678   11096 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-032400"
	I0210 11:56:08.381678   11096 start.go:93] Provisioning new machine with config: &{Name:multinode-032400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:multinode-032400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0210 11:56:08.381678   11096 start.go:125] createHost starting for "" (driver="hyperv")
	I0210 11:56:08.386119   11096 out.go:235] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0210 11:56:08.386937   11096 start.go:159] libmachine.API.Create for "multinode-032400" (driver="hyperv")
	I0210 11:56:08.386998   11096 client.go:168] LocalClient.Create starting
	I0210 11:56:08.387459   11096 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem
	I0210 11:56:08.387670   11096 main.go:141] libmachine: Decoding PEM data...
	I0210 11:56:08.387729   11096 main.go:141] libmachine: Parsing certificate...
	I0210 11:56:08.387777   11096 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem
	I0210 11:56:08.388057   11096 main.go:141] libmachine: Decoding PEM data...
	I0210 11:56:08.388107   11096 main.go:141] libmachine: Parsing certificate...
	I0210 11:56:08.388207   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0210 11:56:10.364942   11096 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0210 11:56:10.364942   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:56:10.365847   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0210 11:56:11.959006   11096 main.go:141] libmachine: [stdout =====>] : False
	
	I0210 11:56:11.959006   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:56:11.959088   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0210 11:56:13.387003   11096 main.go:141] libmachine: [stdout =====>] : True
	
	I0210 11:56:13.387101   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:56:13.387101   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0210 11:56:16.703174   11096 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0210 11:56:16.703174   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:56:16.704625   11096 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube5/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0210 11:56:17.112654   11096 main.go:141] libmachine: Creating SSH key...
	I0210 11:56:17.199098   11096 main.go:141] libmachine: Creating VM...
	I0210 11:56:17.199098   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0210 11:56:19.775391   11096 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0210 11:56:19.776327   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:56:19.776327   11096 main.go:141] libmachine: Using switch "Default Switch"
	I0210 11:56:19.776327   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0210 11:56:21.382412   11096 main.go:141] libmachine: [stdout =====>] : True
	
	I0210 11:56:21.382412   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:56:21.382412   11096 main.go:141] libmachine: Creating VHD
	I0210 11:56:21.383034   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-032400\fixed.vhd' -SizeBytes 10MB -Fixed
	I0210 11:56:24.916508   11096 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube5
	Path                    : C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-032400\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 0ABC0A23-727D-41CE-9EAA-092429A4B8CC
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0210 11:56:24.917412   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:56:24.917412   11096 main.go:141] libmachine: Writing magic tar header
	I0210 11:56:24.917522   11096 main.go:141] libmachine: Writing SSH key tar header
	I0210 11:56:24.930266   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-032400\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-032400\disk.vhd' -VHDType Dynamic -DeleteSource
	I0210 11:56:27.830156   11096 main.go:141] libmachine: [stdout =====>] : 
	I0210 11:56:27.830156   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:56:27.830676   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-032400\disk.vhd' -SizeBytes 20000MB
	I0210 11:56:30.158494   11096 main.go:141] libmachine: [stdout =====>] : 
	I0210 11:56:30.158995   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:56:30.159069   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-032400 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-032400' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0210 11:56:33.368830   11096 main.go:141] libmachine: [stdout =====>] : 
	Name             State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----             ----- ----------- ----------------- ------   ------             -------
	multinode-032400 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0210 11:56:33.368830   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:56:33.369187   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-032400 -DynamicMemoryEnabled $false
	I0210 11:56:35.361835   11096 main.go:141] libmachine: [stdout =====>] : 
	I0210 11:56:35.361877   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:56:35.362032   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-032400 -Count 2
	I0210 11:56:37.278233   11096 main.go:141] libmachine: [stdout =====>] : 
	I0210 11:56:37.278574   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:56:37.278721   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-032400 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-032400\boot2docker.iso'
	I0210 11:56:39.534312   11096 main.go:141] libmachine: [stdout =====>] : 
	I0210 11:56:39.534312   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:56:39.534312   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-032400 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-032400\disk.vhd'
	I0210 11:56:41.877657   11096 main.go:141] libmachine: [stdout =====>] : 
	I0210 11:56:41.877657   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:56:41.877657   11096 main.go:141] libmachine: Starting VM...
	I0210 11:56:41.877657   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-032400
	I0210 11:56:44.674142   11096 main.go:141] libmachine: [stdout =====>] : 
	I0210 11:56:44.674142   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:56:44.674142   11096 main.go:141] libmachine: Waiting for host to start...
	I0210 11:56:44.675149   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400 ).state
	I0210 11:56:46.653212   11096 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:56:46.654149   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:56:46.654247   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400 ).networkadapters[0]).ipaddresses[0]
	I0210 11:56:48.897093   11096 main.go:141] libmachine: [stdout =====>] : 
	I0210 11:56:48.897093   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:56:49.898132   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400 ).state
	I0210 11:56:51.860041   11096 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:56:51.860118   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:56:51.860190   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400 ).networkadapters[0]).ipaddresses[0]
	I0210 11:56:54.200420   11096 main.go:141] libmachine: [stdout =====>] : 
	I0210 11:56:54.201048   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:56:55.201768   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400 ).state
	I0210 11:56:57.123994   11096 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:56:57.124029   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:56:57.124169   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400 ).networkadapters[0]).ipaddresses[0]
	I0210 11:56:59.338310   11096 main.go:141] libmachine: [stdout =====>] : 
	I0210 11:56:59.338310   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:57:00.339170   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400 ).state
	I0210 11:57:02.302427   11096 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:57:02.302792   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:57:02.302792   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400 ).networkadapters[0]).ipaddresses[0]
	I0210 11:57:04.531488   11096 main.go:141] libmachine: [stdout =====>] : 
	I0210 11:57:04.532487   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:57:05.532945   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400 ).state
	I0210 11:57:07.473077   11096 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:57:07.473077   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:57:07.473660   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400 ).networkadapters[0]).ipaddresses[0]
	I0210 11:57:09.868023   11096 main.go:141] libmachine: [stdout =====>] : 172.29.136.201
	
	I0210 11:57:09.868023   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:57:09.868697   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400 ).state
	I0210 11:57:11.800693   11096 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:57:11.801263   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:57:11.801263   11096 machine.go:93] provisionDockerMachine start ...
	I0210 11:57:11.801369   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400 ).state
	I0210 11:57:13.740217   11096 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:57:13.740217   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:57:13.740217   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400 ).networkadapters[0]).ipaddresses[0]
	I0210 11:57:16.058295   11096 main.go:141] libmachine: [stdout =====>] : 172.29.136.201
	
	I0210 11:57:16.058295   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:57:16.063102   11096 main.go:141] libmachine: Using SSH client type: native
	I0210 11:57:16.079625   11096 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xde6ba0] 0xde96e0 <nil>  [] 0s} 172.29.136.201 22 <nil> <nil>}
	I0210 11:57:16.079691   11096 main.go:141] libmachine: About to run SSH command:
	hostname
	I0210 11:57:16.203291   11096 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0210 11:57:16.203433   11096 buildroot.go:166] provisioning hostname "multinode-032400"
	I0210 11:57:16.203534   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400 ).state
	I0210 11:57:18.105736   11096 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:57:18.105736   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:57:18.105736   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400 ).networkadapters[0]).ipaddresses[0]
	I0210 11:57:20.390047   11096 main.go:141] libmachine: [stdout =====>] : 172.29.136.201
	
	I0210 11:57:20.390047   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:57:20.394645   11096 main.go:141] libmachine: Using SSH client type: native
	I0210 11:57:20.394645   11096 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xde6ba0] 0xde96e0 <nil>  [] 0s} 172.29.136.201 22 <nil> <nil>}
	I0210 11:57:20.395166   11096 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-032400 && echo "multinode-032400" | sudo tee /etc/hostname
	I0210 11:57:20.550678   11096 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-032400
	
	I0210 11:57:20.550678   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400 ).state
	I0210 11:57:22.456083   11096 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:57:22.456083   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:57:22.456083   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400 ).networkadapters[0]).ipaddresses[0]
	I0210 11:57:24.778606   11096 main.go:141] libmachine: [stdout =====>] : 172.29.136.201
	
	I0210 11:57:24.778606   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:57:24.782733   11096 main.go:141] libmachine: Using SSH client type: native
	I0210 11:57:24.783261   11096 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xde6ba0] 0xde96e0 <nil>  [] 0s} 172.29.136.201 22 <nil> <nil>}
	I0210 11:57:24.783261   11096 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-032400' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-032400/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-032400' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0210 11:57:24.928047   11096 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0210 11:57:24.928149   11096 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0210 11:57:24.928310   11096 buildroot.go:174] setting up certificates
	I0210 11:57:24.928310   11096 provision.go:84] configureAuth start
	I0210 11:57:24.928310   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400 ).state
	I0210 11:57:26.887469   11096 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:57:26.887469   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:57:26.887469   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400 ).networkadapters[0]).ipaddresses[0]
	I0210 11:57:29.210820   11096 main.go:141] libmachine: [stdout =====>] : 172.29.136.201
	
	I0210 11:57:29.211414   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:57:29.211626   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400 ).state
	I0210 11:57:31.131183   11096 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:57:31.132191   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:57:31.132688   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400 ).networkadapters[0]).ipaddresses[0]
	I0210 11:57:33.395744   11096 main.go:141] libmachine: [stdout =====>] : 172.29.136.201
	
	I0210 11:57:33.395887   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:57:33.395887   11096 provision.go:143] copyHostCerts
	I0210 11:57:33.395887   11096 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0210 11:57:33.395887   11096 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0210 11:57:33.395887   11096 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0210 11:57:33.396476   11096 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0210 11:57:33.397091   11096 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0210 11:57:33.397091   11096 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0210 11:57:33.397091   11096 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0210 11:57:33.397690   11096 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0210 11:57:33.398407   11096 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0210 11:57:33.398407   11096 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0210 11:57:33.398407   11096 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0210 11:57:33.398927   11096 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1675 bytes)
	I0210 11:57:33.399681   11096 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-032400 san=[127.0.0.1 172.29.136.201 localhost minikube multinode-032400]
	I0210 11:57:33.500310   11096 provision.go:177] copyRemoteCerts
	I0210 11:57:33.507996   11096 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0210 11:57:33.507996   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400 ).state
	I0210 11:57:35.420178   11096 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:57:35.420178   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:57:35.420178   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400 ).networkadapters[0]).ipaddresses[0]
	I0210 11:57:37.677982   11096 main.go:141] libmachine: [stdout =====>] : 172.29.136.201
	
	I0210 11:57:37.678990   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:57:37.679199   11096 sshutil.go:53] new ssh client: &{IP:172.29.136.201 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-032400\id_rsa Username:docker}
	I0210 11:57:37.779182   11096 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.2710826s)
	I0210 11:57:37.779262   11096 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0210 11:57:37.779365   11096 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0210 11:57:37.823310   11096 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0210 11:57:37.823706   11096 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0210 11:57:37.863844   11096 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0210 11:57:37.863844   11096 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0210 11:57:37.908280   11096 provision.go:87] duration metric: took 12.9798262s to configureAuth
	I0210 11:57:37.908417   11096 buildroot.go:189] setting minikube options for container-runtime
	I0210 11:57:37.909020   11096 config.go:182] Loaded profile config "multinode-032400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0210 11:57:37.909138   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400 ).state
	I0210 11:57:39.790753   11096 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:57:39.790753   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:57:39.790831   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400 ).networkadapters[0]).ipaddresses[0]
	I0210 11:57:42.070947   11096 main.go:141] libmachine: [stdout =====>] : 172.29.136.201
	
	I0210 11:57:42.071198   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:57:42.075485   11096 main.go:141] libmachine: Using SSH client type: native
	I0210 11:57:42.075609   11096 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xde6ba0] 0xde96e0 <nil>  [] 0s} 172.29.136.201 22 <nil> <nil>}
	I0210 11:57:42.075609   11096 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0210 11:57:42.209398   11096 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0210 11:57:42.209398   11096 buildroot.go:70] root file system type: tmpfs
	I0210 11:57:42.209933   11096 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0210 11:57:42.209933   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400 ).state
	I0210 11:57:44.175738   11096 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:57:44.175738   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:57:44.175738   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400 ).networkadapters[0]).ipaddresses[0]
	I0210 11:57:46.496501   11096 main.go:141] libmachine: [stdout =====>] : 172.29.136.201
	
	I0210 11:57:46.496501   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:57:46.501241   11096 main.go:141] libmachine: Using SSH client type: native
	I0210 11:57:46.501241   11096 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xde6ba0] 0xde96e0 <nil>  [] 0s} 172.29.136.201 22 <nil> <nil>}
	I0210 11:57:46.501241   11096 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0210 11:57:46.651982   11096 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0210 11:57:46.652194   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400 ).state
	I0210 11:57:48.576142   11096 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:57:48.576142   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:57:48.576757   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400 ).networkadapters[0]).ipaddresses[0]
	I0210 11:57:50.938140   11096 main.go:141] libmachine: [stdout =====>] : 172.29.136.201
	
	I0210 11:57:50.938140   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:57:50.942981   11096 main.go:141] libmachine: Using SSH client type: native
	I0210 11:57:50.943761   11096 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xde6ba0] 0xde96e0 <nil>  [] 0s} 172.29.136.201 22 <nil> <nil>}
	I0210 11:57:50.943848   11096 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0210 11:57:53.143794   11096 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0210 11:57:53.143794   11096 machine.go:96] duration metric: took 41.3420737s to provisionDockerMachine
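The docker.service unit above is installed with a diff-or-swap one-liner: diff the freshly written docker.service.new against the live unit and, on any difference (including a missing original, as the "can't stat" output shows), move it into place and reload, enable, and restart docker. A sketch that rebuilds that exact command string:

package main

import "fmt"

// swapUnitCmd reproduces the install-if-changed shell pattern from the
// log: diff the candidate unit against the live one and, when they
// differ or the live unit is absent, move the new file into place and
// restart docker under the fresh configuration.
func swapUnitCmd(unit string) string {
	return fmt.Sprintf(
		"sudo diff -u %[1]s %[1]s.new || { sudo mv %[1]s.new %[1]s; "+
			"sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && "+
			"sudo systemctl -f restart docker; }",
		unit)
}

func main() {
	fmt.Println(swapUnitCmd("/lib/systemd/system/docker.service"))
}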
	I0210 11:57:53.143859   11096 client.go:171] duration metric: took 1m44.7556999s to LocalClient.Create
	I0210 11:57:53.143859   11096 start.go:167] duration metric: took 1m44.755761s to libmachine.API.Create "multinode-032400"
	I0210 11:57:53.143922   11096 start.go:293] postStartSetup for "multinode-032400" (driver="hyperv")
	I0210 11:57:53.143922   11096 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0210 11:57:53.152026   11096 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0210 11:57:53.152026   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400 ).state
	I0210 11:57:55.101260   11096 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:57:55.101332   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:57:55.101417   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400 ).networkadapters[0]).ipaddresses[0]
	I0210 11:57:57.413860   11096 main.go:141] libmachine: [stdout =====>] : 172.29.136.201
	
	I0210 11:57:57.413860   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:57:57.414815   11096 sshutil.go:53] new ssh client: &{IP:172.29.136.201 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-032400\id_rsa Username:docker}
	I0210 11:57:57.518111   11096 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.3660375s)
	I0210 11:57:57.530088   11096 ssh_runner.go:195] Run: cat /etc/os-release
	I0210 11:57:57.539518   11096 command_runner.go:130] > NAME=Buildroot
	I0210 11:57:57.539640   11096 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0210 11:57:57.539640   11096 command_runner.go:130] > ID=buildroot
	I0210 11:57:57.539640   11096 command_runner.go:130] > VERSION_ID=2023.02.9
	I0210 11:57:57.539640   11096 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0210 11:57:57.539747   11096 info.go:137] Remote host: Buildroot 2023.02.9
	I0210 11:57:57.539747   11096 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0210 11:57:57.540000   11096 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0210 11:57:57.540781   11096 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\117642.pem -> 117642.pem in /etc/ssl/certs
	I0210 11:57:57.540781   11096 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\117642.pem -> /etc/ssl/certs/117642.pem
	I0210 11:57:57.552027   11096 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0210 11:57:57.572474   11096 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\117642.pem --> /etc/ssl/certs/117642.pem (1708 bytes)
	I0210 11:57:57.615792   11096 start.go:296] duration metric: took 4.4718211s for postStartSetup
	I0210 11:57:57.617591   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400 ).state
	I0210 11:57:59.626827   11096 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:57:59.626827   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:57:59.626827   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400 ).networkadapters[0]).ipaddresses[0]
	I0210 11:58:02.013614   11096 main.go:141] libmachine: [stdout =====>] : 172.29.136.201
	
	I0210 11:58:02.013678   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:58:02.013678   11096 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-032400\config.json ...
	I0210 11:58:02.016148   11096 start.go:128] duration metric: took 1m53.6332115s to createHost
	I0210 11:58:02.016148   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400 ).state
	I0210 11:58:04.004362   11096 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:58:04.004362   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:58:04.005319   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400 ).networkadapters[0]).ipaddresses[0]
	I0210 11:58:06.362130   11096 main.go:141] libmachine: [stdout =====>] : 172.29.136.201
	
	I0210 11:58:06.362450   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:58:06.366062   11096 main.go:141] libmachine: Using SSH client type: native
	I0210 11:58:06.366678   11096 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xde6ba0] 0xde96e0 <nil>  [] 0s} 172.29.136.201 22 <nil> <nil>}
	I0210 11:58:06.366678   11096 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0210 11:58:06.500665   11096 main.go:141] libmachine: SSH cmd err, output: <nil>: 1739188686.514543609
	
	I0210 11:58:06.500665   11096 fix.go:216] guest clock: 1739188686.514543609
	I0210 11:58:06.500665   11096 fix.go:229] Guest: 2025-02-10 11:58:06.514543609 +0000 UTC Remote: 2025-02-10 11:58:02.0161482 +0000 UTC m=+118.832046401 (delta=4.498395409s)
	I0210 11:58:06.500825   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400 ).state
	I0210 11:58:08.496196   11096 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:58:08.497026   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:58:08.497103   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400 ).networkadapters[0]).ipaddresses[0]
	I0210 11:58:10.914696   11096 main.go:141] libmachine: [stdout =====>] : 172.29.136.201
	
	I0210 11:58:10.914696   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:58:10.918238   11096 main.go:141] libmachine: Using SSH client type: native
	I0210 11:58:10.918474   11096 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xde6ba0] 0xde96e0 <nil>  [] 0s} 172.29.136.201 22 <nil> <nil>}
	I0210 11:58:10.918474   11096 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1739188686
	I0210 11:58:11.063636   11096 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Feb 10 11:58:06 UTC 2025
	
	I0210 11:58:11.063680   11096 fix.go:236] clock set: Mon Feb 10 11:58:06 UTC 2025
	 (err=<nil>)
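The clock fix above reads the guest's time with "date +%s.%N", compares it against the host timestamp, and rewrites it with "sudo date -s" when they drift apart. A minimal sketch of the delta computation using the timestamps from this log; the 2-second threshold is an assumption for illustration, not minikube's actual cutoff:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// clockDelta parses the guest's `date +%s.%N` output and returns how far
// the guest clock sits ahead of (or behind) the host clock.
func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(host), nil
}

func main() {
	// "Remote" timestamp recorded by fix.go above: 11:58:02.0161482 UTC.
	host := time.Unix(1739188682, 16148200)
	delta, err := clockDelta("1739188686.514543609", host)
	if err != nil {
		panic(err)
	}
	fmt.Printf("delta=%s\n", delta) // about 4.4984s, matching the logged delta
	if delta > 2*time.Second || delta < -2*time.Second {
		fmt.Println("would run: sudo date -s @1739188686")
	}
}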
	I0210 11:58:11.063707   11096 start.go:83] releasing machines lock for "multinode-032400", held for 2m2.6806714s
	I0210 11:58:11.063707   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400 ).state
	I0210 11:58:13.004587   11096 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:58:13.004587   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:58:13.004587   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400 ).networkadapters[0]).ipaddresses[0]
	I0210 11:58:15.350107   11096 main.go:141] libmachine: [stdout =====>] : 172.29.136.201
	
	I0210 11:58:15.350558   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:58:15.354438   11096 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0210 11:58:15.354604   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400 ).state
	I0210 11:58:15.361097   11096 ssh_runner.go:195] Run: cat /version.json
	I0210 11:58:15.361097   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400 ).state
	I0210 11:58:17.372485   11096 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:58:17.372485   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:58:17.372660   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400 ).networkadapters[0]).ipaddresses[0]
	I0210 11:58:17.372660   11096 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:58:17.373181   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:58:17.373280   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400 ).networkadapters[0]).ipaddresses[0]
	I0210 11:58:19.929538   11096 main.go:141] libmachine: [stdout =====>] : 172.29.136.201
	
	I0210 11:58:19.929618   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:58:19.929766   11096 sshutil.go:53] new ssh client: &{IP:172.29.136.201 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-032400\id_rsa Username:docker}
	I0210 11:58:19.954758   11096 main.go:141] libmachine: [stdout =====>] : 172.29.136.201
	
	I0210 11:58:19.954758   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:58:19.955357   11096 sshutil.go:53] new ssh client: &{IP:172.29.136.201 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-032400\id_rsa Username:docker}
	I0210 11:58:20.022568   11096 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	I0210 11:58:20.022568   11096 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.6679955s)
	W0210 11:58:20.023100   11096 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0210 11:58:20.056212   11096 command_runner.go:130] > {"iso_version": "v1.35.0", "kicbase_version": "v0.0.45-1736763277-20236", "minikube_version": "v1.35.0", "commit": "3fb24bd87c8c8761e2515e1a9ee13835a389ed68"}
	I0210 11:58:20.056212   11096 ssh_runner.go:235] Completed: cat /version.json: (4.6950637s)
	I0210 11:58:20.063886   11096 ssh_runner.go:195] Run: systemctl --version
	I0210 11:58:20.074240   11096 command_runner.go:130] > systemd 252 (252)
	I0210 11:58:20.074240   11096 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0210 11:58:20.082310   11096 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0210 11:58:20.095377   11096 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0210 11:58:20.095827   11096 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0210 11:58:20.103824   11096 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0210 11:58:20.137520   11096 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0210 11:58:20.137520   11096 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0210 11:58:20.137520   11096 start.go:495] detecting cgroup driver to use...
	I0210 11:58:20.137520   11096 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W0210 11:58:20.141537   11096 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0210 11:58:20.142566   11096 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0210 11:58:20.181133   11096 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0210 11:58:20.189572   11096 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0210 11:58:20.219857   11096 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0210 11:58:20.244244   11096 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0210 11:58:20.253026   11096 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0210 11:58:20.287562   11096 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0210 11:58:20.322558   11096 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0210 11:58:20.352476   11096 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0210 11:58:20.383970   11096 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0210 11:58:20.415860   11096 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0210 11:58:20.448043   11096 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0210 11:58:20.481779   11096 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0210 11:58:20.513404   11096 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0210 11:58:20.532952   11096 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0210 11:58:20.533242   11096 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0210 11:58:20.551304   11096 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0210 11:58:20.583257   11096 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
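The sequence above probes the bridge-netfilter sysctl, loads br_netfilter when the sysctl file is missing, and enables IPv4 forwarding. A sketch of the same probe-then-modprobe logic, assuming it runs locally with sudo available rather than through the SSH runner:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// ensureBridgeNetfilter mirrors the logged sequence: if the
// bridge-nf-call-iptables sysctl is absent, load br_netfilter, then
// turn on IPv4 forwarding (both steps need root).
func ensureBridgeNetfilter() error {
	const key = "/proc/sys/net/bridge/bridge-nf-call-iptables"
	if _, err := os.Stat(key); err == nil {
		return nil // module already loaded, nothing to do
	}
	if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
		return fmt.Errorf("modprobe br_netfilter: %w", err)
	}
	return exec.Command("sudo", "sh", "-c",
		"echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("bridge netfilter available")
}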
	I0210 11:58:20.614568   11096 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 11:58:20.831410   11096 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0210 11:58:20.870689   11096 start.go:495] detecting cgroup driver to use...
	I0210 11:58:20.880073   11096 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0210 11:58:20.904683   11096 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0210 11:58:20.904683   11096 command_runner.go:130] > [Unit]
	I0210 11:58:20.904683   11096 command_runner.go:130] > Description=Docker Application Container Engine
	I0210 11:58:20.904803   11096 command_runner.go:130] > Documentation=https://docs.docker.com
	I0210 11:58:20.904803   11096 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0210 11:58:20.904803   11096 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0210 11:58:20.904803   11096 command_runner.go:130] > StartLimitBurst=3
	I0210 11:58:20.904873   11096 command_runner.go:130] > StartLimitIntervalSec=60
	I0210 11:58:20.904914   11096 command_runner.go:130] > [Service]
	I0210 11:58:20.904914   11096 command_runner.go:130] > Type=notify
	I0210 11:58:20.904914   11096 command_runner.go:130] > Restart=on-failure
	I0210 11:58:20.905046   11096 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0210 11:58:20.905071   11096 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0210 11:58:20.905145   11096 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0210 11:58:20.905178   11096 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0210 11:58:20.905256   11096 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0210 11:58:20.905256   11096 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0210 11:58:20.905256   11096 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0210 11:58:20.905372   11096 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0210 11:58:20.905437   11096 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0210 11:58:20.905437   11096 command_runner.go:130] > ExecStart=
	I0210 11:58:20.905506   11096 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0210 11:58:20.905506   11096 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0210 11:58:20.905506   11096 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0210 11:58:20.905627   11096 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0210 11:58:20.905627   11096 command_runner.go:130] > LimitNOFILE=infinity
	I0210 11:58:20.905700   11096 command_runner.go:130] > LimitNPROC=infinity
	I0210 11:58:20.905731   11096 command_runner.go:130] > LimitCORE=infinity
	I0210 11:58:20.905804   11096 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0210 11:58:20.905835   11096 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0210 11:58:20.905835   11096 command_runner.go:130] > TasksMax=infinity
	I0210 11:58:20.905835   11096 command_runner.go:130] > TimeoutStartSec=0
	I0210 11:58:20.905937   11096 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0210 11:58:20.905937   11096 command_runner.go:130] > Delegate=yes
	I0210 11:58:20.906008   11096 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0210 11:58:20.906038   11096 command_runner.go:130] > KillMode=process
	I0210 11:58:20.906038   11096 command_runner.go:130] > [Install]
	I0210 11:58:20.906110   11096 command_runner.go:130] > WantedBy=multi-user.target
	I0210 11:58:20.915350   11096 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0210 11:58:20.950061   11096 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0210 11:58:21.007867   11096 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0210 11:58:21.041744   11096 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0210 11:58:21.079459   11096 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0210 11:58:21.192307   11096 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0210 11:58:21.219899   11096 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0210 11:58:21.256913   11096 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0210 11:58:21.265486   11096 ssh_runner.go:195] Run: which cri-dockerd
	I0210 11:58:21.271548   11096 command_runner.go:130] > /usr/bin/cri-dockerd
	I0210 11:58:21.280192   11096 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0210 11:58:21.298875   11096 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0210 11:58:21.338237   11096 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0210 11:58:21.539797   11096 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0210 11:58:21.722479   11096 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0210 11:58:21.722758   11096 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
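The 130-byte daemon.json pushed here is what switches dockerd to the cgroupfs driver. Its exact contents are not shown in the log, so the sketch below only writes the cgroup-driver field the surrounding lines imply; any other fields minikube sets are omitted:

package main

import (
	"encoding/json"
	"fmt"
)

// daemonJSON renders a minimal /etc/docker/daemon.json that pins the
// cgroup driver; the field set is an assumption, not a copy of what
// minikube actually generates.
func daemonJSON(driver string) ([]byte, error) {
	cfg := map[string][]string{
		"exec-opts": {"native.cgroupdriver=" + driver},
	}
	return json.MarshalIndent(cfg, "", "  ")
}

func main() {
	b, err := daemonJSON("cgroupfs")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(b))
}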
	I0210 11:58:21.769882   11096 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 11:58:21.973886   11096 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0210 11:58:24.942780   11096 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.9688136s)
	I0210 11:58:24.952414   11096 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0210 11:58:24.985822   11096 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0210 11:58:25.021634   11096 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0210 11:58:25.225262   11096 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0210 11:58:25.421642   11096 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 11:58:25.612754   11096 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0210 11:58:25.651626   11096 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0210 11:58:25.684676   11096 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 11:58:25.875176   11096 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0210 11:58:25.986180   11096 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0210 11:58:25.994435   11096 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0210 11:58:26.002727   11096 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0210 11:58:26.002727   11096 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0210 11:58:26.002727   11096 command_runner.go:130] > Device: 0,22	Inode: 903         Links: 1
	I0210 11:58:26.002727   11096 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0210 11:58:26.002727   11096 command_runner.go:130] > Access: 2025-02-10 11:58:25.915302802 +0000
	I0210 11:58:26.002727   11096 command_runner.go:130] > Modify: 2025-02-10 11:58:25.915302802 +0000
	I0210 11:58:26.002963   11096 command_runner.go:130] > Change: 2025-02-10 11:58:25.919302813 +0000
	I0210 11:58:26.002963   11096 command_runner.go:130] >  Birth: -
	I0210 11:58:26.003173   11096 start.go:563] Will wait 60s for crictl version
	I0210 11:58:26.011721   11096 ssh_runner.go:195] Run: which crictl
	I0210 11:58:26.020636   11096 command_runner.go:130] > /usr/bin/crictl
	I0210 11:58:26.030100   11096 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0210 11:58:26.091164   11096 command_runner.go:130] > Version:  0.1.0
	I0210 11:58:26.091251   11096 command_runner.go:130] > RuntimeName:  docker
	I0210 11:58:26.091251   11096 command_runner.go:130] > RuntimeVersion:  27.4.0
	I0210 11:58:26.091251   11096 command_runner.go:130] > RuntimeApiVersion:  v1
	I0210 11:58:26.091306   11096 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.4.0
	RuntimeApiVersion:  v1
	I0210 11:58:26.099293   11096 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0210 11:58:26.135155   11096 command_runner.go:130] > 27.4.0
	I0210 11:58:26.143167   11096 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0210 11:58:26.179118   11096 command_runner.go:130] > 27.4.0
	I0210 11:58:26.206467   11096 out.go:235] * Preparing Kubernetes v1.32.1 on Docker 27.4.0 ...
	I0210 11:58:26.206847   11096 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0210 11:58:26.211316   11096 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0210 11:58:26.211440   11096 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0210 11:58:26.211440   11096 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0210 11:58:26.211440   11096 ip.go:211] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:8b:81:3d Flags:up|broadcast|multicast|running}
	I0210 11:58:26.214297   11096 ip.go:214] interface addr: fe80::b840:883f:e0df:bdfd/64
	I0210 11:58:26.214297   11096 ip.go:214] interface addr: 172.29.128.1/20
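The ip.go search above is a prefix match over the host's network adapters: skip every interface whose name does not start with "vEthernet (Default Switch)", then read the addresses of the first match. A sketch of the equivalent lookup with Go's net package:

package main

import (
	"fmt"
	"net"
	"strings"
)

// findInterfaceByPrefix walks the host's interfaces and returns the
// first whose name starts with prefix, as the ip.go lines above log.
func findInterfaceByPrefix(prefix string) (*net.Interface, error) {
	ifaces, err := net.Interfaces()
	if err != nil {
		return nil, err
	}
	for i := range ifaces {
		if strings.HasPrefix(ifaces[i].Name, prefix) {
			return &ifaces[i], nil
		}
	}
	return nil, fmt.Errorf("no interface matches %q", prefix)
}

func main() {
	iface, err := findInterfaceByPrefix("vEthernet (Default Switch)")
	if err != nil {
		fmt.Println(err)
		return
	}
	addrs, _ := iface.Addrs()
	fmt.Println(iface.Name, addrs)
}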
	I0210 11:58:26.221338   11096 ssh_runner.go:195] Run: grep 172.29.128.1	host.minikube.internal$ /etc/hosts
	I0210 11:58:26.229203   11096 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.29.128.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
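The /etc/hosts refresh above works by filtering out any stale mapping, appending the new entry to a PID-keyed temp file, and copying the result back over /etc/hosts as root. A sketch that rebuilds the logged one-liner:

package main

import "fmt"

// hostsUpdateCmd reproduces the /etc/hosts refresh from the log: strip
// any previous tab-separated mapping for name, append the new ip, write
// to a temp file keyed by the shell PID, then copy it back as root.
func hostsUpdateCmd(ip, name string) string {
	return fmt.Sprintf(
		`{ grep -v $'\t%[2]s$' "/etc/hosts"; echo "%[1]s`+"\t"+`%[2]s"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts"`,
		ip, name)
}

func main() {
	fmt.Println(hostsUpdateCmd("172.29.128.1", "host.minikube.internal"))
}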
	I0210 11:58:26.251510   11096 kubeadm.go:883] updating cluster {Name:multinode-032400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:multinode-032400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.29.136.201 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0210 11:58:26.251694   11096 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime docker
	I0210 11:58:26.257432   11096 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0210 11:58:26.285364   11096 docker.go:689] Got preloaded images: 
	I0210 11:58:26.285364   11096 docker.go:695] registry.k8s.io/kube-apiserver:v1.32.1 wasn't preloaded
	I0210 11:58:26.295411   11096 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0210 11:58:26.315903   11096 command_runner.go:139] > {"Repositories":{}}
	I0210 11:58:26.327261   11096 ssh_runner.go:195] Run: which lz4
	I0210 11:58:26.333095   11096 command_runner.go:130] > /usr/bin/lz4
	I0210 11:58:26.333095   11096 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0210 11:58:26.341343   11096 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0210 11:58:26.349924   11096 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0210 11:58:26.349924   11096 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0210 11:58:26.349924   11096 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (349810983 bytes)
	I0210 11:58:27.760918   11096 docker.go:653] duration metric: took 1.4272796s to copy over tarball
	I0210 11:58:27.769012   11096 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0210 11:58:43.448183   11096 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (15.6789515s)
	I0210 11:58:43.448215   11096 ssh_runner.go:146] rm: /preloaded.tar.lz4
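Preloaded images arrive as an lz4-compressed tarball scp'd into the VM and unpacked directly under /var, preserving security xattrs so file capabilities inside the image layers survive extraction. A sketch of the logged tar invocation, assuming it runs locally in the guest:

package main

import (
	"fmt"
	"os/exec"
)

// extractPreload mirrors the tar command above: decompress the lz4
// preload in place under /var, keeping security.capability xattrs.
func extractPreload(tarball string) error {
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("tar: %v: %s", err, out)
	}
	return nil
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4"); err != nil {
		fmt.Println(err)
	}
}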
	I0210 11:58:43.513123   11096 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0210 11:58:43.531751   11096 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.11.3":"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e":"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.16-0":"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc","registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5":"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.32.1":"sha256:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a","registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac":"sha256:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.32.1":"sha256:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35","registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954":"sha256:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.32.1":"sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a","registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5":"sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.32.1":"sha256:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1","registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e":"sha256:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.10":"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a":"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136"}}}
	I0210 11:58:43.532384   11096 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2631 bytes)
	I0210 11:58:43.575074   11096 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 11:58:43.771057   11096 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0210 11:58:47.496558   11096 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.7254599s)
	I0210 11:58:47.505390   11096 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0210 11:58:47.533733   11096 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.32.1
	I0210 11:58:47.533810   11096 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.32.1
	I0210 11:58:47.533810   11096 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.32.1
	I0210 11:58:47.533810   11096 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.32.1
	I0210 11:58:47.533810   11096 command_runner.go:130] > registry.k8s.io/etcd:3.5.16-0
	I0210 11:58:47.533853   11096 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.3
	I0210 11:58:47.533853   11096 command_runner.go:130] > registry.k8s.io/pause:3.10
	I0210 11:58:47.533886   11096 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0210 11:58:47.533929   11096 docker.go:689] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.32.1
	registry.k8s.io/kube-controller-manager:v1.32.1
	registry.k8s.io/kube-scheduler:v1.32.1
	registry.k8s.io/kube-proxy:v1.32.1
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0210 11:58:47.534004   11096 cache_images.go:84] Images are preloaded, skipping loading
	I0210 11:58:47.534043   11096 kubeadm.go:934] updating node { 172.29.136.201 8443 v1.32.1 docker true true} ...
	I0210 11:58:47.534293   11096 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-032400 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.29.136.201
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:multinode-032400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
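The kubelet drop-in above is rendered from a template with the node name, IP, and Kubernetes version filled in. A cut-down sketch of that rendering; the template text and field names here are illustrative, not minikube's actual template variables:

package main

import (
	"os"
	"text/template"
)

// kubeletUnit is a trimmed version of the drop-in logged above, with
// the values that vary per node exposed as template fields.
const kubeletUnit = `[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}
`

func main() {
	t := template.Must(template.New("kubelet").Parse(kubeletUnit))
	t.Execute(os.Stdout, map[string]string{
		"KubernetesVersion": "v1.32.1",
		"NodeName":          "multinode-032400",
		"NodeIP":            "172.29.136.201",
	})
}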
	I0210 11:58:47.541920   11096 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0210 11:58:47.606913   11096 command_runner.go:130] > cgroupfs
	I0210 11:58:47.607046   11096 cni.go:84] Creating CNI manager for ""
	I0210 11:58:47.607106   11096 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0210 11:58:47.607188   11096 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0210 11:58:47.607231   11096 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.29.136.201 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-032400 NodeName:multinode-032400 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.29.136.201"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.29.136.201 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0210 11:58:47.607495   11096 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.29.136.201
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-032400"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "172.29.136.201"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.29.136.201"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0210 11:58:47.616773   11096 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0210 11:58:47.635404   11096 command_runner.go:130] > kubeadm
	I0210 11:58:47.636367   11096 command_runner.go:130] > kubectl
	I0210 11:58:47.636367   11096 command_runner.go:130] > kubelet
	I0210 11:58:47.636367   11096 binaries.go:44] Found k8s binaries, skipping transfer
	I0210 11:58:47.645688   11096 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0210 11:58:47.669373   11096 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0210 11:58:47.700969   11096 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0210 11:58:47.735077   11096 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2300 bytes)
	I0210 11:58:47.773691   11096 ssh_runner.go:195] Run: grep 172.29.136.201	control-plane.minikube.internal$ /etc/hosts
	I0210 11:58:47.780122   11096 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.29.136.201	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0210 11:58:47.810902   11096 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 11:58:48.016119   11096 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0210 11:58:48.048846   11096 certs.go:68] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-032400 for IP: 172.29.136.201
	I0210 11:58:48.048846   11096 certs.go:194] generating shared ca certs ...
	I0210 11:58:48.048957   11096 certs.go:226] acquiring lock for ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 11:58:48.049233   11096 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0210 11:58:48.049952   11096 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0210 11:58:48.050095   11096 certs.go:256] generating profile certs ...
	I0210 11:58:48.050689   11096 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-032400\client.key
	I0210 11:58:48.050846   11096 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-032400\client.crt with IP's: []
	I0210 11:58:48.190763   11096 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-032400\client.crt ...
	I0210 11:58:48.190763   11096 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-032400\client.crt: {Name:mk80c7e420907f082c63318f61e2f8b2e290b5e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 11:58:48.192620   11096 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-032400\client.key ...
	I0210 11:58:48.192620   11096 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-032400\client.key: {Name:mk7aabced437a9a6dca21849bb6fc5c2e49d4e3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 11:58:48.192876   11096 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-032400\apiserver.key.dc9ffee4
	I0210 11:58:48.193896   11096 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-032400\apiserver.crt.dc9ffee4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.29.136.201]
	I0210 11:58:48.436644   11096 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-032400\apiserver.crt.dc9ffee4 ...
	I0210 11:58:48.436644   11096 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-032400\apiserver.crt.dc9ffee4: {Name:mk4fe2bbcf097dea44ac2a8e14d1a655697a5b94 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 11:58:48.437646   11096 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-032400\apiserver.key.dc9ffee4 ...
	I0210 11:58:48.437646   11096 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-032400\apiserver.key.dc9ffee4: {Name:mk88f8385ac34bdb8f6aea9d1d66388a67c39e57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 11:58:48.438239   11096 certs.go:381] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-032400\apiserver.crt.dc9ffee4 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-032400\apiserver.crt
	I0210 11:58:48.453558   11096 certs.go:385] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-032400\apiserver.key.dc9ffee4 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-032400\apiserver.key
	I0210 11:58:48.454495   11096 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-032400\proxy-client.key
	I0210 11:58:48.455476   11096 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-032400\proxy-client.crt with IP's: []
	I0210 11:58:48.696477   11096 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-032400\proxy-client.crt ...
	I0210 11:58:48.697477   11096 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-032400\proxy-client.crt: {Name:mk6acb846b5eb9e48c59230aa016b5570fcef671 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 11:58:48.697789   11096 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-032400\proxy-client.key ...
	I0210 11:58:48.697789   11096 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-032400\proxy-client.key: {Name:mk183778b4ad884cfe7743393e33d2cd4d09de9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
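The profile certificates above are generated locally and signed by the shared CA, with the service and node IPs baked into the apiserver cert's SANs. A minimal sketch of that step using crypto/x509, with a throwaway CA standing in for the persisted minikubeCA key pair (errors from key generation are skipped for brevity):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA standing in for the shared minikubeCA key pair.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	ca, _ := x509.ParseCertificate(caDER)

	// Server cert with the SAN IPs taken from the log line above.
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("172.29.136.201"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("signed apiserver cert, %d DER bytes\n", len(der))
}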
	I0210 11:58:48.698804   11096 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0210 11:58:48.698804   11096 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0210 11:58:48.699803   11096 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0210 11:58:48.699803   11096 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0210 11:58:48.699803   11096 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-032400\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0210 11:58:48.699803   11096 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-032400\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0210 11:58:48.699803   11096 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-032400\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0210 11:58:48.712977   11096 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-032400\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0210 11:58:48.713936   11096 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\11764.pem (1338 bytes)
	W0210 11:58:48.714555   11096 certs.go:480] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\11764_empty.pem, impossibly tiny 0 bytes
	I0210 11:58:48.714644   11096 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0210 11:58:48.714885   11096 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0210 11:58:48.715092   11096 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0210 11:58:48.715259   11096 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0210 11:58:48.715259   11096 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\117642.pem (1708 bytes)
	I0210 11:58:48.715259   11096 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0210 11:58:48.715797   11096 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\11764.pem -> /usr/share/ca-certificates/11764.pem
	I0210 11:58:48.715935   11096 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\117642.pem -> /usr/share/ca-certificates/117642.pem
	I0210 11:58:48.716880   11096 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0210 11:58:48.765165   11096 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0210 11:58:48.809900   11096 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0210 11:58:48.854955   11096 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0210 11:58:48.902172   11096 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-032400\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0210 11:58:48.946005   11096 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-032400\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0210 11:58:48.996041   11096 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-032400\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0210 11:58:49.039832   11096 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-032400\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0210 11:58:49.081331   11096 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0210 11:58:49.126440   11096 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\11764.pem --> /usr/share/ca-certificates/11764.pem (1338 bytes)
	I0210 11:58:49.170252   11096 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\117642.pem --> /usr/share/ca-certificates/117642.pem (1708 bytes)
	I0210 11:58:49.217145   11096 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
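
The scp operations between 11:58:48.716 and 11:58:49.217 push the host-side PKI into the guest: profile certificates land in /var/lib/minikube/certs for kubeadm, CA material is duplicated into /usr/share/ca-certificates for the OS trust store, and the kubeconfig is written straight from memory. A minimal Go sketch of that host-to-guest mapping, using representative paths taken from this run (the printed scp stands in for minikube's ssh_runner, which streams the files over its SSH session):

    package main

    import "fmt"

    func main() {
        // Representative host -> guest copies, taken from the log above.
        assets := map[string]string{
            `C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt`:                                  "/var/lib/minikube/certs/ca.crt",
            `C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-032400\apiserver.crt`: "/var/lib/minikube/certs/apiserver.crt",
            `C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\117642.pem`:          "/usr/share/ca-certificates/117642.pem",
        }
        for src, dst := range assets {
            fmt.Printf("scp %s --> %s\n", src, dst)
        }
    }
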
	I0210 11:58:49.255766   11096 ssh_runner.go:195] Run: openssl version
	I0210 11:58:49.264151   11096 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0210 11:58:49.271813   11096 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0210 11:58:49.298155   11096 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0210 11:58:49.308956   11096 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Feb 10 10:24 /usr/share/ca-certificates/minikubeCA.pem
	I0210 11:58:49.308956   11096 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb 10 10:24 /usr/share/ca-certificates/minikubeCA.pem
	I0210 11:58:49.317020   11096 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0210 11:58:49.325436   11096 command_runner.go:130] > b5213941
	I0210 11:58:49.333568   11096 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0210 11:58:49.359266   11096 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11764.pem && ln -fs /usr/share/ca-certificates/11764.pem /etc/ssl/certs/11764.pem"
	I0210 11:58:49.388561   11096 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11764.pem
	I0210 11:58:49.399427   11096 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Feb 10 10:40 /usr/share/ca-certificates/11764.pem
	I0210 11:58:49.399500   11096 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb 10 10:40 /usr/share/ca-certificates/11764.pem
	I0210 11:58:49.407097   11096 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11764.pem
	I0210 11:58:49.416314   11096 command_runner.go:130] > 51391683
	I0210 11:58:49.424028   11096 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11764.pem /etc/ssl/certs/51391683.0"
	I0210 11:58:49.451530   11096 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/117642.pem && ln -fs /usr/share/ca-certificates/117642.pem /etc/ssl/certs/117642.pem"
	I0210 11:58:49.479278   11096 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/117642.pem
	I0210 11:58:49.486325   11096 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Feb 10 10:40 /usr/share/ca-certificates/117642.pem
	I0210 11:58:49.486325   11096 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb 10 10:40 /usr/share/ca-certificates/117642.pem
	I0210 11:58:49.494274   11096 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/117642.pem
	I0210 11:58:49.503315   11096 command_runner.go:130] > 3ec20f2e
	I0210 11:58:49.513462   11096 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/117642.pem /etc/ssl/certs/3ec20f2e.0"
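
The three test/ls/openssl/ln rounds above (11:58:49.271 through 11:58:49.513) implement OpenSSL's hashed-directory convention: each CA certificate under /usr/share/ca-certificates is symlinked as <subject-hash>.0 in /etc/ssl/certs (b5213941, 51391683 and 3ec20f2e in this run) so TLS clients can locate it by hash. A sketch of one round in Go, assuming openssl on PATH and write access to /etc/ssl/certs:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        pem := "/usr/share/ca-certificates/minikubeCA.pem" // first cert handled in the log
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            panic(err)
        }
        hash := strings.TrimSpace(string(out)) // "b5213941" for this CA
        link := "/etc/ssl/certs/" + hash + ".0"
        _ = os.Remove(link) // emulate ln -fs: replace any stale link
        if err := os.Symlink(pem, link); err != nil {
            panic(err)
        }
        fmt.Println("linked", link, "->", pem)
    }
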
	I0210 11:58:49.546036   11096 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0210 11:58:49.552986   11096 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0210 11:58:49.553541   11096 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0210 11:58:49.553718   11096 kubeadm.go:392] StartCluster: {Name:multinode-032400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:multinode-032400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.29.136.201 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 11:58:49.559983   11096 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0210 11:58:49.593555   11096 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0210 11:58:49.611849   11096 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0210 11:58:49.611849   11096 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0210 11:58:49.611849   11096 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0210 11:58:49.621198   11096 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0210 11:58:49.648473   11096 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0210 11:58:49.666034   11096 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0210 11:58:49.666034   11096 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0210 11:58:49.667030   11096 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0210 11:58:49.667030   11096 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0210 11:58:49.667030   11096 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0210 11:58:49.667030   11096 kubeadm.go:157] found existing configuration files:
	
	I0210 11:58:49.674025   11096 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0210 11:58:49.695562   11096 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0210 11:58:49.695623   11096 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0210 11:58:49.703055   11096 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0210 11:58:49.727185   11096 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0210 11:58:49.743220   11096 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0210 11:58:49.743311   11096 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0210 11:58:49.751174   11096 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0210 11:58:49.779500   11096 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0210 11:58:49.797582   11096 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0210 11:58:49.798261   11096 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0210 11:58:49.807576   11096 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0210 11:58:49.834001   11096 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0210 11:58:49.851749   11096 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0210 11:58:49.851749   11096 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0210 11:58:49.860514   11096 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0210 11:58:49.879717   11096 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
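
The init invocation passes --ignore-preflight-errors for every check minikube has already satisfied itself (directories it pre-created, manifests it wrote, the kubelet port, swap, CPU and memory limits). A small sketch of how such a comma-joined value is assembled, with the check names copied from the command line above:

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        ignores := []string{
            "DirAvailable--etc-kubernetes-manifests",
            "DirAvailable--var-lib-minikube",
            "DirAvailable--var-lib-minikube-etcd",
            "FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml",
            "FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml",
            "FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml",
            "FileAvailable--etc-kubernetes-manifests-etcd.yaml",
            "Port-10250", "Swap", "NumCPU", "Mem",
        }
        fmt.Println("--ignore-preflight-errors=" + strings.Join(ignores, ","))
    }
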
	I0210 11:58:50.230189   11096 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0210 11:58:50.230189   11096 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0210 11:59:08.149649   11096 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0210 11:59:08.150413   11096 command_runner.go:130] > [init] Using Kubernetes version: v1.32.1
	I0210 11:59:08.150461   11096 kubeadm.go:310] [preflight] Running pre-flight checks
	I0210 11:59:08.150564   11096 command_runner.go:130] > [preflight] Running pre-flight checks
	I0210 11:59:08.150654   11096 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0210 11:59:08.150654   11096 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0210 11:59:08.150654   11096 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0210 11:59:08.150654   11096 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0210 11:59:08.150654   11096 command_runner.go:130] > [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0210 11:59:08.150654   11096 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0210 11:59:08.151251   11096 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0210 11:59:08.151251   11096 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0210 11:59:08.154419   11096 out.go:235]   - Generating certificates and keys ...
	I0210 11:59:08.154456   11096 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0210 11:59:08.154456   11096 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0210 11:59:08.154456   11096 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0210 11:59:08.154456   11096 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0210 11:59:08.154456   11096 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0210 11:59:08.154456   11096 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0210 11:59:08.155116   11096 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0210 11:59:08.155116   11096 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0210 11:59:08.155116   11096 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0210 11:59:08.155116   11096 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0210 11:59:08.155116   11096 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0210 11:59:08.155116   11096 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0210 11:59:08.155116   11096 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0210 11:59:08.155116   11096 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0210 11:59:08.155894   11096 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-032400] and IPs [172.29.136.201 127.0.0.1 ::1]
	I0210 11:59:08.155894   11096 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-032400] and IPs [172.29.136.201 127.0.0.1 ::1]
	I0210 11:59:08.156038   11096 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0210 11:59:08.156038   11096 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0210 11:59:08.156038   11096 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-032400] and IPs [172.29.136.201 127.0.0.1 ::1]
	I0210 11:59:08.156038   11096 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-032400] and IPs [172.29.136.201 127.0.0.1 ::1]
	I0210 11:59:08.156038   11096 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0210 11:59:08.156038   11096 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0210 11:59:08.156608   11096 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0210 11:59:08.156608   11096 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0210 11:59:08.156824   11096 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0210 11:59:08.156858   11096 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0210 11:59:08.157038   11096 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0210 11:59:08.157038   11096 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0210 11:59:08.157216   11096 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0210 11:59:08.157216   11096 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0210 11:59:08.157351   11096 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0210 11:59:08.157351   11096 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0210 11:59:08.157351   11096 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0210 11:59:08.157351   11096 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0210 11:59:08.157351   11096 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0210 11:59:08.157351   11096 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0210 11:59:08.157351   11096 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0210 11:59:08.157351   11096 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0210 11:59:08.157951   11096 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0210 11:59:08.157951   11096 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0210 11:59:08.158094   11096 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0210 11:59:08.158094   11096 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0210 11:59:08.160720   11096 out.go:235]   - Booting up control plane ...
	I0210 11:59:08.161355   11096 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0210 11:59:08.161355   11096 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0210 11:59:08.161355   11096 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0210 11:59:08.161355   11096 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0210 11:59:08.161355   11096 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0210 11:59:08.161355   11096 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0210 11:59:08.162025   11096 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0210 11:59:08.162025   11096 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0210 11:59:08.162025   11096 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0210 11:59:08.162025   11096 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0210 11:59:08.162025   11096 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0210 11:59:08.162025   11096 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0210 11:59:08.162025   11096 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0210 11:59:08.162025   11096 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0210 11:59:08.162681   11096 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0210 11:59:08.162681   11096 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0210 11:59:08.162681   11096 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.20153ms
	I0210 11:59:08.162681   11096 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 501.20153ms
	I0210 11:59:08.162681   11096 command_runner.go:130] > [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0210 11:59:08.162681   11096 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0210 11:59:08.163316   11096 command_runner.go:130] > [api-check] The API server is healthy after 12.502058976s
	I0210 11:59:08.163316   11096 kubeadm.go:310] [api-check] The API server is healthy after 12.502058976s
	I0210 11:59:08.163365   11096 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0210 11:59:08.163365   11096 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0210 11:59:08.163365   11096 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0210 11:59:08.163365   11096 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0210 11:59:08.163943   11096 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0210 11:59:08.163943   11096 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0210 11:59:08.163997   11096 command_runner.go:130] > [mark-control-plane] Marking the node multinode-032400 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0210 11:59:08.163997   11096 kubeadm.go:310] [mark-control-plane] Marking the node multinode-032400 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0210 11:59:08.163997   11096 kubeadm.go:310] [bootstrap-token] Using token: mqn07e.id9rcmcnsedjjne2
	I0210 11:59:08.163997   11096 command_runner.go:130] > [bootstrap-token] Using token: mqn07e.id9rcmcnsedjjne2
	I0210 11:59:08.166769   11096 out.go:235]   - Configuring RBAC rules ...
	I0210 11:59:08.166906   11096 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0210 11:59:08.166906   11096 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0210 11:59:08.166906   11096 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0210 11:59:08.166906   11096 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0210 11:59:08.166906   11096 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0210 11:59:08.166906   11096 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0210 11:59:08.167756   11096 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0210 11:59:08.167756   11096 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0210 11:59:08.167756   11096 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0210 11:59:08.167756   11096 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0210 11:59:08.167756   11096 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0210 11:59:08.167756   11096 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0210 11:59:08.167756   11096 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0210 11:59:08.167756   11096 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0210 11:59:08.167756   11096 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0210 11:59:08.167756   11096 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0210 11:59:08.167756   11096 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0210 11:59:08.168768   11096 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0210 11:59:08.168768   11096 kubeadm.go:310] 
	I0210 11:59:08.168768   11096 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0210 11:59:08.168768   11096 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0210 11:59:08.168768   11096 kubeadm.go:310] 
	I0210 11:59:08.168768   11096 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0210 11:59:08.168768   11096 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0210 11:59:08.168768   11096 kubeadm.go:310] 
	I0210 11:59:08.168768   11096 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0210 11:59:08.168768   11096 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0210 11:59:08.168768   11096 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0210 11:59:08.168768   11096 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0210 11:59:08.168768   11096 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0210 11:59:08.168768   11096 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0210 11:59:08.168768   11096 kubeadm.go:310] 
	I0210 11:59:08.168768   11096 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0210 11:59:08.168768   11096 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0210 11:59:08.168768   11096 kubeadm.go:310] 
	I0210 11:59:08.169769   11096 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0210 11:59:08.169769   11096 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0210 11:59:08.169769   11096 kubeadm.go:310] 
	I0210 11:59:08.169769   11096 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0210 11:59:08.169769   11096 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0210 11:59:08.169769   11096 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0210 11:59:08.169769   11096 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0210 11:59:08.169769   11096 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0210 11:59:08.169769   11096 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0210 11:59:08.169769   11096 kubeadm.go:310] 
	I0210 11:59:08.169769   11096 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0210 11:59:08.169769   11096 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0210 11:59:08.169769   11096 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0210 11:59:08.169769   11096 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0210 11:59:08.169769   11096 kubeadm.go:310] 
	I0210 11:59:08.170847   11096 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token mqn07e.id9rcmcnsedjjne2 \
	I0210 11:59:08.170847   11096 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token mqn07e.id9rcmcnsedjjne2 \
	I0210 11:59:08.170847   11096 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:1b8cb99781302ec691f777094c36ac43bdecc74e7ca118c5fbf40794f47c93c3 \
	I0210 11:59:08.170847   11096 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1b8cb99781302ec691f777094c36ac43bdecc74e7ca118c5fbf40794f47c93c3 \
	I0210 11:59:08.170847   11096 kubeadm.go:310] 	--control-plane 
	I0210 11:59:08.170847   11096 command_runner.go:130] > 	--control-plane 
	I0210 11:59:08.170847   11096 kubeadm.go:310] 
	I0210 11:59:08.170847   11096 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0210 11:59:08.170847   11096 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0210 11:59:08.170847   11096 kubeadm.go:310] 
	I0210 11:59:08.171516   11096 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token mqn07e.id9rcmcnsedjjne2 \
	I0210 11:59:08.171516   11096 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token mqn07e.id9rcmcnsedjjne2 \
	I0210 11:59:08.171571   11096 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:1b8cb99781302ec691f777094c36ac43bdecc74e7ca118c5fbf40794f47c93c3 
	I0210 11:59:08.171767   11096 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1b8cb99781302ec691f777094c36ac43bdecc74e7ca118c5fbf40794f47c93c3 
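
The --discovery-token-ca-cert-hash value in both join commands is the SHA-256 digest of the cluster CA's DER-encoded Subject Public Key Info, which lets joining nodes pin the CA without a pre-shared file. A sketch that recomputes it from the CA certificate this run copied into the guest:

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        raw, err := os.ReadFile("/var/lib/minikube/certs/ca.crt") // pushed earlier in this log
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(raw)
        if block == nil {
            panic("no PEM block in ca.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        fmt.Printf("sha256:%x\n", sum) // should match the hash in the join commands above
    }
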
	I0210 11:59:08.171767   11096 cni.go:84] Creating CNI manager for ""
	I0210 11:59:08.171820   11096 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0210 11:59:08.174580   11096 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0210 11:59:08.183831   11096 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0210 11:59:08.192007   11096 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0210 11:59:08.192007   11096 command_runner.go:130] >   Size: 3103192   	Blocks: 6064       IO Block: 4096   regular file
	I0210 11:59:08.192007   11096 command_runner.go:130] > Device: 0,17	Inode: 3500        Links: 1
	I0210 11:59:08.192007   11096 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0210 11:59:08.192007   11096 command_runner.go:130] > Access: 2025-02-10 11:57:09.884979500 +0000
	I0210 11:59:08.192007   11096 command_runner.go:130] > Modify: 2025-01-14 09:03:58.000000000 +0000
	I0210 11:59:08.192007   11096 command_runner.go:130] > Change: 2025-02-10 11:57:00.793000000 +0000
	I0210 11:59:08.192007   11096 command_runner.go:130] >  Birth: -
	I0210 11:59:08.192007   11096 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.1/kubectl ...
	I0210 11:59:08.192007   11096 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0210 11:59:08.247698   11096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0210 11:59:08.832398   11096 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0210 11:59:08.893248   11096 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0210 11:59:08.922732   11096 command_runner.go:130] > serviceaccount/kindnet created
	I0210 11:59:08.959147   11096 command_runner.go:130] > daemonset.apps/kindnet created
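
Because cni.go:136 detected a multinode profile, minikube applied kindnet instead of the bridge CNI used for single-node clusters; the four "created" lines confirm the ClusterRole, binding, ServiceAccount and DaemonSet all landed. One way to confirm the DaemonSet is actually ready before joining workers, assuming kubectl on PATH with this cluster's kubeconfig:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("kubectl", "-n", "kube-system",
            "rollout", "status", "daemonset/kindnet", "--timeout=120s").CombinedOutput()
        fmt.Print(string(out))
        if err != nil {
            panic(err)
        }
    }
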
	I0210 11:59:08.962394   11096 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0210 11:59:08.972681   11096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-032400 minikube.k8s.io/updated_at=2025_02_10T11_59_08_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=a597502568cd649748018b4cfeb698a4b8b36160 minikube.k8s.io/name=multinode-032400 minikube.k8s.io/primary=true
	I0210 11:59:08.973659   11096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0210 11:59:09.030416   11096 command_runner.go:130] > -16
	I0210 11:59:09.030546   11096 ops.go:34] apiserver oom_adj: -16
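
The oom_adj probe at 11:59:08.962 verifies the API server is shielded from the kernel OOM killer; at -16 the kernel will sacrifice almost any other process first. The same probe as a standalone sketch, assuming a Linux /proc and a running kube-apiserver:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("pgrep", "kube-apiserver").Output()
        if err != nil {
            panic(err)
        }
        pid := strings.Fields(string(out))[0] // first matching PID
        raw, err := os.ReadFile("/proc/" + pid + "/oom_adj")
        if err != nil {
            panic(err)
        }
        fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(raw)))
    }
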
	I0210 11:59:09.657168   11096 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0210 11:59:09.669285   11096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0210 11:59:09.701273   11096 command_runner.go:130] > node/multinode-032400 labeled
	I0210 11:59:09.854067   11096 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0210 11:59:10.169721   11096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0210 11:59:10.325539   11096 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0210 11:59:10.671034   11096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0210 11:59:10.794868   11096 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0210 11:59:11.170949   11096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0210 11:59:11.299094   11096 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0210 11:59:11.670931   11096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0210 11:59:11.932391   11096 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0210 11:59:12.171584   11096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0210 11:59:12.837307   11096 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0210 11:59:12.848547   11096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0210 11:59:13.015619   11096 command_runner.go:130] > NAME      SECRETS   AGE
	I0210 11:59:13.015713   11096 command_runner.go:130] > default   0         1s
	I0210 11:59:13.015713   11096 kubeadm.go:1113] duration metric: took 4.0532407s to wait for elevateKubeSystemPrivileges
	I0210 11:59:13.015836   11096 kubeadm.go:394] duration metric: took 23.4618596s to StartCluster
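
The run of 'serviceaccounts "default" not found' errors above is expected rather than a failure: immediately after kubeadm init, the ServiceAccount controller has not yet created the default account, so minikube polls "kubectl get sa default" roughly every 500ms until it appears (just over 4s in this run). A sketch of that poll loop, assuming kubectl on PATH with a valid kubeconfig:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        deadline := time.Now().Add(time.Minute)
        for time.Now().Before(deadline) {
            if exec.Command("kubectl", "get", "sa", "default").Run() == nil {
                fmt.Println("default ServiceAccount exists")
                return
            }
            time.Sleep(500 * time.Millisecond) // the controller usually catches up within seconds
        }
        fmt.Println("timed out waiting for the default ServiceAccount")
    }
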
	I0210 11:59:13.015929   11096 settings.go:142] acquiring lock: {Name:mk66ab2e0bae08b477c4ed9caa26e688e6ce3248 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 11:59:13.016104   11096 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0210 11:59:13.018233   11096 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\kubeconfig: {Name:mkb19224ea40e2aed3ce8c31a956f5aee129caa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 11:59:13.019400   11096 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0210 11:59:13.019494   11096 start.go:235] Will wait 6m0s for node &{Name: IP:172.29.136.201 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0210 11:59:13.020029   11096 config.go:182] Loaded profile config "multinode-032400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0210 11:59:13.019941   11096 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0210 11:59:13.020146   11096 addons.go:69] Setting storage-provisioner=true in profile "multinode-032400"
	I0210 11:59:13.020248   11096 addons.go:69] Setting default-storageclass=true in profile "multinode-032400"
	I0210 11:59:13.020248   11096 addons.go:238] Setting addon storage-provisioner=true in "multinode-032400"
	I0210 11:59:13.020286   11096 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-032400"
	I0210 11:59:13.020383   11096 host.go:66] Checking if "multinode-032400" exists ...
	I0210 11:59:13.020969   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400 ).state
	I0210 11:59:13.022298   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400 ).state
	I0210 11:59:13.053334   11096 out.go:177] * Verifying Kubernetes components...
	I0210 11:59:13.097208   11096 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 11:59:13.960997   11096 command_runner.go:130] > apiVersion: v1
	I0210 11:59:13.960997   11096 command_runner.go:130] > data:
	I0210 11:59:13.960997   11096 command_runner.go:130] >   Corefile: |
	I0210 11:59:13.960997   11096 command_runner.go:130] >     .:53 {
	I0210 11:59:13.962005   11096 command_runner.go:130] >         errors
	I0210 11:59:13.962005   11096 command_runner.go:130] >         health {
	I0210 11:59:13.962039   11096 command_runner.go:130] >            lameduck 5s
	I0210 11:59:13.962039   11096 command_runner.go:130] >         }
	I0210 11:59:13.962039   11096 command_runner.go:130] >         ready
	I0210 11:59:13.962039   11096 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0210 11:59:13.962039   11096 command_runner.go:130] >            pods insecure
	I0210 11:59:13.962039   11096 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0210 11:59:13.962039   11096 command_runner.go:130] >            ttl 30
	I0210 11:59:13.962039   11096 command_runner.go:130] >         }
	I0210 11:59:13.962039   11096 command_runner.go:130] >         prometheus :9153
	I0210 11:59:13.962039   11096 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0210 11:59:13.962145   11096 command_runner.go:130] >            max_concurrent 1000
	I0210 11:59:13.962145   11096 command_runner.go:130] >         }
	I0210 11:59:13.962145   11096 command_runner.go:130] >         cache 30 {
	I0210 11:59:13.962145   11096 command_runner.go:130] >            disable success cluster.local
	I0210 11:59:13.962145   11096 command_runner.go:130] >            disable denial cluster.local
	I0210 11:59:13.962145   11096 command_runner.go:130] >         }
	I0210 11:59:13.962145   11096 command_runner.go:130] >         loop
	I0210 11:59:13.962145   11096 command_runner.go:130] >         reload
	I0210 11:59:13.962305   11096 command_runner.go:130] >         loadbalance
	I0210 11:59:13.962305   11096 command_runner.go:130] >     }
	I0210 11:59:13.962305   11096 command_runner.go:130] > kind: ConfigMap
	I0210 11:59:13.962305   11096 command_runner.go:130] > metadata:
	I0210 11:59:13.962305   11096 command_runner.go:130] >   creationTimestamp: "2025-02-10T11:59:07Z"
	I0210 11:59:13.962305   11096 command_runner.go:130] >   name: coredns
	I0210 11:59:13.962305   11096 command_runner.go:130] >   namespace: kube-system
	I0210 11:59:13.962305   11096 command_runner.go:130] >   resourceVersion: "252"
	I0210 11:59:13.962406   11096 command_runner.go:130] >   uid: 15d50ce9-583c-4e01-ba1c-9e5b205735d6
	I0210 11:59:13.962701   11096 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.29.128.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0210 11:59:13.976977   11096 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0210 11:59:14.535462   11096 command_runner.go:130] > configmap/coredns replaced
	I0210 11:59:14.535558   11096 start.go:971] {"host.minikube.internal": 172.29.128.1} host record injected into CoreDNS's ConfigMap
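
The sed pipeline at 11:59:13.962 rewrites the coredns ConfigMap before replacing it: a hosts{} stanza mapping host.minikube.internal to the host gateway (172.29.128.1 in this run) is inserted ahead of the forward plugin, so the name resolves inside the cluster, and a log directive is inserted ahead of errors. The hosts insertion expressed in Go instead of sed (Corefile fragment abbreviated from the ConfigMap dump above):

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        corefile := ".:53 {\n    errors\n    forward . /etc/resolv.conf {\n       max_concurrent 1000\n    }\n}"
        stanza := "    hosts {\n       172.29.128.1 host.minikube.internal\n       fallthrough\n    }\n"
        var b strings.Builder
        for _, line := range strings.Split(corefile, "\n") {
            if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
                b.WriteString(stanza) // must precede forward so the record is served locally
            }
            b.WriteString(line + "\n")
        }
        fmt.Print(b.String())
    }
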
	I0210 11:59:14.535828   11096 loader.go:402] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0210 11:59:14.535828   11096 loader.go:402] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0210 11:59:14.535828   11096 kapi.go:59] client config for multinode-032400: &rest.Config{Host:"https://172.29.136.201:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-032400\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-032400\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2a2d300), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0210 11:59:14.535828   11096 kapi.go:59] client config for multinode-032400: &rest.Config{Host:"https://172.29.136.201:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-032400\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-032400\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2a2d300), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0210 11:59:14.535828   11096 cert_rotation.go:140] Starting client certificate rotation controller
	I0210 11:59:14.535828   11096 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0210 11:59:14.535828   11096 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0210 11:59:14.535828   11096 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0210 11:59:14.535828   11096 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0210 11:59:14.539159   11096 node_ready.go:35] waiting up to 6m0s for node "multinode-032400" to be "Ready" ...
	I0210 11:59:14.539291   11096 type.go:168] "Request Body" body=""
	I0210 11:59:14.539474   11096 deployment.go:95] "Request Body" body=""
	I0210 11:59:14.539474   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400
	I0210 11:59:14.539580   11096 round_trippers.go:476] Request Headers:
	I0210 11:59:14.539474   11096 round_trippers.go:470] GET https://172.29.136.201:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0210 11:59:14.539580   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:59:14.539676   11096 round_trippers.go:476] Request Headers:
	I0210 11:59:14.539716   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:59:14.539716   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:59:14.539676   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:59:14.556093   11096 round_trippers.go:581] Response Status: 200 OK in 16 milliseconds
	I0210 11:59:14.556167   11096 round_trippers.go:584] Response Headers:
	I0210 11:59:14.556167   11096 round_trippers.go:587]     Audit-Id: 7f766589-99a0-4174-bb55-a4257b9c9ca5
	I0210 11:59:14.556167   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 11:59:14.556167   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 11:59:14.556167   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 11:59:14.556167   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 11:59:14.556167   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 11:59:14 GMT
	I0210 11:59:14.561422   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 bc 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 03 33 33  35 38 00 42 08 08 86 d4  |1b262.3358.B....|
		00000060  a7 bd 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20925 chars]
	 >
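
The hex dumps here are not corruption: every request advertises Accept: application/vnd.kubernetes.protobuf,application/json, the apiserver answers in protobuf wire format (see the Content-Type header above), and the verbose logger prints the raw bytes. A bare net/http sketch of the same negotiation against this run's endpoint (the CA path appeared earlier in the log; the client cert and key paths are illustrative placeholders):

    package main

    import (
        "crypto/tls"
        "crypto/x509"
        "fmt"
        "io"
        "net/http"
        "os"
    )

    func main() {
        caPEM, err := os.ReadFile(`C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt`)
        if err != nil {
            panic(err)
        }
        pool := x509.NewCertPool()
        pool.AppendCertsFromPEM(caPEM)
        cert, err := tls.LoadX509KeyPair("client.crt", "client.key") // placeholder paths
        if err != nil {
            panic(err)
        }
        client := &http.Client{Transport: &http.Transport{
            TLSClientConfig: &tls.Config{RootCAs: pool, Certificates: []tls.Certificate{cert}},
        }}
        req, err := http.NewRequest("GET", "https://172.29.136.201:8443/api/v1/nodes/multinode-032400", nil)
        if err != nil {
            panic(err)
        }
        req.Header.Set("Accept", "application/vnd.kubernetes.protobuf,application/json")
        resp, err := client.Do(req)
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Println(resp.Header.Get("Content-Type"), len(body), "bytes") // protobuf, hence the hex dumps
    }
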
	I0210 11:59:14.602123   11096 round_trippers.go:581] Response Status: 200 OK in 62 milliseconds
	I0210 11:59:14.602123   11096 round_trippers.go:584] Response Headers:
	I0210 11:59:14.602123   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 11:59:14.602123   11096 round_trippers.go:587]     Content-Length: 144
	I0210 11:59:14.602123   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 11:59:14 GMT
	I0210 11:59:14.602123   11096 round_trippers.go:587]     Audit-Id: 098f6be8-b2cf-478b-ab5d-6045ab8b8573
	I0210 11:59:14.602123   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 11:59:14.602123   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 11:59:14.602123   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 11:59:14.602123   11096 deployment.go:95] "Response Body" body=<
		00000000  6b 38 73 00 0a 17 0a 0e  61 75 74 6f 73 63 61 6c  |k8s.....autoscal|
		00000010  69 6e 67 2f 76 31 12 05  53 63 61 6c 65 12 6d 0a  |ing/v1..Scale.m.|
		00000020  51 0a 07 63 6f 72 65 64  6e 73 12 00 1a 0b 6b 75  |Q..coredns....ku|
		00000030  62 65 2d 73 79 73 74 65  6d 22 00 2a 24 37 31 33  |be-system".*$713|
		00000040  62 35 64 37 38 2d 65 61  39 33 2d 34 32 65 33 2d  |b5d78-ea93-42e3-|
		00000050  38 66 37 32 2d 30 38 39  36 65 32 34 63 39 36 36  |8f72-0896e24c966|
		00000060  31 32 03 33 37 38 38 00  42 08 08 8b d4 a7 bd 06  |12.3788.B.......|
		00000070  10 00 12 02 08 02 1a 14  08 02 12 10 6b 38 73 2d  |............k8s-|
		00000080  61 70 70 3d 6b 75 62 65  2d 64 6e 73 1a 00 22 00  |app=kube-dns..".|
	 >
	I0210 11:59:14.602845   11096 deployment.go:111] "Request Body" body=<
		00000000  6b 38 73 00 0a 17 0a 0e  61 75 74 6f 73 63 61 6c  |k8s.....autoscal|
		00000010  69 6e 67 2f 76 31 12 05  53 63 61 6c 65 12 6d 0a  |ing/v1..Scale.m.|
		00000020  51 0a 07 63 6f 72 65 64  6e 73 12 00 1a 0b 6b 75  |Q..coredns....ku|
		00000030  62 65 2d 73 79 73 74 65  6d 22 00 2a 24 37 31 33  |be-system".*$713|
		00000040  62 35 64 37 38 2d 65 61  39 33 2d 34 32 65 33 2d  |b5d78-ea93-42e3-|
		00000050  38 66 37 32 2d 30 38 39  36 65 32 34 63 39 36 36  |8f72-0896e24c966|
		00000060  31 32 03 33 37 38 38 00  42 08 08 8b d4 a7 bd 06  |12.3788.B.......|
		00000070  10 00 12 02 08 01 1a 14  08 02 12 10 6b 38 73 2d  |............k8s-|
		00000080  61 70 70 3d 6b 75 62 65  2d 64 6e 73 1a 00 22 00  |app=kube-dns..".|
	 >
	I0210 11:59:14.602845   11096 round_trippers.go:470] PUT https://172.29.136.201:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0210 11:59:14.602845   11096 round_trippers.go:476] Request Headers:
	I0210 11:59:14.602845   11096 round_trippers.go:480]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 11:59:14.602845   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:59:14.602845   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:59:14.741014   11096 round_trippers.go:581] Response Status: 200 OK in 138 milliseconds
	I0210 11:59:14.741158   11096 round_trippers.go:584] Response Headers:
	I0210 11:59:14.741158   11096 round_trippers.go:587]     Audit-Id: c26a4bf3-dbdc-4106-88ed-c3cc0351b9ab
	I0210 11:59:14.741158   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 11:59:14.741158   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 11:59:14.741158   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 11:59:14.741158   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 11:59:14.741158   11096 round_trippers.go:587]     Content-Length: 144
	I0210 11:59:14.741158   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 11:59:14 GMT
	I0210 11:59:14.742098   11096 deployment.go:111] "Response Body" body=<
		00000000  6b 38 73 00 0a 17 0a 0e  61 75 74 6f 73 63 61 6c  |k8s.....autoscal|
		00000010  69 6e 67 2f 76 31 12 05  53 63 61 6c 65 12 6d 0a  |ing/v1..Scale.m.|
		00000020  51 0a 07 63 6f 72 65 64  6e 73 12 00 1a 0b 6b 75  |Q..coredns....ku|
		00000030  62 65 2d 73 79 73 74 65  6d 22 00 2a 24 37 31 33  |be-system".*$713|
		00000040  62 35 64 37 38 2d 65 61  39 33 2d 34 32 65 33 2d  |b5d78-ea93-42e3-|
		00000050  38 66 37 32 2d 30 38 39  36 65 32 34 63 39 36 36  |8f72-0896e24c966|
		00000060  31 32 03 33 38 33 38 00  42 08 08 8b d4 a7 bd 06  |12.3838.B.......|
		00000070  10 00 12 02 08 01 1a 14  08 02 12 10 6b 38 73 2d  |............k8s-|
		00000080  61 70 70 3d 6b 75 62 65  2d 64 6e 73 1a 00 22 00  |app=kube-dns..".|
	 >
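
This GET-then-PUT pair against .../deployments/coredns/scale is what the "rescaled to 1 replicas" message below summarizes: the Scale subresource is fetched, spec.replicas drops from 2 to 1 (visible as the 08 02 -> 08 01 byte change between the response and request bodies), and the object is written back. The equivalent using client-go's typed Scale helpers, assuming the kubeconfig this run just wrote:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", `C:\Users\jenkins.minikube5\minikube-integration\kubeconfig`)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ctx := context.Background()
        scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        scale.Spec.Replicas = 1 // a fresh multinode profile starts with a single CoreDNS replica
        if _, err := cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
            panic(err)
        }
        fmt.Println("coredns rescaled to 1 replica")
    }
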
	I0210 11:59:15.040757   11096 type.go:168] "Request Body" body=""
	I0210 11:59:15.040757   11096 deployment.go:95] "Request Body" body=""
	I0210 11:59:15.040757   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400
	I0210 11:59:15.040757   11096 round_trippers.go:470] GET https://172.29.136.201:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0210 11:59:15.040757   11096 round_trippers.go:476] Request Headers:
	I0210 11:59:15.040757   11096 round_trippers.go:476] Request Headers:
	I0210 11:59:15.040757   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:59:15.040757   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:59:15.040757   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:59:15.040757   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:59:15.053871   11096 round_trippers.go:581] Response Status: 200 OK in 13 milliseconds
	I0210 11:59:15.054008   11096 round_trippers.go:584] Response Headers:
	I0210 11:59:15.054008   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 11:59:15 GMT
	I0210 11:59:15.054008   11096 round_trippers.go:581] Response Status: 200 OK in 13 milliseconds
	I0210 11:59:15.054008   11096 round_trippers.go:587]     Audit-Id: 83277104-3477-4016-8001-e1dcb7b678a7
	I0210 11:59:15.054008   11096 round_trippers.go:584] Response Headers:
	I0210 11:59:15.054008   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 11:59:15.054008   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 11:59:15.054008   11096 round_trippers.go:587]     Audit-Id: 37c6a897-0fd4-46fa-a7a0-df5dedec0f7c
	I0210 11:59:15.054127   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 11:59:15.054197   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 11:59:15.054197   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 11:59:15.054197   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 11:59:15.054197   11096 round_trippers.go:587]     Content-Length: 144
	I0210 11:59:15.054127   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 11:59:15.054292   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 11:59:15.054197   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 11:59:15 GMT
	I0210 11:59:15.054454   11096 deployment.go:95] "Response Body" body=<
		00000000  6b 38 73 00 0a 17 0a 0e  61 75 74 6f 73 63 61 6c  |k8s.....autoscal|
		00000010  69 6e 67 2f 76 31 12 05  53 63 61 6c 65 12 6d 0a  |ing/v1..Scale.m.|
		00000020  51 0a 07 63 6f 72 65 64  6e 73 12 00 1a 0b 6b 75  |Q..coredns....ku|
		00000030  62 65 2d 73 79 73 74 65  6d 22 00 2a 24 37 31 33  |be-system".*$713|
		00000040  62 35 64 37 38 2d 65 61  39 33 2d 34 32 65 33 2d  |b5d78-ea93-42e3-|
		00000050  38 66 37 32 2d 30 38 39  36 65 32 34 63 39 36 36  |8f72-0896e24c966|
		00000060  31 32 03 33 38 39 38 00  42 08 08 8b d4 a7 bd 06  |12.3898.B.......|
		00000070  10 00 12 02 08 01 1a 14  08 02 12 10 6b 38 73 2d  |............k8s-|
		00000080  61 70 70 3d 6b 75 62 65  2d 64 6e 73 1a 00 22 00  |app=kube-dns..".|
	 >
	I0210 11:59:15.054454   11096 kapi.go:214] "coredns" deployment in "kube-system" namespace and "multinode-032400" context rescaled to 1 replicas
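
The GET/PUT pair on the coredns /scale subresource above is the rescale that kapi.go reports. A minimal client-go sketch of the same round-trip; "cs" is an illustrative, already-authenticated clientset, not minikube's actual variable:

    // Sketch of the coredns rescale: read the Scale subresource, set the
    // desired replica count, write it back.
    package main

    import (
    	"context"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    func rescaleCoreDNS(cs *kubernetes.Clientset, replicas int32) error {
    	ctx := context.Background()
    	// GET .../namespaces/kube-system/deployments/coredns/scale
    	scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
    	if err != nil {
    		return err
    	}
    	// PUT the Scale back with the new count (1 in the log above).
    	scale.Spec.Replicas = replicas
    	_, err = cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
    	return err
    }
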
	I0210 11:59:15.054864   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 bc 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 03 33 33  35 38 00 42 08 08 86 d4  |1b262.3358.B....|
		00000060  a7 bd 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20925 chars]
	 >
	I0210 11:59:15.194878   11096 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:59:15.194878   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:59:15.195397   11096 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:59:15.195397   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:59:15.196508   11096 loader.go:402] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0210 11:59:15.196508   11096 kapi.go:59] client config for multinode-032400: &rest.Config{Host:"https://172.29.136.201:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-032400\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-032400\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2a2d300), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
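
The loader.go and kapi.go lines show the kubeconfig on disk being turned into the rest.Config printed above. A short sketch of the equivalent client-go calls; only the kubeconfig path is taken from the log, and the function name is invented:

    // Sketch: build an authenticated clientset from the kubeconfig the log
    // just loaded.
    package main

    import (
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func clientFromKubeconfig() (*kubernetes.Clientset, error) {
    	kubeconfig := `C:\Users\jenkins.minikube5\minikube-integration\kubeconfig`
    	// Parses the file and resolves the current context, including the
    	// CertFile/KeyFile/CAFile paths printed in the rest.Config above.
    	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
    	if err != nil {
    		return nil, err
    	}
    	return kubernetes.NewForConfig(cfg)
    }
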
	I0210 11:59:15.197135   11096 addons.go:238] Setting addon default-storageclass=true in "multinode-032400"
	I0210 11:59:15.197663   11096 host.go:66] Checking if "multinode-032400" exists ...
	I0210 11:59:15.205782   11096 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0210 11:59:15.207125   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400 ).state
	I0210 11:59:15.247123   11096 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0210 11:59:15.247123   11096 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0210 11:59:15.248170   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400 ).state
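
The "[executing ==>]" lines are libmachine shelling out to PowerShell to read the VM state. A hedged Go sketch of that pattern; the cmdlet string is verbatim from the log, while the wrapper function is invented:

    // Sketch: run PowerShell non-interactively and read the Hyper-V VM
    // state from stdout.
    package main

    import (
    	"os/exec"
    	"strings"
    )

    func vmState(name string) (string, error) {
    	ps := `C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`
    	out, err := exec.Command(ps, "-NoProfile", "-NonInteractive",
    		`( Hyper-V\Get-VM `+name+` ).state`).Output()
    	if err != nil {
    		return "", err
    	}
    	// Corresponds to the "[stdout =====>] : Running" lines above.
    	return strings.TrimSpace(string(out)), nil
    }
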
	I0210 11:59:15.540138   11096 type.go:168] "Request Body" body=""
	I0210 11:59:15.540138   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400
	I0210 11:59:15.540138   11096 round_trippers.go:476] Request Headers:
	I0210 11:59:15.540138   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:59:15.540138   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:59:15.553997   11096 round_trippers.go:581] Response Status: 200 OK in 13 milliseconds
	I0210 11:59:15.554128   11096 round_trippers.go:584] Response Headers:
	I0210 11:59:15.554128   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 11:59:15.554128   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 11:59:15.554128   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 11:59:15.554128   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 11:59:15.554200   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 11:59:15 GMT
	I0210 11:59:15.554200   11096 round_trippers.go:587]     Audit-Id: 6c58de57-4095-4d86-b721-f76b86056182
	I0210 11:59:15.554537   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 bc 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 03 33 33  35 38 00 42 08 08 86 d4  |1b262.3358.B....|
		00000060  a7 bd 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20925 chars]
	 >
	I0210 11:59:16.039361   11096 type.go:168] "Request Body" body=""
	I0210 11:59:16.039361   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400
	I0210 11:59:16.039361   11096 round_trippers.go:476] Request Headers:
	I0210 11:59:16.039361   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:59:16.039361   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:59:16.044587   11096 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:59:16.044671   11096 round_trippers.go:584] Response Headers:
	I0210 11:59:16.044671   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 11:59:16.044671   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 11:59:16.044671   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 11:59:16 GMT
	I0210 11:59:16.044671   11096 round_trippers.go:587]     Audit-Id: e1689a36-43f5-418d-820f-4e680d91b0a5
	I0210 11:59:16.044671   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 11:59:16.044671   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 11:59:16.044671   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 bc 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 03 33 33  35 38 00 42 08 08 86 d4  |1b262.3358.B....|
		00000060  a7 bd 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20925 chars]
	 >
	I0210 11:59:16.539355   11096 type.go:168] "Request Body" body=""
	I0210 11:59:16.539355   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400
	I0210 11:59:16.539355   11096 round_trippers.go:476] Request Headers:
	I0210 11:59:16.539355   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:59:16.539355   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:59:16.544806   11096 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:59:16.544878   11096 round_trippers.go:584] Response Headers:
	I0210 11:59:16.544878   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 11:59:16.544878   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 11:59:16.544878   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 11:59:16.544878   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 11:59:16 GMT
	I0210 11:59:16.544878   11096 round_trippers.go:587]     Audit-Id: f8030a57-c9cf-4a11-ab25-9a4e658db6fd
	I0210 11:59:16.544878   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 11:59:16.545448   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 bc 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 03 33 33  35 38 00 42 08 08 86 d4  |1b262.3358.B....|
		00000060  a7 bd 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20925 chars]
	 >
	I0210 11:59:16.545448   11096 node_ready.go:53] node "multinode-032400" has status "Ready":"False"
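
node_ready.go is polling the node object on a ~500 ms cadence (visible in the timestamps) until the Ready condition flips to True. A minimal sketch of such a loop with client-go; the function name, parameters, and timeout handling are illustrative, not minikube's exact code:

    // Sketch of a node-readiness poll: fetch the node on a fixed cadence
    // and return once the Ready condition reports True.
    package main

    import (
    	"context"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) bool {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		node, err := cs.CoreV1().Nodes().Get(context.Background(), name, metav1.GetOptions{})
    		if err == nil {
    			for _, c := range node.Status.Conditions {
    				// Printed above as: has status "Ready":"False"
    				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
    					return true
    				}
    			}
    		}
    		time.Sleep(500 * time.Millisecond) // matches the cadence in the log
    	}
    	return false
    }
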
	I0210 11:59:17.040038   11096 type.go:168] "Request Body" body=""
	I0210 11:59:17.040038   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400
	I0210 11:59:17.040038   11096 round_trippers.go:476] Request Headers:
	I0210 11:59:17.040038   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:59:17.040038   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:59:17.103854   11096 round_trippers.go:581] Response Status: 200 OK in 63 milliseconds
	I0210 11:59:17.103980   11096 round_trippers.go:584] Response Headers:
	I0210 11:59:17.103980   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 11:59:17 GMT
	I0210 11:59:17.103980   11096 round_trippers.go:587]     Audit-Id: 35f5fb37-567d-47d2-815b-eef2c1599b6e
	I0210 11:59:17.103980   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 11:59:17.103980   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 11:59:17.103980   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 11:59:17.103980   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 11:59:17.104538   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 bc 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 03 33 33  35 38 00 42 08 08 86 d4  |1b262.3358.B....|
		00000060  a7 bd 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20925 chars]
	 >
	I0210 11:59:17.289208   11096 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:59:17.289388   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:59:17.289413   11096 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0210 11:59:17.289413   11096 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0210 11:59:17.289413   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400 ).state
	I0210 11:59:17.340459   11096 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:59:17.340523   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:59:17.340523   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400 ).networkadapters[0]).ipaddresses[0]
	I0210 11:59:17.539333   11096 type.go:168] "Request Body" body=""
	I0210 11:59:17.539333   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400
	I0210 11:59:17.539333   11096 round_trippers.go:476] Request Headers:
	I0210 11:59:17.539333   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:59:17.539333   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:59:17.543326   11096 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 11:59:17.543326   11096 round_trippers.go:584] Response Headers:
	I0210 11:59:17.543326   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 11:59:17.543326   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 11:59:17 GMT
	I0210 11:59:17.543326   11096 round_trippers.go:587]     Audit-Id: a2e9a5dd-3f3b-4c15-ba5a-20206eb6fab0
	I0210 11:59:17.543326   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 11:59:17.543326   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 11:59:17.543326   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 11:59:17.543326   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 bc 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 03 33 33  35 38 00 42 08 08 86 d4  |1b262.3358.B....|
		00000060  a7 bd 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20925 chars]
	 >
	I0210 11:59:18.039649   11096 type.go:168] "Request Body" body=""
	I0210 11:59:18.039649   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400
	I0210 11:59:18.039649   11096 round_trippers.go:476] Request Headers:
	I0210 11:59:18.039649   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:59:18.039649   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:59:18.044064   11096 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 11:59:18.044140   11096 round_trippers.go:584] Response Headers:
	I0210 11:59:18.044140   11096 round_trippers.go:587]     Audit-Id: 7bbf471d-a88a-490b-b909-a033a7e2e6e6
	I0210 11:59:18.044140   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 11:59:18.044140   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 11:59:18.044140   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 11:59:18.044140   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 11:59:18.044140   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 11:59:18 GMT
	I0210 11:59:18.044140   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 bc 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 03 33 33  35 38 00 42 08 08 86 d4  |1b262.3358.B....|
		00000060  a7 bd 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20925 chars]
	 >
	I0210 11:59:18.540376   11096 type.go:168] "Request Body" body=""
	I0210 11:59:18.540618   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400
	I0210 11:59:18.540618   11096 round_trippers.go:476] Request Headers:
	I0210 11:59:18.540618   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:59:18.540618   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:59:18.551111   11096 round_trippers.go:581] Response Status: 200 OK in 10 milliseconds
	I0210 11:59:18.551111   11096 round_trippers.go:584] Response Headers:
	I0210 11:59:18.551111   11096 round_trippers.go:587]     Audit-Id: 4785a830-4fd7-4b65-8fb5-6b81eff95509
	I0210 11:59:18.551111   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 11:59:18.551111   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 11:59:18.551111   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 11:59:18.551111   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 11:59:18.551111   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 11:59:18 GMT
	I0210 11:59:18.551111   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 bc 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 03 33 33  35 38 00 42 08 08 86 d4  |1b262.3358.B....|
		00000060  a7 bd 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20925 chars]
	 >
	I0210 11:59:18.551111   11096 node_ready.go:53] node "multinode-032400" has status "Ready":"False"
	I0210 11:59:19.041037   11096 type.go:168] "Request Body" body=""
	I0210 11:59:19.041037   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400
	I0210 11:59:19.041037   11096 round_trippers.go:476] Request Headers:
	I0210 11:59:19.041037   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:59:19.041037   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:59:19.226340   11096 round_trippers.go:581] Response Status: 200 OK in 185 milliseconds
	I0210 11:59:19.226473   11096 round_trippers.go:584] Response Headers:
	I0210 11:59:19.226473   11096 round_trippers.go:587]     Audit-Id: 5c296b73-19e8-4e05-94c1-8132b0052693
	I0210 11:59:19.226473   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 11:59:19.226473   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 11:59:19.226473   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 11:59:19.226473   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 11:59:19.226473   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 11:59:19 GMT
	I0210 11:59:19.226843   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 bc 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 03 33 33  35 38 00 42 08 08 86 d4  |1b262.3358.B....|
		00000060  a7 bd 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20925 chars]
	 >
	I0210 11:59:19.392708   11096 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:59:19.393688   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:59:19.393688   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400 ).networkadapters[0]).ipaddresses[0]
	I0210 11:59:19.540237   11096 type.go:168] "Request Body" body=""
	I0210 11:59:19.540237   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400
	I0210 11:59:19.540237   11096 round_trippers.go:476] Request Headers:
	I0210 11:59:19.540237   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:59:19.540237   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:59:19.543691   11096 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 11:59:19.543750   11096 round_trippers.go:584] Response Headers:
	I0210 11:59:19.543750   11096 round_trippers.go:587]     Audit-Id: c6e3261a-e117-44a2-8932-e28294907ba4
	I0210 11:59:19.543750   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 11:59:19.543750   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 11:59:19.543814   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 11:59:19.543814   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 11:59:19.543814   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 11:59:19 GMT
	I0210 11:59:19.544407   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 bc 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 03 33 33  35 38 00 42 08 08 86 d4  |1b262.3358.B....|
		00000060  a7 bd 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20925 chars]
	 >
	I0210 11:59:19.806027   11096 main.go:141] libmachine: [stdout =====>] : 172.29.136.201
	
	I0210 11:59:19.806027   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:59:19.807024   11096 sshutil.go:53] new ssh client: &{IP:172.29.136.201 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-032400\id_rsa Username:docker}
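
sshutil.go builds its SSH client from the fields printed on this line (IP, port 22, user "docker", the per-machine id_rsa). A sketch with golang.org/x/crypto/ssh under those assumptions; the InsecureIgnoreHostKey callback is a test-only shortcut, not necessarily minikube's verbatim host-key handling:

    // Sketch: dial the VM with key-based auth, mirroring the logged fields.
    package main

    import (
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func dialVM() (*ssh.Client, error) {
    	key, err := os.ReadFile(`C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-032400\id_rsa`)
    	if err != nil {
    		return nil, err
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		return nil, err
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VM shortcut
    	}
    	return ssh.Dial("tcp", "172.29.136.201:22", cfg)
    }
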
	I0210 11:59:20.043492   11096 type.go:168] "Request Body" body=""
	I0210 11:59:20.044558   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400
	I0210 11:59:20.044558   11096 round_trippers.go:476] Request Headers:
	I0210 11:59:20.044558   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:59:20.044558   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:59:20.058642   11096 round_trippers.go:581] Response Status: 200 OK in 14 milliseconds
	I0210 11:59:20.058642   11096 round_trippers.go:584] Response Headers:
	I0210 11:59:20.058642   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 11:59:20.058642   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 11:59:20.058642   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 11:59:20.058642   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 11:59:20.058642   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 11:59:20 GMT
	I0210 11:59:20.058642   11096 round_trippers.go:587]     Audit-Id: db2fa3d2-d223-4edd-b808-2dda1c614ed0
	I0210 11:59:20.060654   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 bc 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 03 33 33  35 38 00 42 08 08 86 d4  |1b262.3358.B....|
		00000060  a7 bd 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20925 chars]
	 >
	I0210 11:59:20.062657   11096 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
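
The pattern around this line is two ssh_runner steps: stream the manifest into the VM ("scp memory --> /etc/kubernetes/addons/..."), then kubectl-apply it in-VM. A rough sketch; minikube's runner speaks the scp protocol, so the tee pipe below is a simpler stand-in, and applyAddon is an invented helper:

    // Sketch: write manifest bytes to a path in the VM over SSH, then run
    // the apply command seen verbatim in the log.
    package main

    import (
    	"bytes"

    	"golang.org/x/crypto/ssh"
    )

    func applyAddon(client *ssh.Client, manifest []byte, dst string) error {
    	// Step 1: the "scp memory --> ..." equivalent.
    	sess, err := client.NewSession()
    	if err != nil {
    		return err
    	}
    	sess.Stdin = bytes.NewReader(manifest)
    	if err := sess.Run("sudo tee " + dst + " >/dev/null"); err != nil {
    		sess.Close()
    		return err
    	}
    	sess.Close()

    	// Step 2: apply with the in-VM kubeconfig and pinned kubectl.
    	sess, err = client.NewSession()
    	if err != nil {
    		return err
    	}
    	defer sess.Close()
    	return sess.Run("sudo KUBECONFIG=/var/lib/minikube/kubeconfig " +
    		"/var/lib/minikube/binaries/v1.32.1/kubectl apply -f " + dst)
    }
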
	I0210 11:59:20.540567   11096 type.go:168] "Request Body" body=""
	I0210 11:59:20.540567   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400
	I0210 11:59:20.540567   11096 round_trippers.go:476] Request Headers:
	I0210 11:59:20.540567   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:59:20.540567   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:59:20.543933   11096 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 11:59:20.543933   11096 round_trippers.go:584] Response Headers:
	I0210 11:59:20.543933   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 11:59:20 GMT
	I0210 11:59:20.543933   11096 round_trippers.go:587]     Audit-Id: 0e11e42b-ebf6-44e9-a42d-f027b67d5e65
	I0210 11:59:20.543933   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 11:59:20.543995   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 11:59:20.543995   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 11:59:20.543995   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 11:59:20.544475   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 bc 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 03 33 33  35 38 00 42 08 08 86 d4  |1b262.3358.B....|
		00000060  a7 bd 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20925 chars]
	 >
	I0210 11:59:20.653409   11096 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0210 11:59:20.653472   11096 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0210 11:59:20.653472   11096 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0210 11:59:20.653540   11096 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0210 11:59:20.653540   11096 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0210 11:59:20.653540   11096 command_runner.go:130] > pod/storage-provisioner created
	I0210 11:59:21.039838   11096 type.go:168] "Request Body" body=""
	I0210 11:59:21.039838   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400
	I0210 11:59:21.039838   11096 round_trippers.go:476] Request Headers:
	I0210 11:59:21.039838   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:59:21.039838   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:59:21.251509   11096 round_trippers.go:581] Response Status: 200 OK in 211 milliseconds
	I0210 11:59:21.251614   11096 round_trippers.go:584] Response Headers:
	I0210 11:59:21.251694   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 11:59:21.251694   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 11:59:21.251694   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 11:59:21 GMT
	I0210 11:59:21.251694   11096 round_trippers.go:587]     Audit-Id: 0c35f5cd-63ea-43db-aea9-2839b32aa3f2
	I0210 11:59:21.251694   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 11:59:21.251694   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 11:59:21.251963   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 bc 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 03 33 33  35 38 00 42 08 08 86 d4  |1b262.3358.B....|
		00000060  a7 bd 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20925 chars]
	 >
	I0210 11:59:21.252172   11096 node_ready.go:53] node "multinode-032400" has status "Ready":"False"
	I0210 11:59:21.540178   11096 type.go:168] "Request Body" body=""
	I0210 11:59:21.540178   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400
	I0210 11:59:21.540178   11096 round_trippers.go:476] Request Headers:
	I0210 11:59:21.540178   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:59:21.540178   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:59:21.544076   11096 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 11:59:21.544188   11096 round_trippers.go:584] Response Headers:
	I0210 11:59:21.544188   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 11:59:21.544188   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 11:59:21 GMT
	I0210 11:59:21.544188   11096 round_trippers.go:587]     Audit-Id: eb4b3f30-fe30-40f5-ba69-8b66debf96f2
	I0210 11:59:21.544188   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 11:59:21.544188   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 11:59:21.544188   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 11:59:21.544495   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 bc 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 03 33 33  35 38 00 42 08 08 86 d4  |1b262.3358.B....|
		00000060  a7 bd 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20925 chars]
	 >
	I0210 11:59:21.946781   11096 main.go:141] libmachine: [stdout =====>] : 172.29.136.201
	
	I0210 11:59:21.946781   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:59:21.946781   11096 sshutil.go:53] new ssh client: &{IP:172.29.136.201 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-032400\id_rsa Username:docker}
	I0210 11:59:22.039512   11096 type.go:168] "Request Body" body=""
	I0210 11:59:22.039615   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400
	I0210 11:59:22.039694   11096 round_trippers.go:476] Request Headers:
	I0210 11:59:22.039694   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:59:22.039754   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:59:22.042980   11096 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 11:59:22.043297   11096 round_trippers.go:584] Response Headers:
	I0210 11:59:22.043297   11096 round_trippers.go:587]     Audit-Id: 00493201-5296-4857-b205-28ddabeaf63f
	I0210 11:59:22.043368   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 11:59:22.043368   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 11:59:22.043368   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 11:59:22.043368   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 11:59:22.043368   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 11:59:22 GMT
	I0210 11:59:22.043432   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 bc 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 03 33 33  35 38 00 42 08 08 86 d4  |1b262.3358.B....|
		00000060  a7 bd 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20925 chars]
	 >
	I0210 11:59:22.083339   11096 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0210 11:59:22.539637   11096 type.go:168] "Request Body" body=""
	I0210 11:59:22.539637   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400
	I0210 11:59:22.539637   11096 round_trippers.go:476] Request Headers:
	I0210 11:59:22.539637   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:59:22.539637   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:59:22.556678   11096 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0210 11:59:22.562449   11096 type.go:204] "Request Body" body=""
	I0210 11:59:22.562545   11096 round_trippers.go:470] GET https://172.29.136.201:8443/apis/storage.k8s.io/v1/storageclasses
	I0210 11:59:22.562545   11096 round_trippers.go:476] Request Headers:
	I0210 11:59:22.562545   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:59:22.562545   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:59:22.564509   11096 round_trippers.go:581] Response Status: 200 OK in 24 milliseconds
	I0210 11:59:22.564509   11096 round_trippers.go:584] Response Headers:
	I0210 11:59:22.564509   11096 round_trippers.go:587]     Audit-Id: cb0c746f-2efb-4c7b-a5a0-ceaa11a38780
	I0210 11:59:22.564509   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 11:59:22.564509   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 11:59:22.564509   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 11:59:22.564509   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 11:59:22.564823   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 11:59:22 GMT
	I0210 11:59:22.564934   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 bc 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 03 33 33  35 38 00 42 08 08 86 d4  |1b262.3358.B....|
		00000060  a7 bd 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20925 chars]
	 >
	I0210 11:59:22.567153   11096 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 11:59:22.567153   11096 round_trippers.go:584] Response Headers:
	I0210 11:59:22.567153   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 11:59:22.567153   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 11:59:22.567153   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 11:59:22.567153   11096 round_trippers.go:587]     Content-Length: 957
	I0210 11:59:22.567153   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 11:59:22 GMT
	I0210 11:59:22.567153   11096 round_trippers.go:587]     Audit-Id: a17072ae-47e8-4eec-aa48-7ee627a5b40c
	I0210 11:59:22.567153   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 11:59:22.567413   11096 type.go:204] "Response Body" body=<
		00000000  6b 38 73 00 0a 25 0a 11  73 74 6f 72 61 67 65 2e  |k8s..%..storage.|
		00000010  6b 38 73 2e 69 6f 2f 76  31 12 10 53 74 6f 72 61  |k8s.io/v1..Stora|
		00000020  67 65 43 6c 61 73 73 4c  69 73 74 12 8b 07 0a 09  |geClassList.....|
		00000030  0a 00 12 03 34 31 37 1a  00 12 fd 06 0a cd 06 0a  |....417.........|
		00000040  08 73 74 61 6e 64 61 72  64 12 00 1a 00 22 00 2a  |.standard....".*|
		00000050  24 30 34 31 30 33 63 37  64 2d 65 37 30 65 2d 34  |$04103c7d-e70e-4|
		00000060  65 35 39 2d 39 39 34 64  2d 39 39 32 65 32 38 64  |e59-994d-992e28d|
		00000070  35 30 32 33 37 32 03 34  31 37 38 00 42 08 08 9a  |502372.4178.B...|
		00000080  d4 a7 bd 06 10 00 5a 2f  0a 1f 61 64 64 6f 6e 6d  |......Z/..addonm|
		00000090  61 6e 61 67 65 72 2e 6b  75 62 65 72 6e 65 74 65  |anager.kubernete|
		000000a0  73 2e 69 6f 2f 6d 6f 64  65 12 0c 45 6e 73 75 72  |s.io/mode..Ensur|
		000000b0  65 45 78 69 73 74 73 62  b7 02 0a 30 6b 75 62 65  |eExistsb...0kube|
		000000c0  63 74 6c 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |ctl.kubernetes. [truncated 3713 chars]
	 >
	I0210 11:59:22.567643   11096 type.go:267] "Request Body" body=<
		00000000  6b 38 73 00 0a 21 0a 11  73 74 6f 72 61 67 65 2e  |k8s..!..storage.|
		00000010  6b 38 73 2e 69 6f 2f 76  31 12 0c 53 74 6f 72 61  |k8s.io/v1..Stora|
		00000020  67 65 43 6c 61 73 73 12  fd 06 0a cd 06 0a 08 73  |geClass........s|
		00000030  74 61 6e 64 61 72 64 12  00 1a 00 22 00 2a 24 30  |tandard....".*$0|
		00000040  34 31 30 33 63 37 64 2d  65 37 30 65 2d 34 65 35  |4103c7d-e70e-4e5|
		00000050  39 2d 39 39 34 64 2d 39  39 32 65 32 38 64 35 30  |9-994d-992e28d50|
		00000060  32 33 37 32 03 34 31 37  38 00 42 08 08 9a d4 a7  |2372.4178.B.....|
		00000070  bd 06 10 00 5a 2f 0a 1f  61 64 64 6f 6e 6d 61 6e  |....Z/..addonman|
		00000080  61 67 65 72 2e 6b 75 62  65 72 6e 65 74 65 73 2e  |ager.kubernetes.|
		00000090  69 6f 2f 6d 6f 64 65 12  0c 45 6e 73 75 72 65 45  |io/mode..EnsureE|
		000000a0  78 69 73 74 73 62 b7 02  0a 30 6b 75 62 65 63 74  |xistsb...0kubect|
		000000b0  6c 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |l.kubernetes.io/|
		000000c0  6c 61 73 74 2d 61 70 70  6c 69 65 64 2d 63 6f 6e  |last-applied-co [truncated 3632 chars]
	 >
	I0210 11:59:22.567707   11096 round_trippers.go:470] PUT https://172.29.136.201:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0210 11:59:22.567707   11096 round_trippers.go:476] Request Headers:
	I0210 11:59:22.567707   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:59:22.567707   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:59:22.567763   11096 round_trippers.go:480]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 11:59:22.573007   11096 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:59:22.573434   11096 round_trippers.go:584] Response Headers:
	I0210 11:59:22.573434   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 11:59:22 GMT
	I0210 11:59:22.573493   11096 round_trippers.go:587]     Audit-Id: 639bcae5-b871-4a9b-ba62-508dfb66d780
	I0210 11:59:22.573493   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 11:59:22.573493   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 11:59:22.573493   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 11:59:22.573493   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 11:59:22.573493   11096 round_trippers.go:587]     Content-Length: 939
	I0210 11:59:22.573557   11096 type.go:267] "Response Body" body=<
		00000000  6b 38 73 00 0a 21 0a 11  73 74 6f 72 61 67 65 2e  |k8s..!..storage.|
		00000010  6b 38 73 2e 69 6f 2f 76  31 12 0c 53 74 6f 72 61  |k8s.io/v1..Stora|
		00000020  67 65 43 6c 61 73 73 12  fd 06 0a cd 06 0a 08 73  |geClass........s|
		00000030  74 61 6e 64 61 72 64 12  00 1a 00 22 00 2a 24 30  |tandard....".*$0|
		00000040  34 31 30 33 63 37 64 2d  65 37 30 65 2d 34 65 35  |4103c7d-e70e-4e5|
		00000050  39 2d 39 39 34 64 2d 39  39 32 65 32 38 64 35 30  |9-994d-992e28d50|
		00000060  32 33 37 32 03 34 31 37  38 00 42 08 08 9a d4 a7  |2372.4178.B.....|
		00000070  bd 06 10 00 5a 2f 0a 1f  61 64 64 6f 6e 6d 61 6e  |....Z/..addonman|
		00000080  61 67 65 72 2e 6b 75 62  65 72 6e 65 74 65 73 2e  |ager.kubernetes.|
		00000090  69 6f 2f 6d 6f 64 65 12  0c 45 6e 73 75 72 65 45  |io/mode..EnsureE|
		000000a0  78 69 73 74 73 62 b7 02  0a 30 6b 75 62 65 63 74  |xistsb...0kubect|
		000000b0  6c 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |l.kubernetes.io/|
		000000c0  6c 61 73 74 2d 61 70 70  6c 69 65 64 2d 63 6f 6e  |last-applied-co [truncated 3632 chars]
	 >
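
The list-then-PUT on /storageclasses above is the default-storageclass addon making sure "standard" is the default class. A sketch of a comparable round-trip with client-go; the annotation key is the upstream default-class marker, but the exact mutation minikube performs lives in its storageclass helpers, so treat the wiring as illustrative:

    // Sketch: fetch the "standard" StorageClass and update it in place,
    // matching the GET-list/PUT pair in the log.
    package main

    import (
    	"context"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    func ensureDefaultClass(cs *kubernetes.Clientset) error {
    	ctx := context.Background()
    	sc, err := cs.StorageV1().StorageClasses().Get(ctx, "standard", metav1.GetOptions{})
    	if err != nil {
    		return err
    	}
    	if sc.Annotations == nil {
    		sc.Annotations = map[string]string{}
    	}
    	sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
    	// This is the PUT .../storageclasses/standard logged above.
    	_, err = cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
    	return err
    }
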
	I0210 11:59:22.792195   11096 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0210 11:59:22.808750   11096 addons.go:514] duration metric: took 9.7891489s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0210 11:59:23.039754   11096 type.go:168] "Request Body" body=""
	I0210 11:59:23.039754   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400
	I0210 11:59:23.039754   11096 round_trippers.go:476] Request Headers:
	I0210 11:59:23.039754   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:59:23.039754   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:59:23.044179   11096 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 11:59:23.044254   11096 round_trippers.go:584] Response Headers:
	I0210 11:59:23.044254   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 11:59:23.044254   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 11:59:23.044254   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 11:59:23.044254   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 11:59:23 GMT
	I0210 11:59:23.044254   11096 round_trippers.go:587]     Audit-Id: ce228585-0332-4b7f-8c05-d7f38c4ba5e2
	I0210 11:59:23.044254   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 11:59:23.045142   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 bc 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 03 33 33  35 38 00 42 08 08 86 d4  |1b262.3358.B....|
		00000060  a7 bd 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20925 chars]
	 >
	I0210 11:59:23.539589   11096 type.go:168] "Request Body" body=""
	I0210 11:59:23.540194   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400
	I0210 11:59:23.540280   11096 round_trippers.go:476] Request Headers:
	I0210 11:59:23.540280   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:59:23.540280   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:59:23.543624   11096 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 11:59:23.543624   11096 round_trippers.go:584] Response Headers:
	I0210 11:59:23.544005   11096 round_trippers.go:587]     Audit-Id: e59d93ea-0d4b-49a1-80d6-755e4122cba0
	I0210 11:59:23.544005   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 11:59:23.544005   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 11:59:23.544005   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 11:59:23.544005   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 11:59:23.544005   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 11:59:23 GMT
	I0210 11:59:23.544249   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 bc 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 03 33 33  35 38 00 42 08 08 86 d4  |1b262.3358.B....|
		00000060  a7 bd 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20925 chars]
	 >
	I0210 11:59:23.544480   11096 node_ready.go:53] node "multinode-032400" has status "Ready":"False"
	I0210 11:59:24.039674   11096 type.go:168] "Request Body" body=""
	I0210 11:59:24.039674   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400
	I0210 11:59:24.039674   11096 round_trippers.go:476] Request Headers:
	I0210 11:59:24.039674   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:59:24.039674   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:59:24.043887   11096 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 11:59:24.043887   11096 round_trippers.go:584] Response Headers:
	I0210 11:59:24.043979   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 11:59:24.043979   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 11:59:24.043979   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 11:59:24.043979   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 11:59:24.043979   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 11:59:24 GMT
	I0210 11:59:24.043979   11096 round_trippers.go:587]     Audit-Id: a8900653-7cbb-4cec-b09a-4f475f1e2a48
	I0210 11:59:24.044117   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 bc 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 03 33 33  35 38 00 42 08 08 86 d4  |1b262.3358.B....|
		00000060  a7 bd 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20925 chars]
	 >
	I0210 11:59:24.539991   11096 type.go:168] "Request Body" body=""
	I0210 11:59:24.540207   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400
	I0210 11:59:24.540336   11096 round_trippers.go:476] Request Headers:
	I0210 11:59:24.540336   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:59:24.540336   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:59:24.772729   11096 round_trippers.go:581] Response Status: 200 OK in 232 milliseconds
	I0210 11:59:24.772729   11096 round_trippers.go:584] Response Headers:
	I0210 11:59:24.772729   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 11:59:24 GMT
	I0210 11:59:24.772729   11096 round_trippers.go:587]     Audit-Id: 419c699f-e01d-4797-b2f2-263165f0a78b
	I0210 11:59:24.772729   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 11:59:24.772729   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 11:59:24.772729   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 11:59:24.772729   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 11:59:24.774022   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 bc 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 03 33 33  35 38 00 42 08 08 86 d4  |1b262.3358.B....|
		00000060  a7 bd 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20925 chars]
	 >
	I0210 11:59:25.039400   11096 type.go:168] "Request Body" body=""
	I0210 11:59:25.039400   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400
	I0210 11:59:25.039400   11096 round_trippers.go:476] Request Headers:
	I0210 11:59:25.039400   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:59:25.039400   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:59:25.043997   11096 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 11:59:25.044346   11096 round_trippers.go:584] Response Headers:
	I0210 11:59:25.044346   11096 round_trippers.go:587]     Audit-Id: be2ce48e-1bb0-47e8-b7d8-d62dc03d1c31
	I0210 11:59:25.044346   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 11:59:25.044346   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 11:59:25.044346   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 11:59:25.044346   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 11:59:25.044346   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 11:59:25 GMT
	I0210 11:59:25.044700   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 bc 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 03 33 33  35 38 00 42 08 08 86 d4  |1b262.3358.B....|
		00000060  a7 bd 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20925 chars]
	 >
	I0210 11:59:25.539945   11096 type.go:168] "Request Body" body=""
	I0210 11:59:25.539945   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400
	I0210 11:59:25.539945   11096 round_trippers.go:476] Request Headers:
	I0210 11:59:25.539945   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:59:25.539945   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:59:25.544351   11096 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 11:59:25.544968   11096 round_trippers.go:584] Response Headers:
	I0210 11:59:25.544968   11096 round_trippers.go:587]     Audit-Id: bf90a7fd-1fbe-468f-8dca-a6e6915283ed
	I0210 11:59:25.544968   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 11:59:25.545034   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 11:59:25.545034   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 11:59:25.545034   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 11:59:25.545034   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 11:59:25 GMT
	I0210 11:59:25.546062   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 bc 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 03 33 33  35 38 00 42 08 08 86 d4  |1b262.3358.B....|
		00000060  a7 bd 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20925 chars]
	 >
	I0210 11:59:25.546229   11096 node_ready.go:53] node "multinode-032400" has status "Ready":"False"
	I0210 11:59:26.040062   11096 type.go:168] "Request Body" body=""
	I0210 11:59:26.040240   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400
	I0210 11:59:26.040240   11096 round_trippers.go:476] Request Headers:
	I0210 11:59:26.040240   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:59:26.040240   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:59:26.048484   11096 round_trippers.go:581] Response Status: 200 OK in 8 milliseconds
	I0210 11:59:26.048484   11096 round_trippers.go:584] Response Headers:
	I0210 11:59:26.048484   11096 round_trippers.go:587]     Audit-Id: f56738bb-1dcd-48b8-ab1a-619633b5035a
	I0210 11:59:26.048484   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 11:59:26.048484   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 11:59:26.048484   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 11:59:26.048484   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 11:59:26.048484   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 11:59:26 GMT
	I0210 11:59:26.048484   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 bc 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 03 33 33  35 38 00 42 08 08 86 d4  |1b262.3358.B....|
		00000060  a7 bd 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20925 chars]
	 >
	I0210 11:59:26.540176   11096 type.go:168] "Request Body" body=""
	I0210 11:59:26.540176   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400
	I0210 11:59:26.540176   11096 round_trippers.go:476] Request Headers:
	I0210 11:59:26.540176   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:59:26.540176   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:59:26.544218   11096 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 11:59:26.544218   11096 round_trippers.go:584] Response Headers:
	I0210 11:59:26.544218   11096 round_trippers.go:587]     Audit-Id: 1c31ef8f-a1e1-42c2-bf91-f055f31d4c6e
	I0210 11:59:26.544218   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 11:59:26.544218   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 11:59:26.544218   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 11:59:26.544218   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 11:59:26.544218   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 11:59:26 GMT
	I0210 11:59:26.544218   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 bc 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 03 33 33  35 38 00 42 08 08 86 d4  |1b262.3358.B....|
		00000060  a7 bd 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20925 chars]
	 >
	I0210 11:59:27.039557   11096 type.go:168] "Request Body" body=""
	I0210 11:59:27.039557   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400
	I0210 11:59:27.039557   11096 round_trippers.go:476] Request Headers:
	I0210 11:59:27.039557   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:59:27.039557   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:59:27.056604   11096 round_trippers.go:581] Response Status: 200 OK in 17 milliseconds
	I0210 11:59:27.057655   11096 round_trippers.go:584] Response Headers:
	I0210 11:59:27.057655   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 11:59:27.057655   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 11:59:27.057655   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 11:59:27.057655   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 11:59:27.057655   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 11:59:27 GMT
	I0210 11:59:27.057655   11096 round_trippers.go:587]     Audit-Id: 2bfb2a29-b797-42b7-8df9-9bd012c1bc12
	I0210 11:59:27.057855   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 bc 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 03 33 33  35 38 00 42 08 08 86 d4  |1b262.3358.B....|
		00000060  a7 bd 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20925 chars]
	 >
	I0210 11:59:27.540143   11096 type.go:168] "Request Body" body=""
	I0210 11:59:27.540143   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400
	I0210 11:59:27.540143   11096 round_trippers.go:476] Request Headers:
	I0210 11:59:27.540143   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:59:27.540143   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:59:27.544512   11096 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 11:59:27.544512   11096 round_trippers.go:584] Response Headers:
	I0210 11:59:27.544512   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 11:59:27.544512   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 11:59:27.544582   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 11:59:27 GMT
	I0210 11:59:27.544582   11096 round_trippers.go:587]     Audit-Id: aa5e9042-2f9c-4a67-9a77-5deff3029727
	I0210 11:59:27.544582   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 11:59:27.544582   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 11:59:27.545247   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 bc 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 03 33 33  35 38 00 42 08 08 86 d4  |1b262.3358.B....|
		00000060  a7 bd 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20925 chars]
	 >
	I0210 11:59:28.040542   11096 type.go:168] "Request Body" body=""
	I0210 11:59:28.040542   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400
	I0210 11:59:28.040542   11096 round_trippers.go:476] Request Headers:
	I0210 11:59:28.040542   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:59:28.040542   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:59:28.049942   11096 round_trippers.go:581] Response Status: 200 OK in 9 milliseconds
	I0210 11:59:28.049942   11096 round_trippers.go:584] Response Headers:
	I0210 11:59:28.050067   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 11:59:28.050067   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 11:59:28 GMT
	I0210 11:59:28.050067   11096 round_trippers.go:587]     Audit-Id: 6150dbcd-bbad-4f9f-9914-16631d839f67
	I0210 11:59:28.050067   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 11:59:28.050067   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 11:59:28.050067   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 11:59:28.050493   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 bc 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 03 33 33  35 38 00 42 08 08 86 d4  |1b262.3358.B....|
		00000060  a7 bd 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20925 chars]
	 >
	I0210 11:59:28.050688   11096 node_ready.go:53] node "multinode-032400" has status "Ready":"False"
	I0210 11:59:28.539721   11096 type.go:168] "Request Body" body=""
	I0210 11:59:28.539721   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400
	I0210 11:59:28.539721   11096 round_trippers.go:476] Request Headers:
	I0210 11:59:28.539721   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:59:28.539721   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:59:28.666008   11096 round_trippers.go:581] Response Status: 200 OK in 126 milliseconds
	I0210 11:59:28.666092   11096 round_trippers.go:584] Response Headers:
	I0210 11:59:28.666092   11096 round_trippers.go:587]     Audit-Id: 49bbe5e0-432d-477d-9e3b-3d650664b2b6
	I0210 11:59:28.666092   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 11:59:28.666092   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 11:59:28.666092   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 11:59:28.666092   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 11:59:28.666092   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 11:59:28 GMT
	I0210 11:59:28.666509   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 bc 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 03 33 33  35 38 00 42 08 08 86 d4  |1b262.3358.B....|
		00000060  a7 bd 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20925 chars]
	 >
	I0210 11:59:29.039441   11096 type.go:168] "Request Body" body=""
	I0210 11:59:29.039918   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400
	I0210 11:59:29.039997   11096 round_trippers.go:476] Request Headers:
	I0210 11:59:29.039997   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:59:29.039997   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:59:29.044600   11096 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 11:59:29.044600   11096 round_trippers.go:584] Response Headers:
	I0210 11:59:29.044600   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 11:59:29.044600   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 11:59:29.044600   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 11:59:29 GMT
	I0210 11:59:29.044600   11096 round_trippers.go:587]     Audit-Id: a03940ef-e67a-4700-afba-deb8a57eaddd
	I0210 11:59:29.044600   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 11:59:29.044600   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 11:59:29.046589   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 bc 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 03 33 33  35 38 00 42 08 08 86 d4  |1b262.3358.B....|
		00000060  a7 bd 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20925 chars]
	 >
	I0210 11:59:29.540092   11096 type.go:168] "Request Body" body=""
	I0210 11:59:29.540092   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400
	I0210 11:59:29.540092   11096 round_trippers.go:476] Request Headers:
	I0210 11:59:29.540092   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:59:29.540092   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:59:29.543564   11096 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 11:59:29.543564   11096 round_trippers.go:584] Response Headers:
	I0210 11:59:29.543564   11096 round_trippers.go:587]     Audit-Id: 852bc4c7-1b53-4820-91c1-4571fc7d3292
	I0210 11:59:29.543564   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 11:59:29.543564   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 11:59:29.543564   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 11:59:29.543564   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 11:59:29.543564   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 11:59:29 GMT
	I0210 11:59:29.545174   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 bc 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 03 33 33  35 38 00 42 08 08 86 d4  |1b262.3358.B....|
		00000060  a7 bd 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20925 chars]
	 >
	I0210 11:59:30.040065   11096 type.go:168] "Request Body" body=""
	I0210 11:59:30.040065   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400
	I0210 11:59:30.040065   11096 round_trippers.go:476] Request Headers:
	I0210 11:59:30.040065   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:59:30.040065   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:59:30.049356   11096 round_trippers.go:581] Response Status: 200 OK in 9 milliseconds
	I0210 11:59:30.049356   11096 round_trippers.go:584] Response Headers:
	I0210 11:59:30.049356   11096 round_trippers.go:587]     Audit-Id: f81b9126-9bf3-4d79-9382-279d62af62e7
	I0210 11:59:30.049356   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 11:59:30.049356   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 11:59:30.049356   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 11:59:30.049356   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 11:59:30.049356   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 11:59:30 GMT
	I0210 11:59:30.049912   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 bc 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 03 33 33  35 38 00 42 08 08 86 d4  |1b262.3358.B....|
		00000060  a7 bd 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20925 chars]
	 >
	I0210 11:59:30.540315   11096 type.go:168] "Request Body" body=""
	I0210 11:59:30.540315   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400
	I0210 11:59:30.540315   11096 round_trippers.go:476] Request Headers:
	I0210 11:59:30.540315   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:59:30.540315   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:59:30.544919   11096 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 11:59:30.544946   11096 round_trippers.go:584] Response Headers:
	I0210 11:59:30.544946   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 11:59:30.544946   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 11:59:30.544946   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 11:59:30.544946   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 11:59:30.544946   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 11:59:30 GMT
	I0210 11:59:30.544946   11096 round_trippers.go:587]     Audit-Id: e12f1f53-6075-4b5e-a48b-fe9aa95c3416
	I0210 11:59:30.545267   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 bc 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 03 33 33  35 38 00 42 08 08 86 d4  |1b262.3358.B....|
		00000060  a7 bd 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20925 chars]
	 >
	I0210 11:59:30.545503   11096 node_ready.go:53] node "multinode-032400" has status "Ready":"False"
	I0210 11:59:31.039681   11096 type.go:168] "Request Body" body=""
	I0210 11:59:31.039681   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400
	I0210 11:59:31.039681   11096 round_trippers.go:476] Request Headers:
	I0210 11:59:31.039681   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:59:31.039681   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:59:31.047757   11096 round_trippers.go:581] Response Status: 200 OK in 7 milliseconds
	I0210 11:59:31.047832   11096 round_trippers.go:584] Response Headers:
	I0210 11:59:31.047832   11096 round_trippers.go:587]     Audit-Id: 54536f9a-3e64-4899-9c0e-b7e46335275b
	I0210 11:59:31.047832   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 11:59:31.047832   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 11:59:31.047832   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 11:59:31.047832   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 11:59:31.047832   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 11:59:31 GMT
	I0210 11:59:31.048775   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 bc 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 03 33 33  35 38 00 42 08 08 86 d4  |1b262.3358.B....|
		00000060  a7 bd 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20925 chars]
	 >
	I0210 11:59:31.540354   11096 type.go:168] "Request Body" body=""
	I0210 11:59:31.540809   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400
	I0210 11:59:31.540809   11096 round_trippers.go:476] Request Headers:
	I0210 11:59:31.540880   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:59:31.540880   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:59:31.544533   11096 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 11:59:31.544533   11096 round_trippers.go:584] Response Headers:
	I0210 11:59:31.544533   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 11:59:31.544533   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 11:59:31.544533   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 11:59:31 GMT
	I0210 11:59:31.544533   11096 round_trippers.go:587]     Audit-Id: 07199ee4-76bd-4a45-abc7-0f2081c29fd2
	I0210 11:59:31.544533   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 11:59:31.544533   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 11:59:31.544930   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 bc 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 03 33 33  35 38 00 42 08 08 86 d4  |1b262.3358.B....|
		00000060  a7 bd 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20925 chars]
	 >
	I0210 11:59:32.041039   11096 type.go:168] "Request Body" body=""
	I0210 11:59:32.041190   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400
	I0210 11:59:32.041190   11096 round_trippers.go:476] Request Headers:
	I0210 11:59:32.041190   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:59:32.041190   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:59:32.045107   11096 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 11:59:32.045107   11096 round_trippers.go:584] Response Headers:
	I0210 11:59:32.045107   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 11:59:32.045107   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 11:59:32.045107   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 11:59:32.045107   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 11:59:32 GMT
	I0210 11:59:32.045107   11096 round_trippers.go:587]     Audit-Id: cbfc91ca-1e14-4f8f-8073-6acd99d91100
	I0210 11:59:32.045107   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 11:59:32.045638   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 bc 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 03 33 33  35 38 00 42 08 08 86 d4  |1b262.3358.B....|
		00000060  a7 bd 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20925 chars]
	 >
	I0210 11:59:32.540034   11096 type.go:168] "Request Body" body=""
	I0210 11:59:32.540034   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400
	I0210 11:59:32.540034   11096 round_trippers.go:476] Request Headers:
	I0210 11:59:32.540034   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:59:32.540034   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:59:32.544912   11096 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 11:59:32.544912   11096 round_trippers.go:584] Response Headers:
	I0210 11:59:32.544912   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 11:59:32.544912   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 11:59:32.544912   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 11:59:32.544912   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 11:59:32 GMT
	I0210 11:59:32.544912   11096 round_trippers.go:587]     Audit-Id: ab4bbb0e-a2df-42f5-b896-8501d3c814ff
	I0210 11:59:32.544912   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 11:59:32.545376   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 bc 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 03 33 33  35 38 00 42 08 08 86 d4  |1b262.3358.B....|
		00000060  a7 bd 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20925 chars]
	 >
	I0210 11:59:32.545567   11096 node_ready.go:53] node "multinode-032400" has status "Ready":"False"
	I0210 11:59:33.040978   11096 type.go:168] "Request Body" body=""
	I0210 11:59:33.041177   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400
	I0210 11:59:33.041177   11096 round_trippers.go:476] Request Headers:
	I0210 11:59:33.041177   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:59:33.041177   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:59:33.045458   11096 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 11:59:33.045537   11096 round_trippers.go:584] Response Headers:
	I0210 11:59:33.045537   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 11:59:33.045537   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 11:59:33.045537   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 11:59:33.045537   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 11:59:33 GMT
	I0210 11:59:33.045537   11096 round_trippers.go:587]     Audit-Id: 258364b4-6bb8-4300-8bf5-595cb0c146b0
	I0210 11:59:33.045537   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 11:59:33.045909   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 bc 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 03 33 33  35 38 00 42 08 08 86 d4  |1b262.3358.B....|
		00000060  a7 bd 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20925 chars]
	 >
	I0210 11:59:33.540258   11096 type.go:168] "Request Body" body=""
	I0210 11:59:33.540258   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400
	I0210 11:59:33.540258   11096 round_trippers.go:476] Request Headers:
	I0210 11:59:33.540258   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:59:33.540258   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:59:33.544822   11096 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 11:59:33.544822   11096 round_trippers.go:584] Response Headers:
	I0210 11:59:33.544822   11096 round_trippers.go:587]     Audit-Id: 336ab280-6769-44bc-befd-84c4a4918f75
	I0210 11:59:33.544822   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 11:59:33.544822   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 11:59:33.545044   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 11:59:33.545044   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 11:59:33.545044   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 11:59:33 GMT
	I0210 11:59:33.545543   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 bc 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 03 33 33  35 38 00 42 08 08 86 d4  |1b262.3358.B....|
		00000060  a7 bd 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20925 chars]
	 >
	I0210 11:59:34.040081   11096 type.go:168] "Request Body" body=""
	I0210 11:59:34.040250   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400
	I0210 11:59:34.040250   11096 round_trippers.go:476] Request Headers:
	I0210 11:59:34.040334   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:59:34.040334   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:59:34.045249   11096 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 11:59:34.045249   11096 round_trippers.go:584] Response Headers:
	I0210 11:59:34.045249   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 11:59:34.045249   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 11:59:34.045249   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 11:59:34 GMT
	I0210 11:59:34.045249   11096 round_trippers.go:587]     Audit-Id: 77c2bde0-a39c-430e-bff8-c8a456306071
	I0210 11:59:34.045249   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 11:59:34.045249   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 11:59:34.045249   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 bc 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 03 33 33  35 38 00 42 08 08 86 d4  |1b262.3358.B....|
		00000060  a7 bd 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20925 chars]
	 >
	I0210 11:59:34.540529   11096 type.go:168] "Request Body" body=""
	I0210 11:59:34.540682   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400
	I0210 11:59:34.540682   11096 round_trippers.go:476] Request Headers:
	I0210 11:59:34.540682   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:59:34.540682   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:59:34.547080   11096 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0210 11:59:34.547080   11096 round_trippers.go:584] Response Headers:
	I0210 11:59:34.547080   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 11:59:34 GMT
	I0210 11:59:34.547080   11096 round_trippers.go:587]     Audit-Id: 97f4af6f-b6c3-49b5-b86c-9f7b921998f4
	I0210 11:59:34.547080   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 11:59:34.547080   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 11:59:34.547080   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 11:59:34.547080   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 11:59:34.547712   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 bc 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 03 33 33  35 38 00 42 08 08 86 d4  |1b262.3358.B....|
		00000060  a7 bd 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20925 chars]
	 >
	I0210 11:59:34.547919   11096 node_ready.go:53] node "multinode-032400" has status "Ready":"False"
	I0210 11:59:35.040064   11096 type.go:168] "Request Body" body=""
	I0210 11:59:35.040751   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400
	I0210 11:59:35.040751   11096 round_trippers.go:476] Request Headers:
	I0210 11:59:35.040751   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:59:35.040859   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:59:35.045529   11096 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 11:59:35.045618   11096 round_trippers.go:584] Response Headers:
	I0210 11:59:35.045618   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 11:59:35.045618   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 11:59:35.045618   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 11:59:35.045618   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 11:59:35 GMT
	I0210 11:59:35.045618   11096 round_trippers.go:587]     Audit-Id: 917992e4-c93b-4499-9ee0-d5f1a0fc5ea2
	I0210 11:59:35.045618   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 11:59:35.047165   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 bc 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 03 33 33  35 38 00 42 08 08 86 d4  |1b262.3358.B....|
		00000060  a7 bd 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20925 chars]
	 >
	I0210 11:59:35.540107   11096 type.go:168] "Request Body" body=""
	I0210 11:59:35.540544   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400
	I0210 11:59:35.540544   11096 round_trippers.go:476] Request Headers:
	I0210 11:59:35.540544   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:59:35.540616   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:59:35.545024   11096 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 11:59:35.545024   11096 round_trippers.go:584] Response Headers:
	I0210 11:59:35.545024   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 11:59:35.545024   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 11:59:35.545024   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 11:59:35.545024   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 11:59:35.545024   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 11:59:35 GMT
	I0210 11:59:35.545024   11096 round_trippers.go:587]     Audit-Id: d5565d41-f1f7-4b7d-b27a-8c415362bb1a
	I0210 11:59:35.546462   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 bc 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 03 33 33  35 38 00 42 08 08 86 d4  |1b262.3358.B....|
		00000060  a7 bd 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20925 chars]
	 >
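Each response above also carries a distinct Audit-Id header; the API server generates one per request and records the same ID in its audit log, so any individual poll here can be correlated with the server-side audit trail if needed.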
	I0210 11:59:36.040253   11096 type.go:168] "Request Body" body=""
	I0210 11:59:36.040253   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400
	I0210 11:59:36.040253   11096 round_trippers.go:476] Request Headers:
	I0210 11:59:36.040253   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:59:36.040253   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:59:36.044720   11096 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 11:59:36.044720   11096 round_trippers.go:584] Response Headers:
	I0210 11:59:36.044720   11096 round_trippers.go:587]     Audit-Id: d03e0727-444c-4148-89f8-2beb30b3a74f
	I0210 11:59:36.044720   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 11:59:36.044720   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 11:59:36.044720   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 11:59:36.044720   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 11:59:36.044720   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 11:59:36 GMT
	I0210 11:59:36.045899   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 bc 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 03 33 33  35 38 00 42 08 08 86 d4  |1b262.3358.B....|
		00000060  a7 bd 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20925 chars]
	 >
	I0210 11:59:36.541052   11096 type.go:168] "Request Body" body=""
	I0210 11:59:36.541129   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400
	I0210 11:59:36.541129   11096 round_trippers.go:476] Request Headers:
	I0210 11:59:36.541129   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:59:36.541206   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:59:36.544890   11096 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 11:59:36.545722   11096 round_trippers.go:584] Response Headers:
	I0210 11:59:36.545722   11096 round_trippers.go:587]     Audit-Id: bf288d2c-d583-4f45-9d2d-c00569faefcf
	I0210 11:59:36.545722   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 11:59:36.545722   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 11:59:36.545722   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 11:59:36.545722   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 11:59:36.545722   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 11:59:36 GMT
	I0210 11:59:36.546443   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 bc 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 03 33 33  35 38 00 42 08 08 86 d4  |1b262.3358.B....|
		00000060  a7 bd 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20925 chars]
	 >
	I0210 11:59:37.039767   11096 type.go:168] "Request Body" body=""
	I0210 11:59:37.040365   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400
	I0210 11:59:37.040365   11096 round_trippers.go:476] Request Headers:
	I0210 11:59:37.040365   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:59:37.040365   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:59:37.045099   11096 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 11:59:37.045099   11096 round_trippers.go:584] Response Headers:
	I0210 11:59:37.045099   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 11:59:37 GMT
	I0210 11:59:37.045099   11096 round_trippers.go:587]     Audit-Id: 7e607dc0-c1bc-49dd-8269-eec9e2349be0
	I0210 11:59:37.045099   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 11:59:37.045099   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 11:59:37.045099   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 11:59:37.045099   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 11:59:37.045547   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 bc 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 03 33 33  35 38 00 42 08 08 86 d4  |1b262.3358.B....|
		00000060  a7 bd 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20925 chars]
	 >
	I0210 11:59:37.045771   11096 node_ready.go:53] node "multinode-032400" has status "Ready":"False"
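The Accept: application/vnd.kubernetes.protobuf,application/json header on every request shows the client negotiating protobuf first with JSON as a fallback, which is why the bodies above arrive as protobuf rather than JSON. With client-go this is a two-field rest.Config setting; a sketch (kubeconfigPath and the function name are assumptions):

    package client

    import (
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // newProtobufClient builds a clientset that negotiates protobuf first,
    // matching the Accept header seen in the log. Sketch only.
    func newProtobufClient(kubeconfigPath string) (*kubernetes.Clientset, error) {
    	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfigPath)
    	if err != nil {
    		return nil, err
    	}
    	cfg.AcceptContentTypes = "application/vnd.kubernetes.protobuf,application/json"
    	cfg.ContentType = "application/vnd.kubernetes.protobuf"
    	return kubernetes.NewForConfig(cfg)
    }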
	I0210 11:59:37.540245   11096 type.go:168] "Request Body" body=""
	I0210 11:59:37.540397   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400
	I0210 11:59:37.540397   11096 round_trippers.go:476] Request Headers:
	I0210 11:59:37.540397   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:59:37.540397   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:59:37.545182   11096 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 11:59:37.545269   11096 round_trippers.go:584] Response Headers:
	I0210 11:59:37.545269   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 11:59:37.545269   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 11:59:37 GMT
	I0210 11:59:37.545332   11096 round_trippers.go:587]     Audit-Id: 9ecb9efe-b706-492e-a9ab-2e1d288b9f4b
	I0210 11:59:37.545332   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 11:59:37.545332   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 11:59:37.545332   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 11:59:37.545658   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 bc 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 03 33 33  35 38 00 42 08 08 86 d4  |1b262.3358.B....|
		00000060  a7 bd 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20925 chars]
	 >
	I0210 11:59:38.040464   11096 type.go:168] "Request Body" body=""
	I0210 11:59:38.040464   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400
	I0210 11:59:38.040464   11096 round_trippers.go:476] Request Headers:
	I0210 11:59:38.040464   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:59:38.040464   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:59:38.044328   11096 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 11:59:38.044328   11096 round_trippers.go:584] Response Headers:
	I0210 11:59:38.044328   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 11:59:38 GMT
	I0210 11:59:38.044328   11096 round_trippers.go:587]     Audit-Id: 53879003-7516-474b-9a6a-9cb323681275
	I0210 11:59:38.044328   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 11:59:38.044328   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 11:59:38.044328   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 11:59:38.044328   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 11:59:38.044955   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 bc 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 03 33 33  35 38 00 42 08 08 86 d4  |1b262.3358.B....|
		00000060  a7 bd 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20925 chars]
	 >
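The X-Kubernetes-Pf-Flowschema-Uid and X-Kubernetes-Pf-Prioritylevel-Uid headers repeated on every response come from API Priority and Fairness: they are the UIDs of the FlowSchema and PriorityLevelConfiguration that classified the request. Their values stay constant across this whole poll because every request matches the same flow schema.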
	I0210 11:59:38.540682   11096 type.go:168] "Request Body" body=""
	I0210 11:59:38.540682   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400
	I0210 11:59:38.540682   11096 round_trippers.go:476] Request Headers:
	I0210 11:59:38.540682   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:59:38.540682   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:59:38.544876   11096 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 11:59:38.545384   11096 round_trippers.go:584] Response Headers:
	I0210 11:59:38.545384   11096 round_trippers.go:587]     Audit-Id: bc5774f3-a8de-4010-ba64-016d85203fba
	I0210 11:59:38.545384   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 11:59:38.545384   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 11:59:38.545384   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 11:59:38.545384   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 11:59:38.545384   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 11:59:38 GMT
	I0210 11:59:38.545700   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d1 23 0a d4 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..#.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 03 34 32  39 38 00 42 08 08 86 d4  |1b262.4298.B....|
		00000060  a7 bd 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 21641 chars]
	 >
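Reading the dump by eye: the length-prefixed string after the 36-character UID is ObjectMeta's resourceVersion (protobuf field 6), and it has changed from "335" in the earlier responses to "429" here (ASCII pane "1b262.3358" vs "1b262.4298" at offset 0x50), so the Node object was updated between polls — consistent with the Ready flip logged shortly below at resourceVersion "432".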
	I0210 11:59:39.039819   11096 type.go:168] "Request Body" body=""
	I0210 11:59:39.039819   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400
	I0210 11:59:39.039819   11096 round_trippers.go:476] Request Headers:
	I0210 11:59:39.039819   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:59:39.039819   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:59:39.044172   11096 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 11:59:39.044172   11096 round_trippers.go:584] Response Headers:
	I0210 11:59:39.044172   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 11:59:39.044172   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 11:59:39.044172   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 11:59:39 GMT
	I0210 11:59:39.044172   11096 round_trippers.go:587]     Audit-Id: 0da3a3c9-98e9-49ef-acdf-5f9d91c99537
	I0210 11:59:39.044172   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 11:59:39.045090   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 11:59:39.045775   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d1 23 0a d4 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..#.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 03 34 32  39 38 00 42 08 08 86 d4  |1b262.4298.B....|
		00000060  a7 bd 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 21641 chars]
	 >
	I0210 11:59:39.046043   11096 node_ready.go:53] node "multinode-032400" has status "Ready":"False"
	I0210 11:59:39.540264   11096 type.go:168] "Request Body" body=""
	I0210 11:59:39.540264   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400
	I0210 11:59:39.540264   11096 round_trippers.go:476] Request Headers:
	I0210 11:59:39.540264   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:59:39.540264   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:59:39.545158   11096 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 11:59:39.545158   11096 round_trippers.go:584] Response Headers:
	I0210 11:59:39.545158   11096 round_trippers.go:587]     Audit-Id: ae85e88a-c572-496a-bd03-dd3babbfee40
	I0210 11:59:39.545158   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 11:59:39.545158   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 11:59:39.545233   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 11:59:39.545233   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 11:59:39.545233   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 11:59:39 GMT
	I0210 11:59:39.545808   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d1 23 0a d4 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..#.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 03 34 32  39 38 00 42 08 08 86 d4  |1b262.4298.B....|
		00000060  a7 bd 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 21641 chars]
	 >
	I0210 11:59:40.041047   11096 type.go:168] "Request Body" body=""
	I0210 11:59:40.041393   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400
	I0210 11:59:40.041393   11096 round_trippers.go:476] Request Headers:
	I0210 11:59:40.041393   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:59:40.041393   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:59:40.045738   11096 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 11:59:40.045814   11096 round_trippers.go:584] Response Headers:
	I0210 11:59:40.045814   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 11:59:40.045814   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 11:59:40.045880   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 11:59:40.045880   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 11:59:40.045880   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 11:59:40 GMT
	I0210 11:59:40.045880   11096 round_trippers.go:587]     Audit-Id: 931dab78-91fe-4974-9769-9c62e6950ec3
	I0210 11:59:40.046180   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d1 23 0a d4 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..#.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 03 34 32  39 38 00 42 08 08 86 d4  |1b262.4298.B....|
		00000060  a7 bd 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 21641 chars]
	 >
	I0210 11:59:40.539626   11096 type.go:168] "Request Body" body=""
	I0210 11:59:40.539626   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400
	I0210 11:59:40.539626   11096 round_trippers.go:476] Request Headers:
	I0210 11:59:40.539626   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:59:40.539626   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:59:40.543914   11096 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 11:59:40.543914   11096 round_trippers.go:584] Response Headers:
	I0210 11:59:40.543914   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 11:59:40.543914   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 11:59:40.543914   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 11:59:40.543914   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 11:59:40.543914   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 11:59:40 GMT
	I0210 11:59:40.543914   11096 round_trippers.go:587]     Audit-Id: d47aaf9b-4dd0-432f-aa1f-2cb565531fbb
	I0210 11:59:40.544237   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d1 23 0a d4 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..#.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 03 34 32  39 38 00 42 08 08 86 d4  |1b262.4298.B....|
		00000060  a7 bd 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 21641 chars]
	 >
	I0210 11:59:41.039984   11096 type.go:168] "Request Body" body=""
	I0210 11:59:41.039984   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400
	I0210 11:59:41.039984   11096 round_trippers.go:476] Request Headers:
	I0210 11:59:41.039984   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:59:41.039984   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:59:41.145608   11096 round_trippers.go:581] Response Status: 200 OK in 105 milliseconds
	I0210 11:59:41.145608   11096 round_trippers.go:584] Response Headers:
	I0210 11:59:41.145608   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 11:59:41.145608   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 11:59:41.145608   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 11:59:41.145608   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 11:59:41.145608   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 11:59:41 GMT
	I0210 11:59:41.145608   11096 round_trippers.go:587]     Audit-Id: b1fb4eae-12a5-4460-bc4f-246770a01e9f
	I0210 11:59:41.146086   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d8 22 0a 8a 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 03 34 33  32 38 00 42 08 08 86 d4  |1b262.4328.B....|
		00000060  a7 bd 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 21016 chars]
	 >
	I0210 11:59:41.146086   11096 node_ready.go:49] node "multinode-032400" has status "Ready":"True"
	I0210 11:59:41.146086   11096 node_ready.go:38] duration metric: took 26.6065879s for node "multinode-032400" to be "Ready" ...
	I0210 11:59:41.146086   11096 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
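The "extra waiting" step begins with a single unfiltered list of the kube-system namespace (the GET /api/v1/namespaces/kube-system/pods below), after which each matching pod is tracked individually. A sketch of the selection using the label keys from the log line above — here the filtering is pushed server-side via label selectors, which is a variant for illustration, not necessarily how minikube filters:

    package readiness

    import (
    	"context"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // listCriticalPods gathers kube-system pods carrying the system-critical
    // labels named in the log line above. Sketch under assumed names.
    func listCriticalPods(ctx context.Context, cs kubernetes.Interface) ([]corev1.Pod, error) {
    	selectors := []string{
    		"k8s-app=kube-dns",
    		"component=etcd",
    		"component=kube-apiserver",
    		"component=kube-controller-manager",
    		"k8s-app=kube-proxy",
    		"component=kube-scheduler",
    	}
    	var pods []corev1.Pod
    	for _, sel := range selectors {
    		list, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: sel})
    		if err != nil {
    			return nil, err
    		}
    		pods = append(pods, list.Items...)
    	}
    	return pods, nil
    }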
	I0210 11:59:41.146086   11096 type.go:204] "Request Body" body=""
	I0210 11:59:41.146086   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/namespaces/kube-system/pods
	I0210 11:59:41.146086   11096 round_trippers.go:476] Request Headers:
	I0210 11:59:41.146086   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:59:41.146086   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:59:41.355331   11096 round_trippers.go:581] Response Status: 200 OK in 209 milliseconds
	I0210 11:59:41.355331   11096 round_trippers.go:584] Response Headers:
	I0210 11:59:41.355331   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 11:59:41.355331   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 11:59:41 GMT
	I0210 11:59:41.355331   11096 round_trippers.go:587]     Audit-Id: 3b585719-8cef-427b-9606-f65d93a32ce2
	I0210 11:59:41.355331   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 11:59:41.355331   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 11:59:41.355331   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 11:59:41.357088   11096 type.go:204] "Response Body" body=<
		00000000  6b 38 73 00 0a 0d 0a 02  76 31 12 07 50 6f 64 4c  |k8s.....v1..PodL|
		00000010  69 73 74 12 9f b2 02 0a  09 0a 00 12 03 34 33 35  |ist..........435|
		00000020  1a 00 12 90 1d 0a e0 13  0a 18 63 6f 72 65 64 6e  |..........coredn|
		00000030  73 2d 36 36 38 64 36 62  66 39 62 63 2d 77 38 72  |s-668d6bf9bc-w8r|
		00000040  72 39 12 13 63 6f 72 65  64 6e 73 2d 36 36 38 64  |r9..coredns-668d|
		00000050  36 62 66 39 62 63 2d 1a  0b 6b 75 62 65 2d 73 79  |6bf9bc-..kube-sy|
		00000060  73 74 65 6d 22 00 2a 24  65 34 35 61 33 37 62 66  |stem".*$e45a37bf|
		00000070  2d 65 37 64 61 2d 34 31  32 39 2d 62 62 37 65 2d  |-e7da-4129-bb7e-|
		00000080  38 62 65 37 64 62 65 39  33 65 30 39 32 03 34 33  |8be7dbe93e092.43|
		00000090  34 38 00 42 08 08 92 d4  a7 bd 06 10 00 5a 13 0a  |48.B.........Z..|
		000000a0  07 6b 38 73 2d 61 70 70  12 08 6b 75 62 65 2d 64  |.k8s-app..kube-d|
		000000b0  6e 73 5a 1f 0a 11 70 6f  64 2d 74 65 6d 70 6c 61  |nsZ...pod-templa|
		000000c0  74 65 2d 68 61 73 68 12  0a 36 36 38 64 36 62 66  |te-hash..668d6b [truncated 192678 chars]
	 >
	I0210 11:59:41.357352   11096 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-w8rr9" in "kube-system" namespace to be "Ready" ...
	I0210 11:59:41.357352   11096 type.go:168] "Request Body" body=""
	I0210 11:59:41.357876   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-w8rr9
	I0210 11:59:41.357876   11096 round_trippers.go:476] Request Headers:
	I0210 11:59:41.357953   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:59:41.357953   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:59:42.947660   11096 round_trippers.go:581] Response Status: 200 OK in 1589 milliseconds
	I0210 11:59:42.947660   11096 round_trippers.go:584] Response Headers:
	I0210 11:59:42.947660   11096 round_trippers.go:587]     Content-Length: 3750
	I0210 11:59:42.947660   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 11:59:42 GMT
	I0210 11:59:42.947660   11096 round_trippers.go:587]     Audit-Id: 68f432ef-3029-4412-9c86-8d48525abf5b
	I0210 11:59:42.947660   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 11:59:42.947660   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 11:59:42.947660   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 11:59:42.947660   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 11:59:42.947660   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  90 1d 0a e0 13 0a 18 63  6f 72 65 64 6e 73 2d 36  |.......coredns-6|
		00000020  36 38 64 36 62 66 39 62  63 2d 77 38 72 72 39 12  |68d6bf9bc-w8rr9.|
		00000030  13 63 6f 72 65 64 6e 73  2d 36 36 38 64 36 62 66  |.coredns-668d6bf|
		00000040  39 62 63 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |9bc-..kube-syste|
		00000050  6d 22 00 2a 24 65 34 35  61 33 37 62 66 2d 65 37  |m".*$e45a37bf-e7|
		00000060  64 61 2d 34 31 32 39 2d  62 62 37 65 2d 38 62 65  |da-4129-bb7e-8be|
		00000070  37 64 62 65 39 33 65 30  39 32 03 34 33 34 38 00  |7dbe93e092.4348.|
		00000080  42 08 08 92 d4 a7 bd 06  10 00 5a 13 0a 07 6b 38  |B.........Z...k8|
		00000090  73 2d 61 70 70 12 08 6b  75 62 65 2d 64 6e 73 5a  |s-app..kube-dnsZ|
		000000a0  1f 0a 11 70 6f 64 2d 74  65 6d 70 6c 61 74 65 2d  |...pod-template-|
		000000b0  68 61 73 68 12 0a 36 36  38 64 36 62 66 39 62 63  |hash..668d6bf9bc|
		000000c0  6a 53 0a 0a 52 65 70 6c  69 63 61 53 65 74 1a 12  |jS..ReplicaSet. [truncated 17531 chars]
	 >
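Aside on sizes: this response reports Content-Length: 3750 (bytes), while the dump claims "[truncated 17531 chars]". The two are consistent: the dump is a formatted hexdump, so each 16-byte row expands to roughly 79 characters (offset, hex pairs, ASCII pane) — 3750 bytes is about 235 rows, or roughly 18,500 characters, of which the 13 rows shown account for about 1,000.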
	I0210 11:59:42.947660   11096 type.go:168] "Request Body" body=""
	I0210 11:59:42.948318   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400
	I0210 11:59:42.948318   11096 round_trippers.go:476] Request Headers:
	I0210 11:59:42.948318   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:59:42.948318   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:59:43.155925   11096 round_trippers.go:581] Response Status: 200 OK in 207 milliseconds
	I0210 11:59:43.156454   11096 round_trippers.go:584] Response Headers:
	I0210 11:59:43.156454   11096 round_trippers.go:587]     Audit-Id: ff58e186-9adf-4cac-91ee-2a8afa248c14
	I0210 11:59:43.156454   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 11:59:43.156454   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 11:59:43.156454   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 11:59:43.156454   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 11:59:43.156454   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 11:59:43 GMT
	I0210 11:59:43.158650   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d8 22 0a 8a 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 03 34 33  32 38 00 42 08 08 86 d4  |1b262.4328.B....|
		00000060  a7 bd 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 21016 chars]
	 >
	I0210 11:59:43.159069   11096 type.go:168] "Request Body" body=""
	I0210 11:59:43.159188   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-w8rr9
	I0210 11:59:43.159188   11096 round_trippers.go:476] Request Headers:
	I0210 11:59:43.159253   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:59:43.159253   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:59:43.163542   11096 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 11:59:43.163542   11096 round_trippers.go:584] Response Headers:
	I0210 11:59:43.164005   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 11:59:43.164005   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 11:59:43.164005   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 11:59:43.164005   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 11:59:43 GMT
	I0210 11:59:43.164005   11096 round_trippers.go:587]     Audit-Id: f4da86b1-0647-4283-ad7e-86d1fcf14c2a
	I0210 11:59:43.164005   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 11:59:43.164087   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  d9 26 0a 8b 19 0a 18 63  6f 72 65 64 6e 73 2d 36  |.&.....coredns-6|
		00000020  36 38 64 36 62 66 39 62  63 2d 77 38 72 72 39 12  |68d6bf9bc-w8rr9.|
		00000030  13 63 6f 72 65 64 6e 73  2d 36 36 38 64 36 62 66  |.coredns-668d6bf|
		00000040  39 62 63 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |9bc-..kube-syste|
		00000050  6d 22 00 2a 24 65 34 35  61 33 37 62 66 2d 65 37  |m".*$e45a37bf-e7|
		00000060  64 61 2d 34 31 32 39 2d  62 62 37 65 2d 38 62 65  |da-4129-bb7e-8be|
		00000070  37 64 62 65 39 33 65 30  39 32 03 34 33 38 38 00  |7dbe93e092.4388.|
		00000080  42 08 08 92 d4 a7 bd 06  10 00 5a 13 0a 07 6b 38  |B.........Z...k8|
		00000090  73 2d 61 70 70 12 08 6b  75 62 65 2d 64 6e 73 5a  |s-app..kube-dnsZ|
		000000a0  1f 0a 11 70 6f 64 2d 74  65 6d 70 6c 61 74 65 2d  |...pod-template-|
		000000b0  68 61 73 68 12 0a 36 36  38 64 36 62 66 39 62 63  |hash..668d6bf9bc|
		000000c0  6a 53 0a 0a 52 65 70 6c  69 63 61 53 65 74 1a 12  |jS..ReplicaSet. [truncated 23544 chars]
	 >
	I0210 11:59:43.164087   11096 type.go:168] "Request Body" body=""
	I0210 11:59:43.164087   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400
	I0210 11:59:43.164087   11096 round_trippers.go:476] Request Headers:
	I0210 11:59:43.164087   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:59:43.164087   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:59:43.167002   11096 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0210 11:59:43.167477   11096 round_trippers.go:584] Response Headers:
	I0210 11:59:43.167477   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 11:59:43.167477   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 11:59:43.167477   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 11:59:43.167477   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 11:59:43 GMT
	I0210 11:59:43.167477   11096 round_trippers.go:587]     Audit-Id: 0180b704-f4ae-4226-ab0c-ee4cdfc9b6d5
	I0210 11:59:43.167477   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 11:59:43.167731   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d8 22 0a 8a 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 03 34 33  32 38 00 42 08 08 86 d4  |1b262.4328.B....|
		00000060  a7 bd 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 21016 chars]
	 >
	I0210 11:59:43.358544   11096 type.go:168] "Request Body" body=""
	I0210 11:59:43.359028   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-w8rr9
	I0210 11:59:43.359028   11096 round_trippers.go:476] Request Headers:
	I0210 11:59:43.359148   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:59:43.359148   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:59:43.363664   11096 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 11:59:43.363777   11096 round_trippers.go:584] Response Headers:
	I0210 11:59:43.363777   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 11:59:43.363777   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 11:59:43 GMT
	I0210 11:59:43.363777   11096 round_trippers.go:587]     Audit-Id: dd813609-a68c-41aa-9213-829313019dcf
	I0210 11:59:43.363777   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 11:59:43.363777   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 11:59:43.363777   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 11:59:43.364089   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  d9 26 0a 8b 19 0a 18 63  6f 72 65 64 6e 73 2d 36  |.&.....coredns-6|
		00000020  36 38 64 36 62 66 39 62  63 2d 77 38 72 72 39 12  |68d6bf9bc-w8rr9.|
		00000030  13 63 6f 72 65 64 6e 73  2d 36 36 38 64 36 62 66  |.coredns-668d6bf|
		00000040  39 62 63 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |9bc-..kube-syste|
		00000050  6d 22 00 2a 24 65 34 35  61 33 37 62 66 2d 65 37  |m".*$e45a37bf-e7|
		00000060  64 61 2d 34 31 32 39 2d  62 62 37 65 2d 38 62 65  |da-4129-bb7e-8be|
		00000070  37 64 62 65 39 33 65 30  39 32 03 34 33 38 38 00  |7dbe93e092.4388.|
		00000080  42 08 08 92 d4 a7 bd 06  10 00 5a 13 0a 07 6b 38  |B.........Z...k8|
		00000090  73 2d 61 70 70 12 08 6b  75 62 65 2d 64 6e 73 5a  |s-app..kube-dnsZ|
		000000a0  1f 0a 11 70 6f 64 2d 74  65 6d 70 6c 61 74 65 2d  |...pod-template-|
		000000b0  68 61 73 68 12 0a 36 36  38 64 36 62 66 39 62 63  |hash..668d6bf9bc|
		000000c0  6a 53 0a 0a 52 65 70 6c  69 63 61 53 65 74 1a 12  |jS..ReplicaSet. [truncated 23544 chars]
	 >
	I0210 11:59:43.364300   11096 type.go:168] "Request Body" body=""
	I0210 11:59:43.364300   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400
	I0210 11:59:43.364300   11096 round_trippers.go:476] Request Headers:
	I0210 11:59:43.364300   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:59:43.364300   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:59:43.366916   11096 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0210 11:59:43.367696   11096 round_trippers.go:584] Response Headers:
	I0210 11:59:43.367696   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 11:59:43.367696   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 11:59:43 GMT
	I0210 11:59:43.367696   11096 round_trippers.go:587]     Audit-Id: e783b494-9536-46d3-abb6-46b51b7ae52f
	I0210 11:59:43.367696   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 11:59:43.367696   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 11:59:43.367696   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 11:59:43.368055   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d8 22 0a 8a 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 03 34 33  32 38 00 42 08 08 86 d4  |1b262.4328.B....|
		00000060  a7 bd 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 21016 chars]
	 >
	I0210 11:59:43.368200   11096 pod_ready.go:103] pod "coredns-668d6bf9bc-w8rr9" in "kube-system" namespace has status "Ready":"False"
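Note the pairing above: each pod_ready cycle issues a GET for the coredns pod and then a GET for the node it runs on, presumably so a node falling out of Ready can be noticed while the pod wait is still in progress. The "Ready":"False" verdict itself reads the pod's PodReady condition; a sketch of that check (corev1 is k8s.io/api/core/v1, as in the earlier sketches):

    // isPodReady mirrors the condition check implied by pod_ready: the
    // PodReady condition must be True. Sketch only.
    func isPodReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }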
	I0210 11:59:43.858218   11096 type.go:168] "Request Body" body=""
	I0210 11:59:43.858838   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-w8rr9
	I0210 11:59:43.858838   11096 round_trippers.go:476] Request Headers:
	I0210 11:59:43.858838   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:59:43.858838   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:59:43.862928   11096 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 11:59:43.862928   11096 round_trippers.go:584] Response Headers:
	I0210 11:59:43.863033   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 11:59:43.863033   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 11:59:43.863033   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 11:59:43 GMT
	I0210 11:59:43.863033   11096 round_trippers.go:587]     Audit-Id: dfc76f16-c551-4660-b799-cc8668330046
	I0210 11:59:43.863033   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 11:59:43.863033   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 11:59:43.863271   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  d9 26 0a 8b 19 0a 18 63  6f 72 65 64 6e 73 2d 36  |.&.....coredns-6|
		00000020  36 38 64 36 62 66 39 62  63 2d 77 38 72 72 39 12  |68d6bf9bc-w8rr9.|
		00000030  13 63 6f 72 65 64 6e 73  2d 36 36 38 64 36 62 66  |.coredns-668d6bf|
		00000040  39 62 63 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |9bc-..kube-syste|
		00000050  6d 22 00 2a 24 65 34 35  61 33 37 62 66 2d 65 37  |m".*$e45a37bf-e7|
		00000060  64 61 2d 34 31 32 39 2d  62 62 37 65 2d 38 62 65  |da-4129-bb7e-8be|
		00000070  37 64 62 65 39 33 65 30  39 32 03 34 33 38 38 00  |7dbe93e092.4388.|
		00000080  42 08 08 92 d4 a7 bd 06  10 00 5a 13 0a 07 6b 38  |B.........Z...k8|
		00000090  73 2d 61 70 70 12 08 6b  75 62 65 2d 64 6e 73 5a  |s-app..kube-dnsZ|
		000000a0  1f 0a 11 70 6f 64 2d 74  65 6d 70 6c 61 74 65 2d  |...pod-template-|
		000000b0  68 61 73 68 12 0a 36 36  38 64 36 62 66 39 62 63  |hash..668d6bf9bc|
		000000c0  6a 53 0a 0a 52 65 70 6c  69 63 61 53 65 74 1a 12  |jS..ReplicaSet. [truncated 23544 chars]
	 >
	I0210 11:59:43.863982   11096 type.go:168] "Request Body" body=""
	I0210 11:59:43.864010   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400
	I0210 11:59:43.864010   11096 round_trippers.go:476] Request Headers:
	I0210 11:59:43.864010   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:59:43.864010   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:59:43.867192   11096 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0210 11:59:43.867192   11096 round_trippers.go:584] Response Headers:
	I0210 11:59:43.867192   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 11:59:43.867192   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 11:59:43.867192   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 11:59:43.867192   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 11:59:43.867192   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 11:59:43 GMT
	I0210 11:59:43.867192   11096 round_trippers.go:587]     Audit-Id: 731029fd-3b11-478d-95dc-c4c673537ae3
	I0210 11:59:43.867192   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d8 22 0a 8a 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 03 34 33  32 38 00 42 08 08 86 d4  |1b262.4328.B....|
		00000060  a7 bd 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 21016 chars]
	 >
	I0210 11:59:44.358055   11096 type.go:168] "Request Body" body=""
	I0210 11:59:44.358055   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-w8rr9
	I0210 11:59:44.358055   11096 round_trippers.go:476] Request Headers:
	I0210 11:59:44.358055   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:59:44.358055   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:59:44.361882   11096 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 11:59:44.361882   11096 round_trippers.go:584] Response Headers:
	I0210 11:59:44.361882   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 11:59:44.361882   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 11:59:44 GMT
	I0210 11:59:44.361882   11096 round_trippers.go:587]     Audit-Id: 64592f22-8d81-4d80-b656-7e954ab81adb
	I0210 11:59:44.361882   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 11:59:44.361882   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 11:59:44.361882   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 11:59:44.362583   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  d9 26 0a 8b 19 0a 18 63  6f 72 65 64 6e 73 2d 36  |.&.....coredns-6|
		00000020  36 38 64 36 62 66 39 62  63 2d 77 38 72 72 39 12  |68d6bf9bc-w8rr9.|
		00000030  13 63 6f 72 65 64 6e 73  2d 36 36 38 64 36 62 66  |.coredns-668d6bf|
		00000040  39 62 63 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |9bc-..kube-syste|
		00000050  6d 22 00 2a 24 65 34 35  61 33 37 62 66 2d 65 37  |m".*$e45a37bf-e7|
		00000060  64 61 2d 34 31 32 39 2d  62 62 37 65 2d 38 62 65  |da-4129-bb7e-8be|
		00000070  37 64 62 65 39 33 65 30  39 32 03 34 33 38 38 00  |7dbe93e092.4388.|
		00000080  42 08 08 92 d4 a7 bd 06  10 00 5a 13 0a 07 6b 38  |B.........Z...k8|
		00000090  73 2d 61 70 70 12 08 6b  75 62 65 2d 64 6e 73 5a  |s-app..kube-dnsZ|
		000000a0  1f 0a 11 70 6f 64 2d 74  65 6d 70 6c 61 74 65 2d  |...pod-template-|
		000000b0  68 61 73 68 12 0a 36 36  38 64 36 62 66 39 62 63  |hash..668d6bf9bc|
		000000c0  6a 53 0a 0a 52 65 70 6c  69 63 61 53 65 74 1a 12  |jS..ReplicaSet. [truncated 23544 chars]
	 >
	I0210 11:59:44.362583   11096 type.go:168] "Request Body" body=""
	I0210 11:59:44.362583   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400
	I0210 11:59:44.362583   11096 round_trippers.go:476] Request Headers:
	I0210 11:59:44.362583   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:59:44.362583   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:59:44.366010   11096 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 11:59:44.366010   11096 round_trippers.go:584] Response Headers:
	I0210 11:59:44.366010   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 11:59:44.366010   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 11:59:44.366010   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 11:59:44 GMT
	I0210 11:59:44.366010   11096 round_trippers.go:587]     Audit-Id: f2b00ad8-cc34-4887-82e8-e2ead9bc7d1a
	I0210 11:59:44.366010   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 11:59:44.366010   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 11:59:44.366010   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d8 22 0a 8a 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 03 34 33  32 38 00 42 08 08 86 d4  |1b262.4328.B....|
		00000060  a7 bd 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 21016 chars]
	 >
	I0210 11:59:44.858148   11096 type.go:168] "Request Body" body=""
	I0210 11:59:44.858148   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-w8rr9
	I0210 11:59:44.858148   11096 round_trippers.go:476] Request Headers:
	I0210 11:59:44.858148   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:59:44.858148   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:59:44.863167   11096 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:59:44.863167   11096 round_trippers.go:584] Response Headers:
	I0210 11:59:44.863167   11096 round_trippers.go:587]     Audit-Id: 8bf3fc2d-624d-47c3-8d87-9e532a46a570
	I0210 11:59:44.863167   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 11:59:44.863167   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 11:59:44.863167   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 11:59:44.863167   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 11:59:44.863167   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 11:59:44 GMT
	I0210 11:59:44.864149   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  d9 26 0a 8b 19 0a 18 63  6f 72 65 64 6e 73 2d 36  |.&.....coredns-6|
		00000020  36 38 64 36 62 66 39 62  63 2d 77 38 72 72 39 12  |68d6bf9bc-w8rr9.|
		00000030  13 63 6f 72 65 64 6e 73  2d 36 36 38 64 36 62 66  |.coredns-668d6bf|
		00000040  39 62 63 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |9bc-..kube-syste|
		00000050  6d 22 00 2a 24 65 34 35  61 33 37 62 66 2d 65 37  |m".*$e45a37bf-e7|
		00000060  64 61 2d 34 31 32 39 2d  62 62 37 65 2d 38 62 65  |da-4129-bb7e-8be|
		00000070  37 64 62 65 39 33 65 30  39 32 03 34 33 38 38 00  |7dbe93e092.4388.|
		00000080  42 08 08 92 d4 a7 bd 06  10 00 5a 13 0a 07 6b 38  |B.........Z...k8|
		00000090  73 2d 61 70 70 12 08 6b  75 62 65 2d 64 6e 73 5a  |s-app..kube-dnsZ|
		000000a0  1f 0a 11 70 6f 64 2d 74  65 6d 70 6c 61 74 65 2d  |...pod-template-|
		000000b0  68 61 73 68 12 0a 36 36  38 64 36 62 66 39 62 63  |hash..668d6bf9bc|
		000000c0  6a 53 0a 0a 52 65 70 6c  69 63 61 53 65 74 1a 12  |jS..ReplicaSet. [truncated 23544 chars]
	 >
	I0210 11:59:44.864149   11096 type.go:168] "Request Body" body=""
	I0210 11:59:44.864149   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400
	I0210 11:59:44.864149   11096 round_trippers.go:476] Request Headers:
	I0210 11:59:44.864149   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:59:44.864149   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:59:44.867140   11096 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0210 11:59:44.867478   11096 round_trippers.go:584] Response Headers:
	I0210 11:59:44.867478   11096 round_trippers.go:587]     Audit-Id: 1c24aed1-1b34-48dc-b170-5bd5e3e6a0ff
	I0210 11:59:44.867478   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 11:59:44.867478   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 11:59:44.867478   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 11:59:44.867478   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 11:59:44.867478   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 11:59:44 GMT
	I0210 11:59:44.867836   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d8 22 0a 8a 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 03 34 33  32 38 00 42 08 08 86 d4  |1b262.4328.B....|
		00000060  a7 bd 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 21016 chars]
	 >
	I0210 11:59:45.357569   11096 type.go:168] "Request Body" body=""
	I0210 11:59:45.358145   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-w8rr9
	I0210 11:59:45.358145   11096 round_trippers.go:476] Request Headers:
	I0210 11:59:45.358145   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:59:45.358211   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:59:45.361960   11096 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0210 11:59:45.361988   11096 round_trippers.go:584] Response Headers:
	I0210 11:59:45.361988   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 11:59:45.361988   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 11:59:45.361988   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 11:59:45 GMT
	I0210 11:59:45.361988   11096 round_trippers.go:587]     Audit-Id: 5ce2f9d4-619d-4a16-af8b-870bb0913503
	I0210 11:59:45.361988   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 11:59:45.361988   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 11:59:45.362808   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  84 29 0a e8 19 0a 18 63  6f 72 65 64 6e 73 2d 36  |.).....coredns-6|
		00000020  36 38 64 36 62 66 39 62  63 2d 77 38 72 72 39 12  |68d6bf9bc-w8rr9.|
		00000030  13 63 6f 72 65 64 6e 73  2d 36 36 38 64 36 62 66  |.coredns-668d6bf|
		00000040  39 62 63 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |9bc-..kube-syste|
		00000050  6d 22 00 2a 24 65 34 35  61 33 37 62 66 2d 65 37  |m".*$e45a37bf-e7|
		00000060  64 61 2d 34 31 32 39 2d  62 62 37 65 2d 38 62 65  |da-4129-bb7e-8be|
		00000070  37 64 62 65 39 33 65 30  39 32 03 34 34 38 38 00  |7dbe93e092.4488.|
		00000080  42 08 08 92 d4 a7 bd 06  10 00 5a 13 0a 07 6b 38  |B.........Z...k8|
		00000090  73 2d 61 70 70 12 08 6b  75 62 65 2d 64 6e 73 5a  |s-app..kube-dnsZ|
		000000a0  1f 0a 11 70 6f 64 2d 74  65 6d 70 6c 61 74 65 2d  |...pod-template-|
		000000b0  68 61 73 68 12 0a 36 36  38 64 36 62 66 39 62 63  |hash..668d6bf9bc|
		000000c0  6a 53 0a 0a 52 65 70 6c  69 63 61 53 65 74 1a 12  |jS..ReplicaSet. [truncated 25040 chars]
	 >
	I0210 11:59:45.362808   11096 type.go:168] "Request Body" body=""
	I0210 11:59:45.363459   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400
	I0210 11:59:45.363459   11096 round_trippers.go:476] Request Headers:
	I0210 11:59:45.363459   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:59:45.363459   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:59:45.366520   11096 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 11:59:45.366520   11096 round_trippers.go:584] Response Headers:
	I0210 11:59:45.366520   11096 round_trippers.go:587]     Audit-Id: 31337752-8084-4ec5-9079-a94ab5485cb4
	I0210 11:59:45.366520   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 11:59:45.366520   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 11:59:45.366595   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 11:59:45.366595   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 11:59:45.366595   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 11:59:45 GMT
	I0210 11:59:45.367128   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d8 22 0a 8a 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 03 34 33  32 38 00 42 08 08 86 d4  |1b262.4328.B....|
		00000060  a7 bd 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 21016 chars]
	 >
	I0210 11:59:45.858220   11096 type.go:168] "Request Body" body=""
	I0210 11:59:45.858220   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-w8rr9
	I0210 11:59:45.858220   11096 round_trippers.go:476] Request Headers:
	I0210 11:59:45.858220   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:59:45.858220   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:59:45.862481   11096 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 11:59:45.862481   11096 round_trippers.go:584] Response Headers:
	I0210 11:59:45.862481   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 11:59:45.862481   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 11:59:45 GMT
	I0210 11:59:45.862481   11096 round_trippers.go:587]     Audit-Id: 06749ab4-f7a7-48f9-a451-bfeb45d54a03
	I0210 11:59:45.862481   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 11:59:45.862481   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 11:59:45.862481   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 11:59:45.863522   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  84 29 0a e8 19 0a 18 63  6f 72 65 64 6e 73 2d 36  |.).....coredns-6|
		00000020  36 38 64 36 62 66 39 62  63 2d 77 38 72 72 39 12  |68d6bf9bc-w8rr9.|
		00000030  13 63 6f 72 65 64 6e 73  2d 36 36 38 64 36 62 66  |.coredns-668d6bf|
		00000040  39 62 63 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |9bc-..kube-syste|
		00000050  6d 22 00 2a 24 65 34 35  61 33 37 62 66 2d 65 37  |m".*$e45a37bf-e7|
		00000060  64 61 2d 34 31 32 39 2d  62 62 37 65 2d 38 62 65  |da-4129-bb7e-8be|
		00000070  37 64 62 65 39 33 65 30  39 32 03 34 34 38 38 00  |7dbe93e092.4488.|
		00000080  42 08 08 92 d4 a7 bd 06  10 00 5a 13 0a 07 6b 38  |B.........Z...k8|
		00000090  73 2d 61 70 70 12 08 6b  75 62 65 2d 64 6e 73 5a  |s-app..kube-dnsZ|
		000000a0  1f 0a 11 70 6f 64 2d 74  65 6d 70 6c 61 74 65 2d  |...pod-template-|
		000000b0  68 61 73 68 12 0a 36 36  38 64 36 62 66 39 62 63  |hash..668d6bf9bc|
		000000c0  6a 53 0a 0a 52 65 70 6c  69 63 61 53 65 74 1a 12  |jS..ReplicaSet. [truncated 25040 chars]
	 >
	I0210 11:59:45.863829   11096 type.go:168] "Request Body" body=""
	I0210 11:59:45.863893   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400
	I0210 11:59:45.863893   11096 round_trippers.go:476] Request Headers:
	I0210 11:59:45.863969   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:59:45.863969   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:59:45.867270   11096 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 11:59:45.867270   11096 round_trippers.go:584] Response Headers:
	I0210 11:59:45.867270   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 11:59:45.867270   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 11:59:45 GMT
	I0210 11:59:45.867270   11096 round_trippers.go:587]     Audit-Id: df4dac08-cf80-4da3-b975-6ebe42f37630
	I0210 11:59:45.867270   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 11:59:45.867270   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 11:59:45.867270   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 11:59:45.867571   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d8 22 0a 8a 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 03 34 33  32 38 00 42 08 08 86 d4  |1b262.4328.B....|
		00000060  a7 bd 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 21016 chars]
	 >
	I0210 11:59:45.867571   11096 pod_ready.go:103] pod "coredns-668d6bf9bc-w8rr9" in "kube-system" namespace has status "Ready":"False"
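What this loop is doing: the pod_ready wait GETs the coredns pod (and its node) roughly every 500ms until the pod's Ready condition flips to True; the line above is one "still False" iteration. A minimal client-go sketch of the same check, under the assumption that an authenticated clientset is already built; this is not minikube's actual code:

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // podIsReady is the check behind the "Ready":"False" / "Ready":"True"
    // log lines: scan status.conditions for the Ready condition.
    func podIsReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    // waitPodReady polls every 500ms, matching the request cadence above.
    func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
        return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
            func(ctx context.Context) (bool, error) {
                pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil // treat transient errors as "not ready yet"
                }
                return podIsReady(pod), nil
            })
    }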
	I0210 11:59:46.357459   11096 type.go:168] "Request Body" body=""
	I0210 11:59:46.357834   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-w8rr9
	I0210 11:59:46.357932   11096 round_trippers.go:476] Request Headers:
	I0210 11:59:46.357932   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:59:46.357932   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:59:46.853955   11096 round_trippers.go:581] Response Status: 200 OK in 495 milliseconds
	I0210 11:59:46.854026   11096 round_trippers.go:584] Response Headers:
	I0210 11:59:46.854026   11096 round_trippers.go:587]     Audit-Id: 47d8aa39-c1fc-48d0-8638-94b9301a4434
	I0210 11:59:46.854026   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 11:59:46.854026   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 11:59:46.854026   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 11:59:46.854026   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 11:59:46.854026   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 11:59:46 GMT
	I0210 11:59:46.854556   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  d2 27 0a ae 19 0a 18 63  6f 72 65 64 6e 73 2d 36  |.'.....coredns-6|
		00000020  36 38 64 36 62 66 39 62  63 2d 77 38 72 72 39 12  |68d6bf9bc-w8rr9.|
		00000030  13 63 6f 72 65 64 6e 73  2d 36 36 38 64 36 62 66  |.coredns-668d6bf|
		00000040  39 62 63 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |9bc-..kube-syste|
		00000050  6d 22 00 2a 24 65 34 35  61 33 37 62 66 2d 65 37  |m".*$e45a37bf-e7|
		00000060  64 61 2d 34 31 32 39 2d  62 62 37 65 2d 38 62 65  |da-4129-bb7e-8be|
		00000070  37 64 62 65 39 33 65 30  39 32 03 34 35 33 38 00  |7dbe93e092.4538.|
		00000080  42 08 08 92 d4 a7 bd 06  10 00 5a 13 0a 07 6b 38  |B.........Z...k8|
		00000090  73 2d 61 70 70 12 08 6b  75 62 65 2d 64 6e 73 5a  |s-app..kube-dnsZ|
		000000a0  1f 0a 11 70 6f 64 2d 74  65 6d 70 6c 61 74 65 2d  |...pod-template-|
		000000b0  68 61 73 68 12 0a 36 36  38 64 36 62 66 39 62 63  |hash..668d6bf9bc|
		000000c0  6a 53 0a 0a 52 65 70 6c  69 63 61 53 65 74 1a 12  |jS..ReplicaSet. [truncated 24169 chars]
	 >
	I0210 11:59:46.854792   11096 type.go:168] "Request Body" body=""
	I0210 11:59:46.854935   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400
	I0210 11:59:46.854969   11096 round_trippers.go:476] Request Headers:
	I0210 11:59:46.855014   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:59:46.855039   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:59:46.956927   11096 round_trippers.go:581] Response Status: 200 OK in 101 milliseconds
	I0210 11:59:46.956927   11096 round_trippers.go:584] Response Headers:
	I0210 11:59:46.956927   11096 round_trippers.go:587]     Audit-Id: 5c8e148d-dfdb-4d1f-be67-f667fbd38f42
	I0210 11:59:46.956927   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 11:59:46.956927   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 11:59:46.956927   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 11:59:46.956927   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 11:59:46.956927   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 11:59:46 GMT
	I0210 11:59:46.956927   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d8 22 0a 8a 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 03 34 33  32 38 00 42 08 08 86 d4  |1b262.4328.B....|
		00000060  a7 bd 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 21016 chars]
	 >
	I0210 11:59:46.957888   11096 pod_ready.go:93] pod "coredns-668d6bf9bc-w8rr9" in "kube-system" namespace has status "Ready":"True"
	I0210 11:59:46.957888   11096 pod_ready.go:82] duration metric: took 5.6004737s for pod "coredns-668d6bf9bc-w8rr9" in "kube-system" namespace to be "Ready" ...
	I0210 11:59:46.957888   11096 pod_ready.go:79] waiting up to 6m0s for pod "etcd-multinode-032400" in "kube-system" namespace to be "Ready" ...
	I0210 11:59:46.957888   11096 type.go:168] "Request Body" body=""
	I0210 11:59:46.957888   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-032400
	I0210 11:59:46.957888   11096 round_trippers.go:476] Request Headers:
	I0210 11:59:46.957888   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:59:46.957888   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:59:46.971895   11096 round_trippers.go:581] Response Status: 200 OK in 14 milliseconds
	I0210 11:59:46.972329   11096 round_trippers.go:584] Response Headers:
	I0210 11:59:46.972329   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 11:59:46 GMT
	I0210 11:59:46.972329   11096 round_trippers.go:587]     Audit-Id: 7e3b18f6-4c88-49ee-aa9c-98b5bfcb4f0c
	I0210 11:59:46.972329   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 11:59:46.972329   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 11:59:46.972329   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 11:59:46.972406   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 11:59:46.973758   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  ab 2b 0a 9e 1a 0a 15 65  74 63 64 2d 6d 75 6c 74  |.+.....etcd-mult|
		00000020  69 6e 6f 64 65 2d 30 33  32 34 30 30 12 00 1a 0b  |inode-032400....|
		00000030  6b 75 62 65 2d 73 79 73  74 65 6d 22 00 2a 24 33  |kube-system".*$3|
		00000040  34 64 62 31 34 36 63 2d  65 30 39 64 2d 34 39 35  |4db146c-e09d-495|
		00000050  39 2d 38 33 32 35 2d 64  34 34 35 33 64 66 63 66  |9-8325-d4453dfcf|
		00000060  64 36 32 32 03 34 30 31  38 00 42 08 08 8b d4 a7  |d622.4018.B.....|
		00000070  bd 06 10 00 5a 11 0a 09  63 6f 6d 70 6f 6e 65 6e  |....Z...componen|
		00000080  74 12 04 65 74 63 64 5a  15 0a 04 74 69 65 72 12  |t..etcdZ...tier.|
		00000090  0d 63 6f 6e 74 72 6f 6c  2d 70 6c 61 6e 65 62 4f  |.control-planebO|
		000000a0  0a 30 6b 75 62 65 61 64  6d 2e 6b 75 62 65 72 6e  |.0kubeadm.kubern|
		000000b0  65 74 65 73 2e 69 6f 2f  65 74 63 64 2e 61 64 76  |etes.io/etcd.adv|
		000000c0  65 72 74 69 73 65 2d 63  6c 69 65 6e 74 2d 75 72  |ertise-client-u [truncated 26532 chars]
	 >
	I0210 11:59:46.973975   11096 type.go:168] "Request Body" body=""
	I0210 11:59:46.974149   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400
	I0210 11:59:46.974149   11096 round_trippers.go:476] Request Headers:
	I0210 11:59:46.974149   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:59:46.974149   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:59:46.989443   11096 round_trippers.go:581] Response Status: 200 OK in 15 milliseconds
	I0210 11:59:46.989443   11096 round_trippers.go:584] Response Headers:
	I0210 11:59:46.989443   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 11:59:46.989443   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 11:59:47 GMT
	I0210 11:59:46.989443   11096 round_trippers.go:587]     Audit-Id: bd726693-04a3-4567-afba-439ecab5b497
	I0210 11:59:46.989443   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 11:59:46.989443   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 11:59:46.990454   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 11:59:46.990454   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d8 22 0a 8a 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 03 34 33  32 38 00 42 08 08 86 d4  |1b262.4328.B....|
		00000060  a7 bd 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 21016 chars]
	 >
	I0210 11:59:46.990454   11096 pod_ready.go:93] pod "etcd-multinode-032400" in "kube-system" namespace has status "Ready":"True"
	I0210 11:59:46.990454   11096 pod_ready.go:82] duration metric: took 32.5662ms for pod "etcd-multinode-032400" in "kube-system" namespace to be "Ready" ...
	I0210 11:59:46.990454   11096 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-multinode-032400" in "kube-system" namespace to be "Ready" ...
	I0210 11:59:46.990454   11096 type.go:168] "Request Body" body=""
	I0210 11:59:46.990454   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-032400
	I0210 11:59:46.990454   11096 round_trippers.go:476] Request Headers:
	I0210 11:59:46.990454   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:59:46.990454   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:59:46.996548   11096 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0210 11:59:46.996548   11096 round_trippers.go:584] Response Headers:
	I0210 11:59:46.996548   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 11:59:47 GMT
	I0210 11:59:46.996656   11096 round_trippers.go:587]     Audit-Id: f3622cfe-7b0e-421e-9594-92c5754bc338
	I0210 11:59:46.996656   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 11:59:46.996656   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 11:59:46.996656   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 11:59:46.996656   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 11:59:46.996790   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  8f 34 0a ae 1c 0a 1f 6b  75 62 65 2d 61 70 69 73  |.4.....kube-apis|
		00000020  65 72 76 65 72 2d 6d 75  6c 74 69 6e 6f 64 65 2d  |erver-multinode-|
		00000030  30 33 32 34 30 30 12 00  1a 0b 6b 75 62 65 2d 73  |032400....kube-s|
		00000040  79 73 74 65 6d 22 00 2a  24 37 61 33 35 34 37 32  |ystem".*$7a35472|
		00000050  64 2d 64 37 63 30 2d 34  63 37 64 2d 61 35 62 31  |d-d7c0-4c7d-a5b1|
		00000060  2d 65 30 39 34 33 37 30  61 66 31 63 32 32 03 33  |-e094370af1c22.3|
		00000070  39 38 38 00 42 08 08 88  d4 a7 bd 06 10 00 5a 1b  |988.B.........Z.|
		00000080  0a 09 63 6f 6d 70 6f 6e  65 6e 74 12 0e 6b 75 62  |..component..kub|
		00000090  65 2d 61 70 69 73 65 72  76 65 72 5a 15 0a 04 74  |e-apiserverZ...t|
		000000a0  69 65 72 12 0d 63 6f 6e  74 72 6f 6c 2d 70 6c 61  |ier..control-pla|
		000000b0  6e 65 62 56 0a 3f 6b 75  62 65 61 64 6d 2e 6b 75  |nebV.?kubeadm.ku|
		000000c0  62 65 72 6e 65 74 65 73  2e 69 6f 2f 6b 75 62 65  |bernetes.io/kub [truncated 32066 chars]
	 >
	I0210 11:59:46.996790   11096 type.go:168] "Request Body" body=""
	I0210 11:59:46.996790   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400
	I0210 11:59:46.997342   11096 round_trippers.go:476] Request Headers:
	I0210 11:59:46.997342   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:59:46.997342   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:59:46.999394   11096 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0210 11:59:47.000276   11096 round_trippers.go:584] Response Headers:
	I0210 11:59:47.000276   11096 round_trippers.go:587]     Audit-Id: 79e2994f-0c80-4286-84e4-99bcf64af32f
	I0210 11:59:47.000276   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 11:59:47.000276   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 11:59:47.000276   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 11:59:47.000276   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 11:59:47.000276   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 11:59:47 GMT
	I0210 11:59:47.000717   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d8 22 0a 8a 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 03 34 33  32 38 00 42 08 08 86 d4  |1b262.4328.B....|
		00000060  a7 bd 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 21016 chars]
	 >
	I0210 11:59:47.000717   11096 pod_ready.go:93] pod "kube-apiserver-multinode-032400" in "kube-system" namespace has status "Ready":"True"
	I0210 11:59:47.000717   11096 pod_ready.go:82] duration metric: took 10.2628ms for pod "kube-apiserver-multinode-032400" in "kube-system" namespace to be "Ready" ...
	I0210 11:59:47.000717   11096 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-multinode-032400" in "kube-system" namespace to be "Ready" ...
	I0210 11:59:47.000717   11096 type.go:168] "Request Body" body=""
	I0210 11:59:47.000717   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-032400
	I0210 11:59:47.000717   11096 round_trippers.go:476] Request Headers:
	I0210 11:59:47.000717   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:59:47.000717   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:59:47.004495   11096 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 11:59:47.004558   11096 round_trippers.go:584] Response Headers:
	I0210 11:59:47.004558   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 11:59:47.004558   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 11:59:47.004558   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 11:59:47.004558   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 11:59:47 GMT
	I0210 11:59:47.004558   11096 round_trippers.go:587]     Audit-Id: 9bc6cce7-e66f-4563-aa0d-4a628492663a
	I0210 11:59:47.004558   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 11:59:47.004971   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  f0 30 0a 9a 1d 0a 28 6b  75 62 65 2d 63 6f 6e 74  |.0....(kube-cont|
		00000020  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 2d 6d  |roller-manager-m|
		00000030  75 6c 74 69 6e 6f 64 65  2d 30 33 32 34 30 30 12  |ultinode-032400.|
		00000040  00 1a 0b 6b 75 62 65 2d  73 79 73 74 65 6d 22 00  |...kube-system".|
		00000050  2a 24 63 33 65 35 30 35  34 30 2d 64 64 36 36 2d  |*$c3e50540-dd66-|
		00000060  34 37 61 30 2d 61 34 33  33 2d 36 32 32 64 66 35  |47a0-a433-622df5|
		00000070  39 66 62 34 34 31 32 03  33 39 30 38 00 42 08 08  |9fb4412.3908.B..|
		00000080  8b d4 a7 bd 06 10 00 5a  24 0a 09 63 6f 6d 70 6f  |.......Z$..compo|
		00000090  6e 65 6e 74 12 17 6b 75  62 65 2d 63 6f 6e 74 72  |nent..kube-contr|
		000000a0  6f 6c 6c 65 72 2d 6d 61  6e 61 67 65 72 5a 15 0a  |oller-managerZ..|
		000000b0  04 74 69 65 72 12 0d 63  6f 6e 74 72 6f 6c 2d 70  |.tier..control-p|
		000000c0  6c 61 6e 65 62 3d 0a 19  6b 75 62 65 72 6e 65 74  |laneb=..kuberne [truncated 30013 chars]
	 >
	I0210 11:59:47.004971   11096 type.go:168] "Request Body" body=""
	I0210 11:59:47.004971   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400
	I0210 11:59:47.004971   11096 round_trippers.go:476] Request Headers:
	I0210 11:59:47.004971   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:59:47.004971   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:59:47.010193   11096 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:59:47.010615   11096 round_trippers.go:584] Response Headers:
	I0210 11:59:47.010615   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 11:59:47.010615   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 11:59:47.010615   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 11:59:47 GMT
	I0210 11:59:47.010615   11096 round_trippers.go:587]     Audit-Id: d3302dcb-338e-46e1-a2ef-1cde6040e5a9
	I0210 11:59:47.010615   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 11:59:47.010615   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 11:59:47.010904   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d8 22 0a 8a 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 03 34 33  32 38 00 42 08 08 86 d4  |1b262.4328.B....|
		00000060  a7 bd 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 21016 chars]
	 >
	I0210 11:59:47.011026   11096 pod_ready.go:93] pod "kube-controller-manager-multinode-032400" in "kube-system" namespace has status "Ready":"True"
	I0210 11:59:47.011026   11096 pod_ready.go:82] duration metric: took 10.3082ms for pod "kube-controller-manager-multinode-032400" in "kube-system" namespace to be "Ready" ...
	I0210 11:59:47.011096   11096 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-rrh82" in "kube-system" namespace to be "Ready" ...
	I0210 11:59:47.011096   11096 type.go:168] "Request Body" body=""
	I0210 11:59:47.011096   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rrh82
	I0210 11:59:47.011096   11096 round_trippers.go:476] Request Headers:
	I0210 11:59:47.011096   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:59:47.011096   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:59:47.013794   11096 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0210 11:59:47.014488   11096 round_trippers.go:584] Response Headers:
	I0210 11:59:47.014488   11096 round_trippers.go:587]     Audit-Id: 281a49bf-7705-4852-a1a9-7a247aa8ba1a
	I0210 11:59:47.014488   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 11:59:47.014488   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 11:59:47.014488   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 11:59:47.014488   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 11:59:47.014488   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 11:59:47 GMT
	I0210 11:59:47.015417   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  a2 25 0a c0 14 0a 10 6b  75 62 65 2d 70 72 6f 78  |.%.....kube-prox|
		00000020  79 2d 72 72 68 38 32 12  0b 6b 75 62 65 2d 70 72  |y-rrh82..kube-pr|
		00000030  6f 78 79 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |oxy-..kube-syste|
		00000040  6d 22 00 2a 24 39 61 64  37 66 32 38 31 2d 66 30  |m".*$9ad7f281-f0|
		00000050  32 32 2d 34 66 33 62 2d  62 32 30 36 2d 33 39 63  |22-4f3b-b206-39c|
		00000060  65 34 32 37 31 33 63 66  39 32 03 34 30 36 38 00  |e42713cf92.4068.|
		00000070  42 08 08 92 d4 a7 bd 06  10 00 5a 26 0a 18 63 6f  |B.........Z&..co|
		00000080  6e 74 72 6f 6c 6c 65 72  2d 72 65 76 69 73 69 6f  |ntroller-revisio|
		00000090  6e 2d 68 61 73 68 12 0a  35 36 36 64 37 62 39 66  |n-hash..566d7b9f|
		000000a0  38 35 5a 15 0a 07 6b 38  73 2d 61 70 70 12 0a 6b  |85Z...k8s-app..k|
		000000b0  75 62 65 2d 70 72 6f 78  79 5a 1c 0a 17 70 6f 64  |ube-proxyZ...pod|
		000000c0  2d 74 65 6d 70 6c 61 74  65 2d 67 65 6e 65 72 61  |-template-gener [truncated 22668 chars]
	 >
	I0210 11:59:47.015417   11096 type.go:168] "Request Body" body=""
	I0210 11:59:47.015417   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400
	I0210 11:59:47.015417   11096 round_trippers.go:476] Request Headers:
	I0210 11:59:47.015417   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:59:47.015417   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:59:47.019029   11096 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 11:59:47.019029   11096 round_trippers.go:584] Response Headers:
	I0210 11:59:47.019029   11096 round_trippers.go:587]     Audit-Id: cb087575-fbc3-42d4-9904-6fea2122996e
	I0210 11:59:47.019094   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 11:59:47.019094   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 11:59:47.019094   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 11:59:47.019094   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 11:59:47.019094   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 11:59:47 GMT
	I0210 11:59:47.019687   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d8 22 0a 8a 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 03 34 33  32 38 00 42 08 08 86 d4  |1b262.4328.B....|
		00000060  a7 bd 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 21016 chars]
	 >
	I0210 11:59:47.019744   11096 pod_ready.go:93] pod "kube-proxy-rrh82" in "kube-system" namespace has status "Ready":"True"
	I0210 11:59:47.019744   11096 pod_ready.go:82] duration metric: took 8.6481ms for pod "kube-proxy-rrh82" in "kube-system" namespace to be "Ready" ...
	I0210 11:59:47.019744   11096 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-multinode-032400" in "kube-system" namespace to be "Ready" ...
	I0210 11:59:47.019744   11096 type.go:168] "Request Body" body=""
	I0210 11:59:47.019744   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-032400
	I0210 11:59:47.019744   11096 round_trippers.go:476] Request Headers:
	I0210 11:59:47.019744   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:59:47.019744   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:59:47.023873   11096 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 11:59:47.023873   11096 round_trippers.go:584] Response Headers:
	I0210 11:59:47.023873   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 11:59:47.023873   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 11:59:47.023873   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 11:59:47.023873   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 11:59:47.023953   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 11:59:47 GMT
	I0210 11:59:47.023953   11096 round_trippers.go:587]     Audit-Id: 1a39501e-2558-4a85-bba0-01fca0c861d5
	I0210 11:59:47.023953   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  fb 22 0a 82 18 0a 1f 6b  75 62 65 2d 73 63 68 65  |.".....kube-sche|
		00000020  64 75 6c 65 72 2d 6d 75  6c 74 69 6e 6f 64 65 2d  |duler-multinode-|
		00000030  30 33 32 34 30 30 12 00  1a 0b 6b 75 62 65 2d 73  |032400....kube-s|
		00000040  79 73 74 65 6d 22 00 2a  24 63 66 62 63 62 66 33  |ystem".*$cfbcbf3|
		00000050  37 2d 36 35 63 65 2d 34  62 32 31 2d 39 66 32 36  |7-65ce-4b21-9f26|
		00000060  2d 31 38 64 61 66 63 36  65 34 34 38 30 32 03 33  |-18dafc6e44802.3|
		00000070  33 34 38 00 42 08 08 88  d4 a7 bd 06 10 00 5a 1b  |348.B.........Z.|
		00000080  0a 09 63 6f 6d 70 6f 6e  65 6e 74 12 0e 6b 75 62  |..component..kub|
		00000090  65 2d 73 63 68 65 64 75  6c 65 72 5a 15 0a 04 74  |e-schedulerZ...t|
		000000a0  69 65 72 12 0d 63 6f 6e  74 72 6f 6c 2d 70 6c 61  |ier..control-pla|
		000000b0  6e 65 62 3d 0a 19 6b 75  62 65 72 6e 65 74 65 73  |neb=..kubernetes|
		000000c0  2e 69 6f 2f 63 6f 6e 66  69 67 2e 68 61 73 68 12  |.io/config.hash [truncated 21239 chars]
	 >
	I0210 11:59:47.023953   11096 type.go:168] "Request Body" body=""
	I0210 11:59:47.055490   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400
	I0210 11:59:47.055490   11096 round_trippers.go:476] Request Headers:
	I0210 11:59:47.055490   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:59:47.055490   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:59:47.059869   11096 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 11:59:47.059869   11096 round_trippers.go:584] Response Headers:
	I0210 11:59:47.059869   11096 round_trippers.go:587]     Audit-Id: 25893394-c93e-4dac-8595-0add54cd54af
	I0210 11:59:47.059869   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 11:59:47.059869   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 11:59:47.059869   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 11:59:47.059869   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 11:59:47.059869   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 11:59:47 GMT
	I0210 11:59:47.060261   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d8 22 0a 8a 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 03 34 33  32 38 00 42 08 08 86 d4  |1b262.4328.B....|
		00000060  a7 bd 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 21016 chars]
	 >
	I0210 11:59:47.060261   11096 pod_ready.go:93] pod "kube-scheduler-multinode-032400" in "kube-system" namespace has status "Ready":"True"
	I0210 11:59:47.060261   11096 pod_ready.go:82] duration metric: took 40.5162ms for pod "kube-scheduler-multinode-032400" in "kube-system" namespace to be "Ready" ...
	I0210 11:59:47.060261   11096 pod_ready.go:39] duration metric: took 5.91411s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
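The selector list above (k8s-app=kube-dns, component=etcd, component=kube-apiserver, ...) is how the wait finds each system-critical pod. A sketch of resolving one such selector to pods with client-go; the helper is hypothetical:

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // podsWithLabel lists kube-system pods matching one selector from the
    // log line above, e.g. "k8s-app=kube-dns" or "component=etcd".
    func podsWithLabel(ctx context.Context, cs *kubernetes.Clientset, selector string) ([]corev1.Pod, error) {
        list, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{
            LabelSelector: selector,
        })
        if err != nil {
            return nil, err
        }
        return list.Items, nil
    }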
	I0210 11:59:47.060261   11096 api_server.go:52] waiting for apiserver process to appear ...
	I0210 11:59:47.068087   11096 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:59:47.096746   11096 command_runner.go:130] > 2209
	I0210 11:59:47.096913   11096 api_server.go:72] duration metric: took 34.0768689s to wait for apiserver process to appear ...
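The pgrep invocation above finds the apiserver PID inside the VM: -f matches against the full command line, -x requires that whole line to match the pattern exactly, and -n keeps only the newest match; the single output line ("2209") is the PID. Minikube runs it over SSH via its ssh_runner; a plain local sketch:

    import (
        "os/exec"
        "strings"
    )

    // apiserverPID mirrors `sudo pgrep -xnf kube-apiserver.*minikube.*`.
    // pgrep exits non-zero when nothing matches, which surfaces as err.
    func apiserverPID() (string, error) {
        out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil // "2209" in this run
    }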
	I0210 11:59:47.096981   11096 api_server.go:88] waiting for apiserver healthz status ...
	I0210 11:59:47.097028   11096 api_server.go:253] Checking apiserver healthz at https://172.29.136.201:8443/healthz ...
	I0210 11:59:47.109447   11096 api_server.go:279] https://172.29.136.201:8443/healthz returned 200:
	ok
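The healthz probe is a plain authenticated GET; a healthy apiserver answers with the literal body "ok", which is the stray line above. A sketch using client-go's REST client:

    import (
        "context"
        "fmt"

        "k8s.io/client-go/kubernetes"
    )

    // apiserverHealthz issues GET /healthz, as at api_server.go:253 above.
    func apiserverHealthz(ctx context.Context, cs *kubernetes.Clientset) error {
        body, err := cs.CoreV1().RESTClient().Get().AbsPath("/healthz").DoRaw(ctx)
        if err != nil {
            return err
        }
        if string(body) != "ok" {
            return fmt.Errorf("unexpected healthz body: %q", body)
        }
        return nil
    }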
	I0210 11:59:47.109863   11096 discovery_client.go:658] "Request Body" body=""
	I0210 11:59:47.109970   11096 round_trippers.go:470] GET https://172.29.136.201:8443/version
	I0210 11:59:47.110014   11096 round_trippers.go:476] Request Headers:
	I0210 11:59:47.110014   11096 round_trippers.go:480]     Accept: application/json, */*
	I0210 11:59:47.110054   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:59:47.115620   11096 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 11:59:47.115695   11096 round_trippers.go:584] Response Headers:
	I0210 11:59:47.115695   11096 round_trippers.go:587]     Content-Length: 263
	I0210 11:59:47.115756   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 11:59:47 GMT
	I0210 11:59:47.115756   11096 round_trippers.go:587]     Audit-Id: fc364ccc-2981-4868-97c0-e4850a2dd2b0
	I0210 11:59:47.115756   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 11:59:47.115756   11096 round_trippers.go:587]     Content-Type: application/json
	I0210 11:59:47.115809   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 11:59:47.115809   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 11:59:47.115895   11096 discovery_client.go:658] "Response Body" body=<
		{
		  "major": "1",
		  "minor": "32",
		  "gitVersion": "v1.32.1",
		  "gitCommit": "e9c9be4007d1664e68796af02b8978640d2c1b26",
		  "gitTreeState": "clean",
		  "buildDate": "2025-01-15T14:31:55Z",
		  "goVersion": "go1.23.4",
		  "compiler": "gc",
		  "platform": "linux/amd64"
		}
	 >
	I0210 11:59:47.115924   11096 api_server.go:141] control plane version: v1.32.1
	I0210 11:59:47.115924   11096 api_server.go:131] duration metric: took 18.9427ms to wait for apiserver health ...
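The /version JSON above is what the discovery client decodes; its gitVersion field is the "control plane version: v1.32.1" the log then reports. A sketch:

    import "k8s.io/client-go/kubernetes"

    // controlPlaneVersion fetches GET /version via the discovery client.
    func controlPlaneVersion(cs *kubernetes.Clientset) (string, error) {
        info, err := cs.Discovery().ServerVersion()
        if err != nil {
            return "", err
        }
        return info.GitVersion, nil // "v1.32.1" in this run
    }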
	I0210 11:59:47.115924   11096 system_pods.go:43] waiting for kube-system pods to appear ...
	I0210 11:59:47.115924   11096 type.go:204] "Request Body" body=""
	I0210 11:59:47.255280   11096 request.go:661] Waited for 139.354ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.136.201:8443/api/v1/namespaces/kube-system/pods
	I0210 11:59:47.255280   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/namespaces/kube-system/pods
	I0210 11:59:47.255280   11096 round_trippers.go:476] Request Headers:
	I0210 11:59:47.255280   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:59:47.255280   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:59:47.261637   11096 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0210 11:59:47.261717   11096 round_trippers.go:584] Response Headers:
	I0210 11:59:47.261717   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 11:59:47.261717   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 11:59:47.261717   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 11:59:47.261717   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 11:59:47.261717   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 11:59:47 GMT
	I0210 11:59:47.261717   11096 round_trippers.go:587]     Audit-Id: 45e49d21-0acb-4118-95ea-9d665763fe9b
	I0210 11:59:47.263494   11096 type.go:204] "Response Body" body=<
		00000000  6b 38 73 00 0a 0d 0a 02  76 31 12 07 50 6f 64 4c  |k8s.....v1..PodL|
		00000010  69 73 74 12 ab c7 02 0a  09 0a 00 12 03 34 35 39  |ist..........459|
		00000020  1a 00 12 d2 27 0a ae 19  0a 18 63 6f 72 65 64 6e  |....'.....coredn|
		00000030  73 2d 36 36 38 64 36 62  66 39 62 63 2d 77 38 72  |s-668d6bf9bc-w8r|
		00000040  72 39 12 13 63 6f 72 65  64 6e 73 2d 36 36 38 64  |r9..coredns-668d|
		00000050  36 62 66 39 62 63 2d 1a  0b 6b 75 62 65 2d 73 79  |6bf9bc-..kube-sy|
		00000060  73 74 65 6d 22 00 2a 24  65 34 35 61 33 37 62 66  |stem".*$e45a37bf|
		00000070  2d 65 37 64 61 2d 34 31  32 39 2d 62 62 37 65 2d  |-e7da-4129-bb7e-|
		00000080  38 62 65 37 64 62 65 39  33 65 30 39 32 03 34 35  |8be7dbe93e092.45|
		00000090  33 38 00 42 08 08 92 d4  a7 bd 06 10 00 5a 13 0a  |38.B.........Z..|
		000000a0  07 6b 38 73 2d 61 70 70  12 08 6b 75 62 65 2d 64  |.k8s-app..kube-d|
		000000b0  6e 73 5a 1f 0a 11 70 6f  64 2d 74 65 6d 70 6c 61  |nsZ...pod-templa|
		000000c0  74 65 2d 68 61 73 68 12  0a 36 36 38 64 36 62 66  |te-hash..668d6b [truncated 206025 chars]
	 >
	I0210 11:59:47.263965   11096 system_pods.go:59] 8 kube-system pods found
	I0210 11:59:47.264035   11096 system_pods.go:61] "coredns-668d6bf9bc-w8rr9" [e45a37bf-e7da-4129-bb7e-8be7dbe93e09] Running
	I0210 11:59:47.264097   11096 system_pods.go:61] "etcd-multinode-032400" [34db146c-e09d-4959-8325-d4453dfcfd62] Running
	I0210 11:59:47.264097   11096 system_pods.go:61] "kindnet-c2mb8" [09de881b-fbc4-4a8f-b8d7-c46dd3f010ad] Running
	I0210 11:59:47.264097   11096 system_pods.go:61] "kube-apiserver-multinode-032400" [7a35472d-d7c0-4c7d-a5b1-e094370af1c2] Running
	I0210 11:59:47.264097   11096 system_pods.go:61] "kube-controller-manager-multinode-032400" [c3e50540-dd66-47a0-a433-622df59fb441] Running
	I0210 11:59:47.264153   11096 system_pods.go:61] "kube-proxy-rrh82" [9ad7f281-f022-4f3b-b206-39ce42713cf9] Running
	I0210 11:59:47.264153   11096 system_pods.go:61] "kube-scheduler-multinode-032400" [cfbcbf37-65ce-4b21-9f26-18dafc6e4480] Running
	I0210 11:59:47.264153   11096 system_pods.go:61] "storage-provisioner" [c5a7f602-d41a-4d9c-8fb3-5cf1bb41aca0] Running
	I0210 11:59:47.264153   11096 system_pods.go:74] duration metric: took 148.2275ms to wait for pod list to return data ...
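The "Waited for ... due to client-side throttling, not priority and fairness" lines come from client-go's local token-bucket limiter rather than from the server. A sketch of where those knobs live; QPS=5 and Burst=10 are client-go's long-standing defaults when the fields are left zero, and the kubeconfig path is the one from this report's environment:

    package main

    import (
        "fmt"

        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("",
            `C:\Users\jenkins.minikube5\minikube-integration\kubeconfig`)
        if err != nil {
            panic(err)
        }
        // Set explicitly for illustration; zero values fall back to the same.
        cfg.QPS = 5    // sustained requests per second before calls start queueing
        cfg.Burst = 10 // short bursts allowed above QPS
        fmt.Printf("client throttles above %.0f req/s (burst %d)\n", cfg.QPS, cfg.Burst)
    }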
	I0210 11:59:47.264153   11096 default_sa.go:34] waiting for default service account to be created ...
	I0210 11:59:47.264290   11096 type.go:204] "Request Body" body=""
	I0210 11:59:47.455606   11096 request.go:661] Waited for 191.314ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.136.201:8443/api/v1/namespaces/default/serviceaccounts
	I0210 11:59:47.455606   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/namespaces/default/serviceaccounts
	I0210 11:59:47.455606   11096 round_trippers.go:476] Request Headers:
	I0210 11:59:47.455606   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:59:47.455606   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:59:47.462179   11096 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0210 11:59:47.462179   11096 round_trippers.go:584] Response Headers:
	I0210 11:59:47.462179   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 11:59:47.462179   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 11:59:47.462179   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 11:59:47.462289   11096 round_trippers.go:587]     Content-Length: 128
	I0210 11:59:47.462289   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 11:59:47 GMT
	I0210 11:59:47.462289   11096 round_trippers.go:587]     Audit-Id: d08603a9-b0d8-4489-8c97-edc244907b2f
	I0210 11:59:47.462289   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 11:59:47.462289   11096 type.go:204] "Response Body" body=<
		00000000  6b 38 73 00 0a 18 0a 02  76 31 12 12 53 65 72 76  |k8s.....v1..Serv|
		00000010  69 63 65 41 63 63 6f 75  6e 74 4c 69 73 74 12 5c  |iceAccountList.\|
		00000020  0a 09 0a 00 12 03 34 35  39 1a 00 12 4f 0a 4d 0a  |......459...O.M.|
		00000030  07 64 65 66 61 75 6c 74  12 00 1a 07 64 65 66 61  |.default....defa|
		00000040  75 6c 74 22 00 2a 24 34  61 64 66 62 64 33 35 2d  |ult".*$4adfbd35-|
		00000050  66 38 62 36 2d 34 36 30  66 2d 38 38 65 39 2d 65  |f8b6-460f-88e9-e|
		00000060  37 34 63 34 36 62 30 32  66 30 65 32 03 33 33 36  |74c46b02f0e2.336|
		00000070  38 00 42 08 08 90 d4 a7  bd 06 10 00 1a 00 22 00  |8.B...........".|
	 >
	I0210 11:59:47.462447   11096 default_sa.go:45] found service account: "default"
	I0210 11:59:47.462447   11096 default_sa.go:55] duration metric: took 198.2918ms for default service account to be created ...
	I0210 11:59:47.462533   11096 system_pods.go:116] waiting for k8s-apps to be running ...
	I0210 11:59:47.462611   11096 type.go:204] "Request Body" body=""
	I0210 11:59:47.655450   11096 request.go:661] Waited for 192.8371ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.136.201:8443/api/v1/namespaces/kube-system/pods
	I0210 11:59:47.655450   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/namespaces/kube-system/pods
	I0210 11:59:47.655450   11096 round_trippers.go:476] Request Headers:
	I0210 11:59:47.655450   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:59:47.655450   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:59:47.659706   11096 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 11:59:47.660002   11096 round_trippers.go:584] Response Headers:
	I0210 11:59:47.660002   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 11:59:47.660002   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 11:59:47.660002   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 11:59:47 GMT
	I0210 11:59:47.660002   11096 round_trippers.go:587]     Audit-Id: d1999667-61ee-46a8-812d-1edc29255eb1
	I0210 11:59:47.660002   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 11:59:47.660002   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 11:59:47.661627   11096 type.go:204] "Response Body" body=<
		00000000  6b 38 73 00 0a 0d 0a 02  76 31 12 07 50 6f 64 4c  |k8s.....v1..PodL|
		00000010  69 73 74 12 ab c7 02 0a  09 0a 00 12 03 34 35 39  |ist..........459|
		00000020  1a 00 12 d2 27 0a ae 19  0a 18 63 6f 72 65 64 6e  |....'.....coredn|
		00000030  73 2d 36 36 38 64 36 62  66 39 62 63 2d 77 38 72  |s-668d6bf9bc-w8r|
		00000040  72 39 12 13 63 6f 72 65  64 6e 73 2d 36 36 38 64  |r9..coredns-668d|
		00000050  36 62 66 39 62 63 2d 1a  0b 6b 75 62 65 2d 73 79  |6bf9bc-..kube-sy|
		00000060  73 74 65 6d 22 00 2a 24  65 34 35 61 33 37 62 66  |stem".*$e45a37bf|
		00000070  2d 65 37 64 61 2d 34 31  32 39 2d 62 62 37 65 2d  |-e7da-4129-bb7e-|
		00000080  38 62 65 37 64 62 65 39  33 65 30 39 32 03 34 35  |8be7dbe93e092.45|
		00000090  33 38 00 42 08 08 92 d4  a7 bd 06 10 00 5a 13 0a  |38.B.........Z..|
		000000a0  07 6b 38 73 2d 61 70 70  12 08 6b 75 62 65 2d 64  |.k8s-app..kube-d|
		000000b0  6e 73 5a 1f 0a 11 70 6f  64 2d 74 65 6d 70 6c 61  |nsZ...pod-templa|
		000000c0  74 65 2d 68 61 73 68 12  0a 36 36 38 64 36 62 66  |te-hash..668d6b [truncated 206025 chars]
	 >
	I0210 11:59:47.662293   11096 system_pods.go:86] 8 kube-system pods found
	I0210 11:59:47.662293   11096 system_pods.go:89] "coredns-668d6bf9bc-w8rr9" [e45a37bf-e7da-4129-bb7e-8be7dbe93e09] Running
	I0210 11:59:47.662293   11096 system_pods.go:89] "etcd-multinode-032400" [34db146c-e09d-4959-8325-d4453dfcfd62] Running
	I0210 11:59:47.662369   11096 system_pods.go:89] "kindnet-c2mb8" [09de881b-fbc4-4a8f-b8d7-c46dd3f010ad] Running
	I0210 11:59:47.662369   11096 system_pods.go:89] "kube-apiserver-multinode-032400" [7a35472d-d7c0-4c7d-a5b1-e094370af1c2] Running
	I0210 11:59:47.662369   11096 system_pods.go:89] "kube-controller-manager-multinode-032400" [c3e50540-dd66-47a0-a433-622df59fb441] Running
	I0210 11:59:47.662369   11096 system_pods.go:89] "kube-proxy-rrh82" [9ad7f281-f022-4f3b-b206-39ce42713cf9] Running
	I0210 11:59:47.662369   11096 system_pods.go:89] "kube-scheduler-multinode-032400" [cfbcbf37-65ce-4b21-9f26-18dafc6e4480] Running
	I0210 11:59:47.662369   11096 system_pods.go:89] "storage-provisioner" [c5a7f602-d41a-4d9c-8fb3-5cf1bb41aca0] Running
	I0210 11:59:47.662369   11096 system_pods.go:126] duration metric: took 199.834ms to wait for k8s-apps to be running ...
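The k8s-apps wait above reduces to listing kube-system pods and comparing each pod's phase to Running, exactly the summary just printed. A client-go sketch of that check, offered as an illustration rather than minikube's own system_pods.go:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("",
            `C:\Users\jenkins.minikube5\minikube-integration\kubeconfig`)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Printf("%d kube-system pods found\n", len(pods.Items))
        for _, p := range pods.Items {
            running := p.Status.Phase == corev1.PodRunning
            fmt.Printf("%q [%s] Running=%v\n", p.Name, p.UID, running)
        }
    }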
	I0210 11:59:47.662369   11096 system_svc.go:44] waiting for kubelet service to be running ...
	I0210 11:59:47.671360   11096 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0210 11:59:47.698950   11096 system_svc.go:56] duration metric: took 36.4974ms WaitForService to wait for kubelet
	I0210 11:59:47.698950   11096 kubeadm.go:582] duration metric: took 34.6789838s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0210 11:59:47.698950   11096 node_conditions.go:102] verifying NodePressure condition ...
	I0210 11:59:47.698950   11096 type.go:204] "Request Body" body=""
	I0210 11:59:47.855849   11096 request.go:661] Waited for 156.8974ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.136.201:8443/api/v1/nodes
	I0210 11:59:47.856415   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes
	I0210 11:59:47.856415   11096 round_trippers.go:476] Request Headers:
	I0210 11:59:47.856415   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 11:59:47.856415   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 11:59:47.861527   11096 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 11:59:47.861527   11096 round_trippers.go:584] Response Headers:
	I0210 11:59:47.861527   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 11:59:47.861527   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 11:59:47.861527   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 11:59:47.861527   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 11:59:47 GMT
	I0210 11:59:47.861527   11096 round_trippers.go:587]     Audit-Id: a6d504e1-b6cd-4c81-a92a-1a96c71dc32b
	I0210 11:59:47.861527   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 11:59:47.861964   11096 type.go:204] "Response Body" body=<
		00000000  6b 38 73 00 0a 0e 0a 02  76 31 12 08 4e 6f 64 65  |k8s.....v1..Node|
		00000010  4c 69 73 74 12 e6 22 0a  09 0a 00 12 03 34 35 39  |List.."......459|
		00000020  1a 00 12 d8 22 0a 8a 11  0a 10 6d 75 6c 74 69 6e  |....".....multin|
		00000030  6f 64 65 2d 30 33 32 34  30 30 12 00 1a 00 22 00  |ode-032400....".|
		00000040  2a 24 61 30 38 30 31 35  65 66 2d 65 35 32 30 2d  |*$a08015ef-e520-|
		00000050  34 31 63 62 2d 61 65 61  30 2d 31 64 39 63 38 31  |41cb-aea0-1d9c81|
		00000060  65 30 31 62 32 36 32 03  34 33 32 38 00 42 08 08  |e01b262.4328.B..|
		00000070  86 d4 a7 bd 06 10 00 5a  20 0a 17 62 65 74 61 2e  |.......Z ..beta.|
		00000080  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 61 72  |kubernetes.io/ar|
		00000090  63 68 12 05 61 6d 64 36  34 5a 1e 0a 15 62 65 74  |ch..amd64Z...bet|
		000000a0  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		000000b0  6f 73 12 05 6c 69 6e 75  78 5a 1b 0a 12 6b 75 62  |os..linuxZ...kub|
		000000c0  65 72 6e 65 74 65 73 2e  69 6f 2f 61 72 63 68 12  |ernetes.io/arch [truncated 21160 chars]
	 >
	I0210 11:59:47.862174   11096 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0210 11:59:47.862174   11096 node_conditions.go:123] node cpu capacity is 2
	I0210 11:59:47.862296   11096 node_conditions.go:105] duration metric: took 163.3439ms to run NodePressure ...
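The NodePressure verification reads each node's capacity and pressure conditions from /api/v1/nodes; that is where the ephemeral-storage and CPU figures above come from. A client-go sketch under the same assumptions as the earlier ones:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("",
            `C:\Users\jenkins.minikube5\minikube-integration\kubeconfig`)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            fmt.Printf("node %s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
            for _, c := range n.Status.Conditions {
                if c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure {
                    fmt.Printf("  %s=%s\n", c.Type, c.Status)
                }
            }
        }
    }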
	I0210 11:59:47.862296   11096 start.go:241] waiting for startup goroutines ...
	I0210 11:59:47.862296   11096 start.go:246] waiting for cluster config update ...
	I0210 11:59:47.862356   11096 start.go:255] writing updated cluster config ...
	I0210 11:59:47.900723   11096 out.go:201] 
	I0210 11:59:47.949705   11096 config.go:182] Loaded profile config "ha-335100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0210 11:59:47.950047   11096 config.go:182] Loaded profile config "multinode-032400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0210 11:59:47.950125   11096 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-032400\config.json ...
	I0210 11:59:47.998930   11096 out.go:177] * Starting "multinode-032400-m02" worker node in "multinode-032400" cluster
	I0210 11:59:48.048103   11096 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime docker
	I0210 11:59:48.049056   11096 cache.go:56] Caching tarball of preloaded images
	I0210 11:59:48.049319   11096 preload.go:172] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0210 11:59:48.049319   11096 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on docker
	I0210 11:59:48.049847   11096 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-032400\config.json ...
	I0210 11:59:48.052315   11096 start.go:360] acquireMachinesLock for multinode-032400-m02: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0210 11:59:48.052539   11096 start.go:364] duration metric: took 157.5µs to acquireMachinesLock for "multinode-032400-m02"
	I0210 11:59:48.052697   11096 start.go:93] Provisioning new machine with config: &{Name:multinode-032400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:multinode-032400
Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.29.136.201 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountStrin
g:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0210 11:59:48.052843   11096 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0210 11:59:48.094982   11096 out.go:235] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0210 11:59:48.095867   11096 start.go:159] libmachine.API.Create for "multinode-032400" (driver="hyperv")
	I0210 11:59:48.095867   11096 client.go:168] LocalClient.Create starting
	I0210 11:59:48.096593   11096 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem
	I0210 11:59:48.096855   11096 main.go:141] libmachine: Decoding PEM data...
	I0210 11:59:48.096855   11096 main.go:141] libmachine: Parsing certificate...
	I0210 11:59:48.097066   11096 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem
	I0210 11:59:48.097267   11096 main.go:141] libmachine: Decoding PEM data...
	I0210 11:59:48.097267   11096 main.go:141] libmachine: Parsing certificate...
	I0210 11:59:48.097425   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0210 11:59:49.893750   11096 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0210 11:59:49.894056   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:59:49.894128   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0210 11:59:51.498567   11096 main.go:141] libmachine: [stdout =====>] : False
	
	I0210 11:59:51.498567   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:59:51.498644   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0210 11:59:52.931032   11096 main.go:141] libmachine: [stdout =====>] : True
	
	I0210 11:59:52.931032   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:59:52.931032   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0210 11:59:56.373594   11096 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0210 11:59:56.373594   11096 main.go:141] libmachine: [stderr =====>] : 
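Every "[executing ==>]" / "[stdout =====>]" pair in this log is libmachine shelling out to powershell.exe with -NoProfile -NonInteractive and echoing the captured streams. A minimal sketch of that pattern; the ps helper name is hypothetical, and the module query is copied from the log:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func ps(script string) (string, error) {
        // Output captures stdout only; stderr surfaces through *exec.ExitError.
        out, err := exec.Command(
            `C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
            "-NoProfile", "-NonInteractive", script).Output()
        return string(out), err
    }

    func main() {
        out, err := ps(`@(Get-Module -ListAvailable hyper-v).Name | Get-Unique`)
        if err != nil {
            panic(err)
        }
        fmt.Printf("[stdout =====>] : %s", out) // "Hyper-V" when the module is present
    }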
	I0210 11:59:56.375502   11096 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube5/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0210 11:59:56.774266   11096 main.go:141] libmachine: Creating SSH key...
	I0210 11:59:56.922992   11096 main.go:141] libmachine: Creating VM...
	I0210 11:59:56.922992   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0210 11:59:59.633582   11096 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0210 11:59:59.633582   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:59:59.634201   11096 main.go:141] libmachine: Using switch "Default Switch"
	I0210 11:59:59.634250   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0210 12:00:01.272288   11096 main.go:141] libmachine: [stdout =====>] : True
	
	I0210 12:00:01.272288   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:00:01.272288   11096 main.go:141] libmachine: Creating VHD
	I0210 12:00:01.272531   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-032400-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0210 12:00:04.923988   11096 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube5
	Path                    : C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-032400-m02\fixed
	                          .vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : D397997D-E32D-47C4-84E4-B506E6B04DE9
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0210 12:00:04.923988   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:00:04.923988   11096 main.go:141] libmachine: Writing magic tar header
	I0210 12:00:04.923988   11096 main.go:141] libmachine: Writing SSH key tar header
	I0210 12:00:04.936931   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-032400-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-032400-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0210 12:00:07.968740   11096 main.go:141] libmachine: [stdout =====>] : 
	I0210 12:00:07.968796   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:00:07.968843   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-032400-m02\disk.vhd' -SizeBytes 20000MB
	I0210 12:00:10.548844   11096 main.go:141] libmachine: [stdout =====>] : 
	I0210 12:00:10.548844   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:00:10.549262   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-032400-m02 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-032400-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0210 12:00:15.109313   11096 main.go:141] libmachine: [stdout =====>] : 
	Name                 State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----                 ----- ----------- ----------------- ------   ------             -------
	multinode-032400-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0210 12:00:15.109313   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:00:15.109313   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-032400-m02 -DynamicMemoryEnabled $false
	I0210 12:00:17.189806   11096 main.go:141] libmachine: [stdout =====>] : 
	I0210 12:00:17.189806   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:00:17.190062   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-032400-m02 -Count 2
	I0210 12:00:19.207021   11096 main.go:141] libmachine: [stdout =====>] : 
	I0210 12:00:19.207021   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:00:19.207766   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-032400-m02 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-032400-m02\boot2docker.iso'
	I0210 12:00:21.743096   11096 main.go:141] libmachine: [stdout =====>] : 
	I0210 12:00:21.743096   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:00:21.743416   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-032400-m02 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-032400-m02\disk.vhd'
	I0210 12:00:25.131371   11096 main.go:141] libmachine: [stdout =====>] : 
	I0210 12:00:25.131661   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:00:25.131661   11096 main.go:141] libmachine: Starting VM...
	I0210 12:00:25.131661   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-032400-m02
	I0210 12:00:28.922069   11096 main.go:141] libmachine: [stdout =====>] : 
	I0210 12:00:28.922069   11096 main.go:141] libmachine: [stderr =====>] : 
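The VM build-out between the VHD work above and first boot is a fixed cmdlet sequence: New-VM, Set-VMMemory (dynamic memory off), Set-VMProcessor, Set-VMDvdDrive (the boot2docker ISO), Add-VMHardDiskDrive, Start-VM. A sketch replaying that sequence with the same shell-out pattern; command text is copied from the log, error handling is simplified:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func run(script string) {
        out, err := exec.Command(
            `C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
            "-NoProfile", "-NonInteractive", script).CombinedOutput()
        fmt.Printf("[stdout =====>] : %s\n", out)
        if err != nil {
            panic(err)
        }
    }

    func main() {
        base := `C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-032400-m02`
        for _, s := range []string{
            `Hyper-V\New-VM multinode-032400-m02 -Path '` + base + `' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB`,
            `Hyper-V\Set-VMMemory -VMName multinode-032400-m02 -DynamicMemoryEnabled $false`,
            `Hyper-V\Set-VMProcessor multinode-032400-m02 -Count 2`,
            `Hyper-V\Set-VMDvdDrive -VMName multinode-032400-m02 -Path '` + base + `\boot2docker.iso'`,
            `Hyper-V\Add-VMHardDiskDrive -VMName multinode-032400-m02 -Path '` + base + `\disk.vhd'`,
            `Hyper-V\Start-VM multinode-032400-m02`,
        } {
            run(s)
        }
    }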
	I0210 12:00:28.922069   11096 main.go:141] libmachine: Waiting for host to start...
	I0210 12:00:28.922069   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400-m02 ).state
	I0210 12:00:31.009325   11096 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:00:31.009977   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:00:31.009977   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 12:00:33.301821   11096 main.go:141] libmachine: [stdout =====>] : 
	I0210 12:00:33.301821   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:00:34.302986   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400-m02 ).state
	I0210 12:00:36.292296   11096 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:00:36.292296   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:00:36.293248   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 12:00:38.607813   11096 main.go:141] libmachine: [stdout =====>] : 
	I0210 12:00:38.607813   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:00:39.608141   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400-m02 ).state
	I0210 12:00:41.619746   11096 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:00:41.619746   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:00:41.619746   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 12:00:43.920893   11096 main.go:141] libmachine: [stdout =====>] : 
	I0210 12:00:43.920893   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:00:44.921540   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400-m02 ).state
	I0210 12:00:46.942215   11096 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:00:46.942462   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:00:46.942462   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 12:00:49.240434   11096 main.go:141] libmachine: [stdout =====>] : 
	I0210 12:00:49.240434   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:00:50.241472   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400-m02 ).state
	I0210 12:00:52.250967   11096 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:00:52.251643   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:00:52.251643   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 12:00:54.766851   11096 main.go:141] libmachine: [stdout =====>] : 172.29.143.51
	
	I0210 12:00:54.766851   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:00:54.767166   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400-m02 ).state
	I0210 12:00:56.756583   11096 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:00:56.756583   11096 main.go:141] libmachine: [stderr =====>] : 
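"Waiting for host to start..." above is a poll loop: query the VM state, then the first NIC's first IP address, and retry while the address is empty; it stays blank until the guest's DHCP lease lands, which happens at 172.29.143.51 roughly 26 seconds after Start-VM. A sketch of the loop; the one-second pause is inferred from the log timestamps:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    func ps(script string) string {
        out, _ := exec.Command(
            `C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
            "-NoProfile", "-NonInteractive", script).Output()
        return strings.TrimSpace(string(out))
    }

    func main() {
        for {
            state := ps(`( Hyper-V\Get-VM multinode-032400-m02 ).state`)
            ip := ps(`(( Hyper-V\Get-VM multinode-032400-m02 ).networkadapters[0]).ipaddresses[0]`)
            if state == "Running" && ip != "" {
                fmt.Printf("host up at %s\n", ip)
                return
            }
            time.Sleep(time.Second)
        }
    }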
	I0210 12:00:56.756583   11096 machine.go:93] provisionDockerMachine start ...
	I0210 12:00:56.757057   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400-m02 ).state
	I0210 12:00:58.767925   11096 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:00:58.767925   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:00:58.768774   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 12:01:01.146324   11096 main.go:141] libmachine: [stdout =====>] : 172.29.143.51
	
	I0210 12:01:01.146324   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:01:01.150238   11096 main.go:141] libmachine: Using SSH client type: native
	I0210 12:01:01.165576   11096 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xde6ba0] 0xde96e0 <nil>  [] 0s} 172.29.143.51 22 <nil> <nil>}
	I0210 12:01:01.165576   11096 main.go:141] libmachine: About to run SSH command:
	hostname
	I0210 12:01:01.311255   11096 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0210 12:01:01.311255   11096 buildroot.go:166] provisioning hostname "multinode-032400-m02"
	I0210 12:01:01.311359   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400-m02 ).state
	I0210 12:01:03.244518   11096 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:01:03.244518   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:01:03.244518   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 12:01:05.575325   11096 main.go:141] libmachine: [stdout =====>] : 172.29.143.51
	
	I0210 12:01:05.575517   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:01:05.579378   11096 main.go:141] libmachine: Using SSH client type: native
	I0210 12:01:05.579916   11096 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xde6ba0] 0xde96e0 <nil>  [] 0s} 172.29.143.51 22 <nil> <nil>}
	I0210 12:01:05.579916   11096 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-032400-m02 && echo "multinode-032400-m02" | sudo tee /etc/hostname
	I0210 12:01:05.740401   11096 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-032400-m02
	
	I0210 12:01:05.740401   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400-m02 ).state
	I0210 12:01:07.682265   11096 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:01:07.682348   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:01:07.682348   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 12:01:10.053925   11096 main.go:141] libmachine: [stdout =====>] : 172.29.143.51
	
	I0210 12:01:10.053925   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:01:10.057793   11096 main.go:141] libmachine: Using SSH client type: native
	I0210 12:01:10.057856   11096 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xde6ba0] 0xde96e0 <nil>  [] 0s} 172.29.143.51 22 <nil> <nil>}
	I0210 12:01:10.057856   11096 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-032400-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-032400-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-032400-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0210 12:01:10.210391   11096 main.go:141] libmachine: SSH cmd err, output: <nil>: 
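With the address known, provisioning switches to SSH as the docker user, authenticated with the id_rsa key generated earlier, and runs the hostname and /etc/hosts commands shown above. A sketch using golang.org/x/crypto/ssh; paths and address are copied from the log, and InsecureIgnoreHostKey reflects that a freshly created VM's host key cannot be pre-verified:

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile(`C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-032400-m02\id_rsa`)
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        client, err := ssh.Dial("tcp", "172.29.143.51:22", &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(),
        })
        if err != nil {
            panic(err)
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer sess.Close()
        out, err := sess.CombinedOutput(`sudo hostname multinode-032400-m02 && echo "multinode-032400-m02" | sudo tee /etc/hostname`)
        if err != nil {
            panic(err)
        }
        fmt.Printf("SSH cmd output: %s", out)
    }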
	I0210 12:01:10.210478   11096 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0210 12:01:10.210529   11096 buildroot.go:174] setting up certificates
	I0210 12:01:10.210529   11096 provision.go:84] configureAuth start
	I0210 12:01:10.210620   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400-m02 ).state
	I0210 12:01:12.198786   11096 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:01:12.198786   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:01:12.198862   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 12:01:14.630405   11096 main.go:141] libmachine: [stdout =====>] : 172.29.143.51
	
	I0210 12:01:14.630405   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:01:14.630405   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400-m02 ).state
	I0210 12:01:16.683471   11096 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:01:16.683471   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:01:16.683556   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 12:01:19.052878   11096 main.go:141] libmachine: [stdout =====>] : 172.29.143.51
	
	I0210 12:01:19.053365   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:01:19.053365   11096 provision.go:143] copyHostCerts
	I0210 12:01:19.053525   11096 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0210 12:01:19.053760   11096 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0210 12:01:19.053833   11096 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0210 12:01:19.054202   11096 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0210 12:01:19.055076   11096 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0210 12:01:19.055251   11096 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0210 12:01:19.055251   11096 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0210 12:01:19.055607   11096 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0210 12:01:19.056414   11096 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0210 12:01:19.056517   11096 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0210 12:01:19.056640   11096 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0210 12:01:19.056933   11096 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1675 bytes)
	I0210 12:01:19.057906   11096 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-032400-m02 san=[127.0.0.1 172.29.143.51 localhost minikube multinode-032400-m02]
	I0210 12:01:19.287894   11096 provision.go:177] copyRemoteCerts
	I0210 12:01:19.295572   11096 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0210 12:01:19.295572   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400-m02 ).state
	I0210 12:01:21.359935   11096 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:01:21.359935   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:01:21.360011   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 12:01:23.705636   11096 main.go:141] libmachine: [stdout =====>] : 172.29.143.51
	
	I0210 12:01:23.705636   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:01:23.705636   11096 sshutil.go:53] new ssh client: &{IP:172.29.143.51 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-032400-m02\id_rsa Username:docker}
	I0210 12:01:23.813340   11096 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.5177182s)
	I0210 12:01:23.813340   11096 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0210 12:01:23.814077   11096 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0210 12:01:23.860703   11096 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0210 12:01:23.860849   11096 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0210 12:01:23.906796   11096 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0210 12:01:23.907324   11096 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0210 12:01:23.951627   11096 provision.go:87] duration metric: took 13.740946s to configureAuth
	I0210 12:01:23.951627   11096 buildroot.go:189] setting minikube options for container-runtime
	I0210 12:01:23.952630   11096 config.go:182] Loaded profile config "multinode-032400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0210 12:01:23.952630   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400-m02 ).state
	I0210 12:01:25.931621   11096 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:01:25.931621   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:01:25.931699   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 12:01:28.321163   11096 main.go:141] libmachine: [stdout =====>] : 172.29.143.51
	
	I0210 12:01:28.321163   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:01:28.323232   11096 main.go:141] libmachine: Using SSH client type: native
	I0210 12:01:28.323232   11096 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xde6ba0] 0xde96e0 <nil>  [] 0s} 172.29.143.51 22 <nil> <nil>}
	I0210 12:01:28.323232   11096 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0210 12:01:28.464080   11096 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0210 12:01:28.464080   11096 buildroot.go:70] root file system type: tmpfs
	I0210 12:01:28.464252   11096 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0210 12:01:28.464351   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400-m02 ).state
	I0210 12:01:30.457041   11096 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:01:30.457118   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:01:30.457189   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 12:01:32.886886   11096 main.go:141] libmachine: [stdout =====>] : 172.29.143.51
	
	I0210 12:01:32.886886   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:01:32.890648   11096 main.go:141] libmachine: Using SSH client type: native
	I0210 12:01:32.890846   11096 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xde6ba0] 0xde96e0 <nil>  [] 0s} 172.29.143.51 22 <nil> <nil>}
	I0210 12:01:32.890846   11096 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.29.136.201"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0210 12:01:33.055150   11096 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.29.136.201
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0210 12:01:33.055150   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400-m02 ).state
	I0210 12:01:35.062421   11096 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:01:35.062421   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:01:35.062828   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 12:01:37.463885   11096 main.go:141] libmachine: [stdout =====>] : 172.29.143.51
	
	I0210 12:01:37.464789   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:01:37.468825   11096 main.go:141] libmachine: Using SSH client type: native
	I0210 12:01:37.469359   11096 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xde6ba0] 0xde96e0 <nil>  [] 0s} 172.29.143.51 22 <nil> <nil>}
	I0210 12:01:37.469359   11096 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0210 12:01:40.335526   11096 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0210 12:01:40.336102   11096 machine.go:96] duration metric: took 43.5784637s to provisionDockerMachine
	I0210 12:01:40.336102   11096 client.go:171] duration metric: took 1m52.2390002s to LocalClient.Create
	I0210 12:01:40.336171   11096 start.go:167] duration metric: took 1m52.239069s to libmachine.API.Create "multinode-032400"
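The command that finished the unit install above is an idempotency guard: diff exits zero when the rendered docker.service matches what is already on disk, so the mv / daemon-reload / enable / restart branch runs only on change; here diff failed with "can't stat" because no unit existed yet, which is why the service was installed and enabled fresh. A sketch assembling the same guard:

    package main

    import "fmt"

    func main() {
        const unit = "/lib/systemd/system/docker.service"
        // Restart docker only when the newly rendered unit differs (or when
        // no unit exists yet and diff itself fails).
        cmd := fmt.Sprintf(
            "sudo diff -u %[1]s %[1]s.new || "+
                "{ sudo mv %[1]s.new %[1]s; "+
                "sudo systemctl -f daemon-reload && "+
                "sudo systemctl -f enable docker && "+
                "sudo systemctl -f restart docker; }", unit)
        fmt.Println(cmd)
    }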
	I0210 12:01:40.336171   11096 start.go:293] postStartSetup for "multinode-032400-m02" (driver="hyperv")
	I0210 12:01:40.336171   11096 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0210 12:01:40.344737   11096 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0210 12:01:40.344737   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400-m02 ).state
	I0210 12:01:42.291876   11096 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:01:42.291876   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:01:42.291876   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 12:01:44.689566   11096 main.go:141] libmachine: [stdout =====>] : 172.29.143.51
	
	I0210 12:01:44.689566   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:01:44.689566   11096 sshutil.go:53] new ssh client: &{IP:172.29.143.51 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-032400-m02\id_rsa Username:docker}
	I0210 12:01:44.796422   11096 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.4516367s)
	I0210 12:01:44.804584   11096 ssh_runner.go:195] Run: cat /etc/os-release
	I0210 12:01:44.812215   11096 command_runner.go:130] > NAME=Buildroot
	I0210 12:01:44.812215   11096 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0210 12:01:44.812215   11096 command_runner.go:130] > ID=buildroot
	I0210 12:01:44.812215   11096 command_runner.go:130] > VERSION_ID=2023.02.9
	I0210 12:01:44.812215   11096 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0210 12:01:44.812215   11096 info.go:137] Remote host: Buildroot 2023.02.9
	I0210 12:01:44.812215   11096 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0210 12:01:44.812215   11096 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0210 12:01:44.813344   11096 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\117642.pem -> 117642.pem in /etc/ssl/certs
	I0210 12:01:44.813344   11096 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\117642.pem -> /etc/ssl/certs/117642.pem
	I0210 12:01:44.821324   11096 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0210 12:01:44.838928   11096 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\117642.pem --> /etc/ssl/certs/117642.pem (1708 bytes)
	I0210 12:01:44.885671   11096 start.go:296] duration metric: took 4.5494502s for postStartSetup
	I0210 12:01:44.888280   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400-m02 ).state
	I0210 12:01:46.885983   11096 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:01:46.886061   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:01:46.886061   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 12:01:49.286303   11096 main.go:141] libmachine: [stdout =====>] : 172.29.143.51
	
	I0210 12:01:49.287108   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:01:49.287161   11096 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-032400\config.json ...
	I0210 12:01:49.289275   11096 start.go:128] duration metric: took 2m1.2350082s to createHost
	I0210 12:01:49.289275   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400-m02 ).state
	I0210 12:01:51.243432   11096 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:01:51.244341   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:01:51.244341   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 12:01:53.694199   11096 main.go:141] libmachine: [stdout =====>] : 172.29.143.51
	
	I0210 12:01:53.694199   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:01:53.698373   11096 main.go:141] libmachine: Using SSH client type: native
	I0210 12:01:53.698804   11096 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xde6ba0] 0xde96e0 <nil>  [] 0s} 172.29.143.51 22 <nil> <nil>}
	I0210 12:01:53.698804   11096 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0210 12:01:53.847576   11096 main.go:141] libmachine: SSH cmd err, output: <nil>: 1739188913.861804757
	
	I0210 12:01:53.847576   11096 fix.go:216] guest clock: 1739188913.861804757
	I0210 12:01:53.847576   11096 fix.go:229] Guest: 2025-02-10 12:01:53.861804757 +0000 UTC Remote: 2025-02-10 12:01:49.2892752 +0000 UTC m=+346.102673401 (delta=4.572529557s)
	I0210 12:01:53.847576   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400-m02 ).state
	I0210 12:01:55.814178   11096 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:01:55.814178   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:01:55.814897   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 12:01:58.227787   11096 main.go:141] libmachine: [stdout =====>] : 172.29.143.51
	
	I0210 12:01:58.227787   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:01:58.231757   11096 main.go:141] libmachine: Using SSH client type: native
	I0210 12:01:58.232350   11096 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xde6ba0] 0xde96e0 <nil>  [] 0s} 172.29.143.51 22 <nil> <nil>}
	I0210 12:01:58.232350   11096 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1739188913
	I0210 12:01:58.387015   11096 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Feb 10 12:01:53 UTC 2025
	
	I0210 12:01:58.387015   11096 fix.go:236] clock set: Mon Feb 10 12:01:53 UTC 2025
	 (err=<nil>)
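
The three steps above are minikube's guest-clock fix: read `date +%s.%N` over SSH, compare it with the host-side timestamp, and rewrite the guest clock with `sudo date -s @<seconds>` when they disagree (here by ~4.6s). A minimal, self-contained sketch of that comparison using the values from this log; the SSH round trip is elided and `parseGuestClock` assumes the fractional part of `%N` is nine digits:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns the output of `date +%s.%N` into a time.Time.
// Assumes the fractional part, when present, is nine digits (nanoseconds).
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	// Values from the log: guest clock vs. the host-side "Remote" timestamp.
	guest, err := parseGuestClock("1739188913.861804757")
	if err != nil {
		panic(err)
	}
	remote := time.Date(2025, 2, 10, 12, 1, 49, 289275200, time.UTC)
	delta := guest.Sub(remote)
	fmt.Printf("guest: %v remote: %v delta: %v\n", guest.UTC(), remote, delta)
	if delta > time.Second || delta < -time.Second {
		// This is the point where minikube runs `sudo date -s @<seconds>`.
		fmt.Printf("would run: sudo date -s @%d\n", guest.Unix())
	}
}
```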
	I0210 12:01:58.387015   11096 start.go:83] releasing machines lock for "multinode-032400-m02", held for 2m10.3330419s
	I0210 12:01:58.387275   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400-m02 ).state
	I0210 12:02:00.370266   11096 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:02:00.370266   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:02:00.370266   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 12:02:02.734872   11096 main.go:141] libmachine: [stdout =====>] : 172.29.143.51
	
	I0210 12:02:02.734872   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:02:02.737547   11096 out.go:177] * Found network options:
	I0210 12:02:02.739768   11096 out.go:177]   - NO_PROXY=172.29.136.201
	W0210 12:02:02.742251   11096 proxy.go:119] fail to check proxy env: Error ip not in block
	I0210 12:02:02.744136   11096 out.go:177]   - NO_PROXY=172.29.136.201
	W0210 12:02:02.746550   11096 proxy.go:119] fail to check proxy env: Error ip not in block
	W0210 12:02:02.748346   11096 proxy.go:119] fail to check proxy env: Error ip not in block
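
The repeated `fail to check proxy env: Error ip not in block` warnings come from testing whether an IP falls inside any NO_PROXY entry (bare IP or CIDR); here NO_PROXY only carries the control-plane address, so the check fails for the worker's 172.29.143.51. A sketch of that containment test, assuming NO_PROXY is a comma-separated list (this is not minikube's actual proxy.go code):

```go
package main

import (
	"fmt"
	"net"
	"strings"
)

// ipInNoProxy reports whether ip is covered by a comma-separated NO_PROXY
// value containing bare IPs and/or CIDR blocks.
func ipInNoProxy(ip net.IP, noProxy string) bool {
	for _, entry := range strings.Split(noProxy, ",") {
		entry = strings.TrimSpace(entry)
		if entry == "" {
			continue
		}
		if _, block, err := net.ParseCIDR(entry); err == nil {
			if block.Contains(ip) {
				return true
			}
			continue
		}
		if parsed := net.ParseIP(entry); parsed != nil && parsed.Equal(ip) {
			return true
		}
	}
	return false
}

func main() {
	ip := net.ParseIP("172.29.143.51")
	// NO_PROXY holds only the control-plane IP, so this prints false,
	// which is what the "ip not in block" warnings above report.
	fmt.Println(ipInNoProxy(ip, "172.29.136.201"))
}
```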
	I0210 12:02:02.749854   11096 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0210 12:02:02.749854   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400-m02 ).state
	I0210 12:02:02.756847   11096 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0210 12:02:02.756847   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400-m02 ).state
	I0210 12:02:04.761924   11096 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:02:04.761924   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:02:04.761995   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 12:02:04.762747   11096 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:02:04.762747   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:02:04.762843   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 12:02:07.165895   11096 main.go:141] libmachine: [stdout =====>] : 172.29.143.51
	
	I0210 12:02:07.165895   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:02:07.167154   11096 sshutil.go:53] new ssh client: &{IP:172.29.143.51 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-032400-m02\id_rsa Username:docker}
	I0210 12:02:07.198515   11096 main.go:141] libmachine: [stdout =====>] : 172.29.143.51
	
	I0210 12:02:07.198515   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:02:07.199515   11096 sshutil.go:53] new ssh client: &{IP:172.29.143.51 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-032400-m02\id_rsa Username:docker}
	I0210 12:02:07.260697   11096 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0210 12:02:07.261681   11096 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.5047848s)
	W0210 12:02:07.261681   11096 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0210 12:02:07.268678   11096 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0210 12:02:07.273983   11096 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	I0210 12:02:07.274064   11096 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.5241604s)
	W0210 12:02:07.274135   11096 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0210 12:02:07.308824   11096 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0210 12:02:07.308966   11096 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
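
The `find ... -exec mv {} {}.mk_disabled` step above parks any bridge/podman CNI configs so they cannot conflict with the CNI minikube configures itself. A rough Go equivalent of that rename pass (a sketch, not minikube's implementation):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableCNIConfs renames bridge/podman CNI configs out of the way, mirroring
// the `find /etc/cni/net.d ... -exec mv {} {}.mk_disabled` step above.
func disableCNIConfs(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return disabled, err
			}
			disabled = append(disabled, src)
		}
	}
	return disabled, nil
}

func main() {
	disabled, err := disableCNIConfs("/etc/cni/net.d")
	if err != nil {
		fmt.Println("error:", err)
	}
	fmt.Println("disabled:", disabled) // e.g. 87-podman-bridge.conflist above
}
```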
	I0210 12:02:07.309030   11096 start.go:495] detecting cgroup driver to use...
	I0210 12:02:07.309238   11096 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0210 12:02:07.345994   11096 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0210 12:02:07.354909   11096 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0210 12:02:07.382136   11096 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	W0210 12:02:07.387132   11096 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0210 12:02:07.387775   11096 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0210 12:02:07.405831   11096 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0210 12:02:07.414147   11096 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0210 12:02:07.441075   11096 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0210 12:02:07.468066   11096 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0210 12:02:07.495062   11096 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0210 12:02:07.523063   11096 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0210 12:02:07.551063   11096 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0210 12:02:07.578842   11096 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0210 12:02:07.608374   11096 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
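
The run of `sed -i` calls above edits `/etc/containerd/config.toml` in place: pinning the pause image, forcing `SystemdCgroup = false` so containerd uses cgroupfs, and pointing `conf_dir` at `/etc/cni/net.d`. The same rewrites expressed as Go regexps over a toy config (illustrative only):

```go
package main

import (
	"fmt"
	"regexp"
)

// Each rewrite mirrors one of the `sed -i -r` calls above: pin the sandbox
// image, force cgroupfs by disabling SystemdCgroup, and set the CNI conf dir.
var rewrites = []struct {
	re   *regexp.Regexp
	repl string
}{
	{regexp.MustCompile(`(?m)^(\s*)sandbox_image = .*$`), `${1}sandbox_image = "registry.k8s.io/pause:3.10"`},
	{regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`), `${1}SystemdCgroup = false`},
	{regexp.MustCompile(`(?m)^(\s*)conf_dir = .*$`), `${1}conf_dir = "/etc/cni/net.d"`},
}

func main() {
	conf := `[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.9"
  SystemdCgroup = true
  conf_dir = "/etc/cni/custom"
`
	for _, r := range rewrites {
		conf = r.re.ReplaceAllString(conf, r.repl)
	}
	fmt.Print(conf) // a real implementation writes this back via sudo tee
}
```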
	I0210 12:02:07.637326   11096 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0210 12:02:07.655327   11096 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0210 12:02:07.656373   11096 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0210 12:02:07.666329   11096 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0210 12:02:07.708941   11096 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
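
`sysctl net.bridge.bridge-nf-call-iptables` failed above because the `br_netfilter` module was not loaded yet, so minikube loads it and then enables IPv4 forwarding. A sketch of those two operations (requires root; not the ssh_runner code itself):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Load br_netfilter so /proc/sys/net/bridge/* appears; this is why the
	// earlier sysctl probe returned "No such file or directory".
	if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
		fmt.Printf("modprobe br_netfilter: %v: %s\n", err, out)
		return
	}
	// Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward` above.
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0644); err != nil {
		fmt.Println("enable ip_forward:", err)
	}
}
```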
	I0210 12:02:07.739905   11096 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 12:02:07.950265   11096 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0210 12:02:07.985227   11096 start.go:495] detecting cgroup driver to use...
	I0210 12:02:07.994782   11096 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0210 12:02:08.016862   11096 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0210 12:02:08.016862   11096 command_runner.go:130] > [Unit]
	I0210 12:02:08.016862   11096 command_runner.go:130] > Description=Docker Application Container Engine
	I0210 12:02:08.016862   11096 command_runner.go:130] > Documentation=https://docs.docker.com
	I0210 12:02:08.016862   11096 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0210 12:02:08.016862   11096 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0210 12:02:08.016862   11096 command_runner.go:130] > StartLimitBurst=3
	I0210 12:02:08.016862   11096 command_runner.go:130] > StartLimitIntervalSec=60
	I0210 12:02:08.016862   11096 command_runner.go:130] > [Service]
	I0210 12:02:08.016862   11096 command_runner.go:130] > Type=notify
	I0210 12:02:08.016862   11096 command_runner.go:130] > Restart=on-failure
	I0210 12:02:08.016862   11096 command_runner.go:130] > Environment=NO_PROXY=172.29.136.201
	I0210 12:02:08.016862   11096 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0210 12:02:08.016862   11096 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0210 12:02:08.016862   11096 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0210 12:02:08.016862   11096 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0210 12:02:08.016862   11096 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0210 12:02:08.016862   11096 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0210 12:02:08.016862   11096 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0210 12:02:08.016862   11096 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0210 12:02:08.016862   11096 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0210 12:02:08.016862   11096 command_runner.go:130] > ExecStart=
	I0210 12:02:08.016862   11096 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0210 12:02:08.016862   11096 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0210 12:02:08.016862   11096 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0210 12:02:08.016862   11096 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0210 12:02:08.016862   11096 command_runner.go:130] > LimitNOFILE=infinity
	I0210 12:02:08.016862   11096 command_runner.go:130] > LimitNPROC=infinity
	I0210 12:02:08.016862   11096 command_runner.go:130] > LimitCORE=infinity
	I0210 12:02:08.016862   11096 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0210 12:02:08.016862   11096 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0210 12:02:08.016862   11096 command_runner.go:130] > TasksMax=infinity
	I0210 12:02:08.016862   11096 command_runner.go:130] > TimeoutStartSec=0
	I0210 12:02:08.016862   11096 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0210 12:02:08.016862   11096 command_runner.go:130] > Delegate=yes
	I0210 12:02:08.016862   11096 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0210 12:02:08.016862   11096 command_runner.go:130] > KillMode=process
	I0210 12:02:08.016862   11096 command_runner.go:130] > [Install]
	I0210 12:02:08.016862   11096 command_runner.go:130] > WantedBy=multi-user.target
	I0210 12:02:08.024805   11096 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0210 12:02:08.053040   11096 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0210 12:02:08.153122   11096 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0210 12:02:08.185116   11096 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0210 12:02:08.219123   11096 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0210 12:02:08.413684   11096 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0210 12:02:08.437415   11096 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0210 12:02:08.471748   11096 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0210 12:02:08.479911   11096 ssh_runner.go:195] Run: which cri-dockerd
	I0210 12:02:08.486605   11096 command_runner.go:130] > /usr/bin/cri-dockerd
	I0210 12:02:08.494761   11096 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0210 12:02:08.513319   11096 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0210 12:02:08.555166   11096 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0210 12:02:08.752904   11096 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0210 12:02:08.948339   11096 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0210 12:02:08.948339   11096 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0210 12:02:08.993495   11096 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 12:02:09.189271   11096 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0210 12:02:12.798219   11096 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.6089081s)
	I0210 12:02:12.806205   11096 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0210 12:02:12.837625   11096 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0210 12:02:12.868675   11096 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0210 12:02:13.060600   11096 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0210 12:02:13.246737   11096 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 12:02:13.433444   11096 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0210 12:02:13.475082   11096 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0210 12:02:13.507082   11096 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 12:02:13.693070   11096 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
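
The sequence above unmasks and enables `cri-docker.socket`, reloads systemd, and restarts the socket and service so cri-dockerd picks up the freshly written `10-cni.conf` drop-in. The same ordering as a small Go driver (a sketch; minikube actually runs these through its SSH runner):

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same order as the log: unmask/enable the socket, reload systemd, then
	// restart the socket and the service.
	steps := [][]string{
		{"systemctl", "unmask", "cri-docker.socket"},
		{"systemctl", "enable", "cri-docker.socket"},
		{"systemctl", "daemon-reload"},
		{"systemctl", "restart", "cri-docker.socket"},
		{"systemctl", "daemon-reload"},
		{"systemctl", "restart", "cri-docker.service"},
	}
	for _, s := range steps {
		if out, err := exec.Command("sudo", s...).CombinedOutput(); err != nil {
			fmt.Printf("%v failed: %v: %s\n", s, err, out)
			return
		}
	}
	fmt.Println("cri-dockerd restarted; /var/run/cri-dockerd.sock should appear")
}
```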
	I0210 12:02:13.802189   11096 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0210 12:02:13.810047   11096 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0210 12:02:13.820928   11096 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0210 12:02:13.821050   11096 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0210 12:02:13.821050   11096 command_runner.go:130] > Device: 0,22	Inode: 871         Links: 1
	I0210 12:02:13.821050   11096 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0210 12:02:13.821124   11096 command_runner.go:130] > Access: 2025-02-10 12:02:13.733884748 +0000
	I0210 12:02:13.821157   11096 command_runner.go:130] > Modify: 2025-02-10 12:02:13.733884748 +0000
	I0210 12:02:13.821157   11096 command_runner.go:130] > Change: 2025-02-10 12:02:13.737884769 +0000
	I0210 12:02:13.821157   11096 command_runner.go:130] >  Birth: -
	I0210 12:02:13.821157   11096 start.go:563] Will wait 60s for crictl version
	I0210 12:02:13.828302   11096 ssh_runner.go:195] Run: which crictl
	I0210 12:02:13.834680   11096 command_runner.go:130] > /usr/bin/crictl
	I0210 12:02:13.841829   11096 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0210 12:02:13.892684   11096 command_runner.go:130] > Version:  0.1.0
	I0210 12:02:13.892684   11096 command_runner.go:130] > RuntimeName:  docker
	I0210 12:02:13.892754   11096 command_runner.go:130] > RuntimeVersion:  27.4.0
	I0210 12:02:13.892754   11096 command_runner.go:130] > RuntimeApiVersion:  v1
	I0210 12:02:13.895053   11096 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.4.0
	RuntimeApiVersion:  v1
	I0210 12:02:13.902353   11096 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0210 12:02:13.934920   11096 command_runner.go:130] > 27.4.0
	I0210 12:02:13.941907   11096 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0210 12:02:13.973910   11096 command_runner.go:130] > 27.4.0
	I0210 12:02:13.979760   11096 out.go:235] * Preparing Kubernetes v1.32.1 on Docker 27.4.0 ...
	I0210 12:02:13.981984   11096 out.go:177]   - env NO_PROXY=172.29.136.201
	I0210 12:02:13.984325   11096 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0210 12:02:13.988426   11096 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0210 12:02:13.988426   11096 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0210 12:02:13.988426   11096 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0210 12:02:13.988426   11096 ip.go:211] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:8b:81:3d Flags:up|broadcast|multicast|running}
	I0210 12:02:13.991319   11096 ip.go:214] interface addr: fe80::b840:883f:e0df:bdfd/64
	I0210 12:02:13.991319   11096 ip.go:214] interface addr: 172.29.128.1/20
	I0210 12:02:13.999853   11096 ssh_runner.go:195] Run: grep 172.29.128.1	host.minikube.internal$ /etc/hosts
	I0210 12:02:14.007039   11096 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.29.128.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
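
The `grep -v ... ; echo ... > /tmp/h.$$; sudo cp` pipeline above is an idempotent /etc/hosts update: drop any stale `host.minikube.internal` line, then append the current mapping. A Go sketch of the same upsert (the path and tab-separated format are taken from the log):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHost rewrites an /etc/hosts-style file so that exactly one line maps
// the given name, mirroring the grep -v / echo / cp pipeline above.
func upsertHost(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop any stale mapping for this name
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := upsertHost("/etc/hosts", "172.29.128.1", "host.minikube.internal"); err != nil {
		fmt.Println("error:", err)
	}
}
```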
	I0210 12:02:14.029606   11096 mustload.go:65] Loading cluster: multinode-032400
	I0210 12:02:14.030267   11096 config.go:182] Loaded profile config "multinode-032400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0210 12:02:14.030766   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400 ).state
	I0210 12:02:15.991673   11096 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:02:15.991858   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:02:15.991858   11096 host.go:66] Checking if "multinode-032400" exists ...
	I0210 12:02:15.992544   11096 certs.go:68] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-032400 for IP: 172.29.143.51
	I0210 12:02:15.992621   11096 certs.go:194] generating shared ca certs ...
	I0210 12:02:15.992621   11096 certs.go:226] acquiring lock for ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 12:02:15.993144   11096 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0210 12:02:15.993382   11096 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0210 12:02:15.993382   11096 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0210 12:02:15.993382   11096 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0210 12:02:15.993950   11096 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0210 12:02:15.994055   11096 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0210 12:02:15.994382   11096 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\11764.pem (1338 bytes)
	W0210 12:02:15.994638   11096 certs.go:480] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\11764_empty.pem, impossibly tiny 0 bytes
	I0210 12:02:15.994732   11096 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0210 12:02:15.994888   11096 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0210 12:02:15.995078   11096 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0210 12:02:15.995266   11096 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0210 12:02:15.995810   11096 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\117642.pem (1708 bytes)
	I0210 12:02:15.995949   11096 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\117642.pem -> /usr/share/ca-certificates/117642.pem
	I0210 12:02:15.996041   11096 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0210 12:02:15.996180   11096 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\11764.pem -> /usr/share/ca-certificates/11764.pem
	I0210 12:02:15.996311   11096 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0210 12:02:16.049661   11096 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0210 12:02:16.101604   11096 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0210 12:02:16.153445   11096 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0210 12:02:16.206752   11096 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\117642.pem --> /usr/share/ca-certificates/117642.pem (1708 bytes)
	I0210 12:02:16.252616   11096 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0210 12:02:16.297379   11096 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\11764.pem --> /usr/share/ca-certificates/11764.pem (1338 bytes)
	I0210 12:02:16.349166   11096 ssh_runner.go:195] Run: openssl version
	I0210 12:02:16.358072   11096 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0210 12:02:16.366137   11096 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/117642.pem && ln -fs /usr/share/ca-certificates/117642.pem /etc/ssl/certs/117642.pem"
	I0210 12:02:16.394156   11096 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/117642.pem
	I0210 12:02:16.401255   11096 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Feb 10 10:40 /usr/share/ca-certificates/117642.pem
	I0210 12:02:16.401255   11096 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb 10 10:40 /usr/share/ca-certificates/117642.pem
	I0210 12:02:16.409299   11096 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/117642.pem
	I0210 12:02:16.418300   11096 command_runner.go:130] > 3ec20f2e
	I0210 12:02:16.426106   11096 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/117642.pem /etc/ssl/certs/3ec20f2e.0"
	I0210 12:02:16.454120   11096 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0210 12:02:16.481688   11096 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0210 12:02:16.488520   11096 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Feb 10 10:24 /usr/share/ca-certificates/minikubeCA.pem
	I0210 12:02:16.488520   11096 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb 10 10:24 /usr/share/ca-certificates/minikubeCA.pem
	I0210 12:02:16.495492   11096 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0210 12:02:16.505187   11096 command_runner.go:130] > b5213941
	I0210 12:02:16.513752   11096 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0210 12:02:16.541773   11096 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11764.pem && ln -fs /usr/share/ca-certificates/11764.pem /etc/ssl/certs/11764.pem"
	I0210 12:02:16.578562   11096 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11764.pem
	I0210 12:02:16.585653   11096 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Feb 10 10:40 /usr/share/ca-certificates/11764.pem
	I0210 12:02:16.585653   11096 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb 10 10:40 /usr/share/ca-certificates/11764.pem
	I0210 12:02:16.593769   11096 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11764.pem
	I0210 12:02:16.602094   11096 command_runner.go:130] > 51391683
	I0210 12:02:16.609870   11096 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11764.pem /etc/ssl/certs/51391683.0"
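
Each certificate above is installed twice: copied into `/usr/share/ca-certificates/` and then exposed as `/etc/ssl/certs/<subject-hash>.0`, which is how OpenSSL locates trust anchors. A sketch of the hash-and-symlink step, shelling out to `openssl x509 -hash` exactly as the log does:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCAByHash computes the OpenSSL subject hash of a CA certificate and
// creates the /etc/ssl/certs/<hash>.0 symlink that TLS lookups expect.
func linkCAByHash(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA above
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	os.Remove(link) // replace any stale link, like `ln -fs`
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCAByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println("error:", err)
	}
}
```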
	I0210 12:02:16.637740   11096 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0210 12:02:16.643663   11096 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0210 12:02:16.643957   11096 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0210 12:02:16.644197   11096 kubeadm.go:934] updating node {m02 172.29.143.51 8443 v1.32.1 docker false true} ...
	I0210 12:02:16.644388   11096 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-032400-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.29.143.51
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:multinode-032400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
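
The kubelet unit above is rendered per node, with the binary path versioned and `--hostname-override`/`--node-ip` substituted for the machine being joined. A cut-down rendering sketch using text/template (the field names here are illustrative, not minikube's types):

```go
package main

import (
	"os"
	"text/template"
)

// A cut-down version of the kubelet unit shown above; Version, Hostname and
// NodeIP are the per-node values substituted for each joined machine.
const unit = `[Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Hostname}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unit))
	if err := t.Execute(os.Stdout, map[string]string{
		"Version":  "v1.32.1",
		"Hostname": "multinode-032400-m02",
		"NodeIP":   "172.29.143.51",
	}); err != nil {
		panic(err)
	}
}
```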
	I0210 12:02:16.652192   11096 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0210 12:02:16.669468   11096 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/binaries/v1.32.1': No such file or directory
	I0210 12:02:16.669647   11096 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.32.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.32.1': No such file or directory
	
	Initiating transfer...
	I0210 12:02:16.677983   11096 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.32.1
	I0210 12:02:16.696249   11096 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubectl.sha256
	I0210 12:02:16.696249   11096 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubelet.sha256
	I0210 12:02:16.696249   11096 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubeadm.sha256
	I0210 12:02:16.696322   11096 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.32.1/kubectl -> /var/lib/minikube/binaries/v1.32.1/kubectl
	I0210 12:02:16.696322   11096 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.32.1/kubeadm -> /var/lib/minikube/binaries/v1.32.1/kubeadm
	I0210 12:02:16.706833   11096 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.1/kubeadm
	I0210 12:02:16.706833   11096 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.1/kubectl
	I0210 12:02:16.707821   11096 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0210 12:02:16.712832   11096 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.32.1/kubeadm': No such file or directory
	I0210 12:02:16.713815   11096 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.1/kubeadm': No such file or directory
	I0210 12:02:16.714028   11096 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.32.1/kubeadm --> /var/lib/minikube/binaries/v1.32.1/kubeadm (70942872 bytes)
	I0210 12:02:16.748337   11096 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.32.1/kubectl': No such file or directory
	I0210 12:02:16.748337   11096 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.1/kubectl': No such file or directory
	I0210 12:02:16.748337   11096 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.32.1/kubectl --> /var/lib/minikube/binaries/v1.32.1/kubectl (57323672 bytes)
	I0210 12:02:16.748337   11096 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.32.1/kubelet -> /var/lib/minikube/binaries/v1.32.1/kubelet
	I0210 12:02:16.757337   11096 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.1/kubelet
	I0210 12:02:16.811148   11096 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.32.1/kubelet': No such file or directory
	I0210 12:02:16.823763   11096 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.1/kubelet': No such file or directory
	I0210 12:02:16.823763   11096 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.32.1/kubelet --> /var/lib/minikube/binaries/v1.32.1/kubelet (77398276 bytes)
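
The three `Not caching binary` lines pair each binary URL with a `checksum=file:...sha256` companion, and the transfers above push the verified binaries into `/var/lib/minikube/binaries/v1.32.1/`. A sketch of download-plus-sha256 verification for one binary (it buffers the whole ~77MB body in memory, which a real implementation would stream):

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// fetchChecked downloads a release binary and verifies it against the
// published .sha256 file, the pairing the checksum= URLs above encode.
func fetchChecked(binURL, sumURL, dest string) error {
	sumResp, err := http.Get(sumURL)
	if err != nil {
		return err
	}
	defer sumResp.Body.Close()
	sumBytes, err := io.ReadAll(sumResp.Body)
	if err != nil {
		return err
	}
	want := strings.Fields(string(sumBytes))[0]

	resp, err := http.Get(binURL)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body) // sketch: buffers entire binary
	if err != nil {
		return err
	}
	got := sha256.Sum256(body)
	if hex.EncodeToString(got[:]) != want {
		return fmt.Errorf("checksum mismatch: got %x want %s", got, want)
	}
	return os.WriteFile(dest, body, 0755)
}

func main() {
	base := "https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubelet"
	if err := fetchChecked(base, base+".sha256", "kubelet"); err != nil {
		fmt.Println("error:", err)
	}
}
```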
	I0210 12:02:17.683147   11096 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0210 12:02:17.700724   11096 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (320 bytes)
	I0210 12:02:17.730962   11096 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0210 12:02:17.770968   11096 ssh_runner.go:195] Run: grep 172.29.136.201	control-plane.minikube.internal$ /etc/hosts
	I0210 12:02:17.777082   11096 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.29.136.201	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0210 12:02:17.808304   11096 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 12:02:17.991543   11096 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0210 12:02:18.020065   11096 host.go:66] Checking if "multinode-032400" exists ...
	I0210 12:02:18.020854   11096 start.go:317] joinCluster: &{Name:multinode-032400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:multinode-032400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.29.136.201 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.29.143.51 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 12:02:18.021127   11096 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0210 12:02:18.021152   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400 ).state
	I0210 12:02:20.028462   11096 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:02:20.029350   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:02:20.029444   11096 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400 ).networkadapters[0]).ipaddresses[0]
	I0210 12:02:22.402530   11096 main.go:141] libmachine: [stdout =====>] : 172.29.136.201
	
	I0210 12:02:22.402530   11096 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:02:22.403447   11096 sshutil.go:53] new ssh client: &{IP:172.29.136.201 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-032400\id_rsa Username:docker}
	I0210 12:02:22.954378   11096 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token q06g65.781z1lmeie29fovf --discovery-token-ca-cert-hash sha256:1b8cb99781302ec691f777094c36ac43bdecc74e7ca118c5fbf40794f47c93c3 
	I0210 12:02:22.957880   11096 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm token create --print-join-command --ttl=0": (4.9366742s)
	I0210 12:02:22.957989   11096 start.go:343] trying to join worker node "m02" to cluster: &{Name:m02 IP:172.29.143.51 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0210 12:02:22.958055   11096 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token q06g65.781z1lmeie29fovf --discovery-token-ca-cert-hash sha256:1b8cb99781302ec691f777094c36ac43bdecc74e7ca118c5fbf40794f47c93c3 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-032400-m02"
	I0210 12:02:23.142795   11096 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0210 12:02:24.488107   11096 command_runner.go:130] > [preflight] Running pre-flight checks
	I0210 12:02:24.488107   11096 command_runner.go:130] > [preflight] Reading configuration from the "kubeadm-config" ConfigMap in namespace "kube-system"...
	I0210 12:02:24.488107   11096 command_runner.go:130] > [preflight] Use 'kubeadm init phase upload-config --config your-config.yaml' to re-upload it.
	I0210 12:02:24.488107   11096 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0210 12:02:24.488107   11096 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0210 12:02:24.488107   11096 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0210 12:02:24.488107   11096 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0210 12:02:24.488107   11096 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 1.002132011s
	I0210 12:02:24.488107   11096 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap
	I0210 12:02:24.488107   11096 command_runner.go:130] > This node has joined the cluster:
	I0210 12:02:24.488107   11096 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0210 12:02:24.488107   11096 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0210 12:02:24.488107   11096 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0210 12:02:24.488107   11096 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token q06g65.781z1lmeie29fovf --discovery-token-ca-cert-hash sha256:1b8cb99781302ec691f777094c36ac43bdecc74e7ca118c5fbf40794f47c93c3 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-032400-m02": (1.5299932s)
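
The join is driven by the single line that `kubeadm token create --print-join-command` returned at 12:02:22, with worker-specific flags appended before execution. A sketch of that string assembly (token and hash redacted):

```go
package main

import (
	"fmt"
	"strings"
)

// buildJoinCmd takes the line printed by `kubeadm token create
// --print-join-command` and appends the worker-node flags seen in the log.
func buildJoinCmd(printJoinOutput, nodeName string) string {
	base := strings.TrimSpace(printJoinOutput)
	extra := []string{
		"--ignore-preflight-errors=all",
		"--cri-socket unix:///var/run/cri-dockerd.sock",
		"--node-name=" + nodeName,
	}
	return base + " " + strings.Join(extra, " ")
}

func main() {
	out := "kubeadm join control-plane.minikube.internal:8443 --token <redacted> --discovery-token-ca-cert-hash sha256:<redacted>"
	fmt.Println(buildJoinCmd(out, "multinode-032400-m02"))
}
```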
	I0210 12:02:24.488107   11096 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0210 12:02:24.710662   11096 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0210 12:02:24.905402   11096 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-032400-m02 minikube.k8s.io/updated_at=2025_02_10T12_02_24_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=a597502568cd649748018b4cfeb698a4b8b36160 minikube.k8s.io/name=multinode-032400 minikube.k8s.io/primary=false
	I0210 12:02:25.025840   11096 command_runner.go:130] > node/multinode-032400-m02 labeled
	I0210 12:02:25.028033   11096 start.go:319] duration metric: took 7.0070596s to joinCluster
	I0210 12:02:25.028187   11096 start.go:235] Will wait 6m0s for node &{Name:m02 IP:172.29.143.51 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0210 12:02:25.028236   11096 config.go:182] Loaded profile config "multinode-032400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0210 12:02:25.030604   11096 out.go:177] * Verifying Kubernetes components...
	I0210 12:02:25.042377   11096 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 12:02:25.261487   11096 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0210 12:02:25.297153   11096 loader.go:402] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0210 12:02:25.297153   11096 kapi.go:59] client config for multinode-032400: &rest.Config{Host:"https://172.29.136.201:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-032400\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-032400\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2a2d300), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0210 12:02:25.298348   11096 node_ready.go:35] waiting up to 6m0s for node "multinode-032400-m02" to be "Ready" ...
	I0210 12:02:25.298348   11096 type.go:168] "Request Body" body=""
	I0210 12:02:25.298348   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:02:25.298348   11096 round_trippers.go:476] Request Headers:
	I0210 12:02:25.298348   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:02:25.298348   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:02:25.312142   11096 round_trippers.go:581] Response Status: 200 OK in 13 milliseconds
	I0210 12:02:25.312187   11096 round_trippers.go:584] Response Headers:
	I0210 12:02:25.312187   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:02:25.312187   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:02:25.312222   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:02:25.312222   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:02:25.312222   11096 round_trippers.go:587]     Content-Length: 2719
	I0210 12:02:25.312222   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:02:25 GMT
	I0210 12:02:25.312222   11096 round_trippers.go:587]     Audit-Id: a5d854fe-e756-4485-8981-9aa824fc8f6b
	I0210 12:02:25.312355   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 88 15 0a 9c 0c 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 62 30 35 36  31 63 32 32 2d 64 62 66  |".*$b0561c22-dbf|
		00000040  32 2d 34 32 61 30 2d 62  64 66 33 2d 34 65 30 61  |2-42a0-bdf3-4e0a|
		00000050  62 37 61 39 61 66 30 65  32 03 36 31 31 38 00 42  |b7a9af0e2.6118.B|
		00000060  08 08 d0 d5 a7 bd 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 12405 chars]
	 >
	I0210 12:02:25.798453   11096 type.go:168] "Request Body" body=""
	I0210 12:02:25.799038   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:02:25.799038   11096 round_trippers.go:476] Request Headers:
	I0210 12:02:25.799038   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:02:25.799038   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:02:25.802796   11096 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:02:25.802796   11096 round_trippers.go:584] Response Headers:
	I0210 12:02:25.802796   11096 round_trippers.go:587]     Audit-Id: 0e18a87b-1cc6-44a9-9de9-a5a69d0e4946
	I0210 12:02:25.802796   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:02:25.802796   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:02:25.802933   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:02:25.802933   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:02:25.802933   11096 round_trippers.go:587]     Content-Length: 2719
	I0210 12:02:25.802933   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:02:25 GMT
	I0210 12:02:25.803040   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 88 15 0a 9c 0c 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 62 30 35 36  31 63 32 32 2d 64 62 66  |".*$b0561c22-dbf|
		00000040  32 2d 34 32 61 30 2d 62  64 66 33 2d 34 65 30 61  |2-42a0-bdf3-4e0a|
		00000050  62 37 61 39 61 66 30 65  32 03 36 31 31 38 00 42  |b7a9af0e2.6118.B|
		00000060  08 08 d0 d5 a7 bd 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 12405 chars]
	 >
	I0210 12:02:26.298827   11096 type.go:168] "Request Body" body=""
	I0210 12:02:26.299513   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:02:26.299513   11096 round_trippers.go:476] Request Headers:
	I0210 12:02:26.299513   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:02:26.299513   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:02:26.303503   11096 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:02:26.303503   11096 round_trippers.go:584] Response Headers:
	I0210 12:02:26.303503   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:02:26.303503   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:02:26.303503   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:02:26.303503   11096 round_trippers.go:587]     Content-Length: 2719
	I0210 12:02:26.303503   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:02:26 GMT
	I0210 12:02:26.303503   11096 round_trippers.go:587]     Audit-Id: 1cfbe258-a90f-463b-b06a-2ecaf7e61c0f
	I0210 12:02:26.303503   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:02:26.303503   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 88 15 0a 9c 0c 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 62 30 35 36  31 63 32 32 2d 64 62 66  |".*$b0561c22-dbf|
		00000040  32 2d 34 32 61 30 2d 62  64 66 33 2d 34 65 30 61  |2-42a0-bdf3-4e0a|
		00000050  62 37 61 39 61 66 30 65  32 03 36 31 31 38 00 42  |b7a9af0e2.6118.B|
		00000060  08 08 d0 d5 a7 bd 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 12405 chars]
	 >
	I0210 12:02:26.798807   11096 type.go:168] "Request Body" body=""
	I0210 12:02:26.798807   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:02:26.798807   11096 round_trippers.go:476] Request Headers:
	I0210 12:02:26.798807   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:02:26.798807   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:02:26.860882   11096 round_trippers.go:581] Response Status: 200 OK in 62 milliseconds
	I0210 12:02:26.860882   11096 round_trippers.go:584] Response Headers:
	I0210 12:02:26.860882   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:02:26.860882   11096 round_trippers.go:587]     Content-Length: 2719
	I0210 12:02:26.860882   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:02:26 GMT
	I0210 12:02:26.860882   11096 round_trippers.go:587]     Audit-Id: f3015818-e532-47fc-8477-a1aebd06e517
	I0210 12:02:26.860882   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:02:26.860882   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:02:26.860882   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:02:26.861259   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 88 15 0a 9c 0c 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 62 30 35 36  31 63 32 32 2d 64 62 66  |".*$b0561c22-dbf|
		00000040  32 2d 34 32 61 30 2d 62  64 66 33 2d 34 65 30 61  |2-42a0-bdf3-4e0a|
		00000050  62 37 61 39 61 66 30 65  32 03 36 31 31 38 00 42  |b7a9af0e2.6118.B|
		00000060  08 08 d0 d5 a7 bd 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 12405 chars]
	 >
	I0210 12:02:27.298709   11096 type.go:168] "Request Body" body=""
	I0210 12:02:27.298709   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:02:27.298709   11096 round_trippers.go:476] Request Headers:
	I0210 12:02:27.298709   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:02:27.298709   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:02:27.303219   11096 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:02:27.303219   11096 round_trippers.go:584] Response Headers:
	I0210 12:02:27.303219   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:02:27 GMT
	I0210 12:02:27.303219   11096 round_trippers.go:587]     Audit-Id: e14984bb-98f8-4dc2-b2aa-68011dd388a2
	I0210 12:02:27.303219   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:02:27.303219   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:02:27.303219   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:02:27.303341   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:02:27.303341   11096 round_trippers.go:587]     Content-Length: 2789
	I0210 12:02:27.303516   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 ce 15 0a aa 0c 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 62 30 35 36  31 63 32 32 2d 64 62 66  |".*$b0561c22-dbf|
		00000040  32 2d 34 32 61 30 2d 62  64 66 33 2d 34 65 30 61  |2-42a0-bdf3-4e0a|
		00000050  62 37 61 39 61 66 30 65  32 03 36 31 38 38 00 42  |b7a9af0e2.6188.B|
		00000060  08 08 d0 d5 a7 bd 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 12790 chars]
	 >
	I0210 12:02:27.303682   11096 node_ready.go:53] node "multinode-032400-m02" has status "Ready":"False"
	I0210 12:02:27.799186   11096 type.go:168] "Request Body" body=""
	I0210 12:02:27.799186   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:02:27.799186   11096 round_trippers.go:476] Request Headers:
	I0210 12:02:27.799186   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:02:27.799186   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:02:28.135848   11096 round_trippers.go:581] Response Status: 200 OK in 335 milliseconds
	I0210 12:02:28.135848   11096 round_trippers.go:584] Response Headers:
	I0210 12:02:28.135848   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:02:28 GMT
	I0210 12:02:28.135925   11096 round_trippers.go:587]     Audit-Id: 9bcd0a53-2a6a-44f3-9c18-4b6d6dfd6f89
	I0210 12:02:28.135925   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:02:28.135925   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:02:28.136027   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:02:28.136027   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:02:28.136027   11096 round_trippers.go:587]     Content-Length: 2789
	I0210 12:02:28.136176   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 ce 15 0a aa 0c 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 62 30 35 36  31 63 32 32 2d 64 62 66  |".*$b0561c22-dbf|
		00000040  32 2d 34 32 61 30 2d 62  64 66 33 2d 34 65 30 61  |2-42a0-bdf3-4e0a|
		00000050  62 37 61 39 61 66 30 65  32 03 36 31 38 38 00 42  |b7a9af0e2.6188.B|
		00000060  08 08 d0 d5 a7 bd 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 12790 chars]
	 >
	I0210 12:02:28.298762   11096 type.go:168] "Request Body" body=""
	I0210 12:02:28.298762   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:02:28.298762   11096 round_trippers.go:476] Request Headers:
	I0210 12:02:28.298762   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:02:28.298762   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:02:28.303126   11096 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:02:28.303126   11096 round_trippers.go:584] Response Headers:
	I0210 12:02:28.303126   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:02:28 GMT
	I0210 12:02:28.303126   11096 round_trippers.go:587]     Audit-Id: da6aec7b-71a0-4510-b9e2-5fe257b5b885
	I0210 12:02:28.303126   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:02:28.303126   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:02:28.303126   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:02:28.303126   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:02:28.303126   11096 round_trippers.go:587]     Content-Length: 2789
	I0210 12:02:28.303126   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 ce 15 0a aa 0c 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 62 30 35 36  31 63 32 32 2d 64 62 66  |".*$b0561c22-dbf|
		00000040  32 2d 34 32 61 30 2d 62  64 66 33 2d 34 65 30 61  |2-42a0-bdf3-4e0a|
		00000050  62 37 61 39 61 66 30 65  32 03 36 31 38 38 00 42  |b7a9af0e2.6188.B|
		00000060  08 08 d0 d5 a7 bd 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 12790 chars]
	 >
	I0210 12:02:28.798701   11096 type.go:168] "Request Body" body=""
	I0210 12:02:28.798701   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:02:28.798701   11096 round_trippers.go:476] Request Headers:
	I0210 12:02:28.798701   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:02:28.798701   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:02:28.805802   11096 round_trippers.go:581] Response Status: 200 OK in 7 milliseconds
	I0210 12:02:28.805919   11096 round_trippers.go:584] Response Headers:
	I0210 12:02:28.805919   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:02:28.805919   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:02:28.805919   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:02:28.805919   11096 round_trippers.go:587]     Content-Length: 2789
	I0210 12:02:28.805919   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:02:28 GMT
	I0210 12:02:28.805919   11096 round_trippers.go:587]     Audit-Id: 1b9cacdd-ad24-48b5-84e2-c4b925fb8c18
	I0210 12:02:28.805919   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:02:28.805919   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 ce 15 0a aa 0c 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 62 30 35 36  31 63 32 32 2d 64 62 66  |".*$b0561c22-dbf|
		00000040  32 2d 34 32 61 30 2d 62  64 66 33 2d 34 65 30 61  |2-42a0-bdf3-4e0a|
		00000050  62 37 61 39 61 66 30 65  32 03 36 31 38 38 00 42  |b7a9af0e2.6188.B|
		00000060  08 08 d0 d5 a7 bd 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 12790 chars]
	 >
	I0210 12:02:29.298982   11096 type.go:168] "Request Body" body=""
	I0210 12:02:29.299510   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:02:29.299510   11096 round_trippers.go:476] Request Headers:
	I0210 12:02:29.299510   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:02:29.299602   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:02:30.722446   11096 round_trippers.go:581] Response Status: 200 OK in 1422 milliseconds
	I0210 12:02:30.722446   11096 round_trippers.go:584] Response Headers:
	I0210 12:02:30.722446   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:02:30.722446   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:02:30.722446   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:02:30.722446   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:02:30.722446   11096 round_trippers.go:587]     Content-Length: 2789
	I0210 12:02:30.722446   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:02:30 GMT
	I0210 12:02:30.722446   11096 round_trippers.go:587]     Audit-Id: 3ee02181-afa2-495c-a99a-e9411801ad29
	I0210 12:02:30.722677   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 ce 15 0a aa 0c 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 62 30 35 36  31 63 32 32 2d 64 62 66  |".*$b0561c22-dbf|
		00000040  32 2d 34 32 61 30 2d 62  64 66 33 2d 34 65 30 61  |2-42a0-bdf3-4e0a|
		00000050  62 37 61 39 61 66 30 65  32 03 36 31 38 38 00 42  |b7a9af0e2.6188.B|
		00000060  08 08 d0 d5 a7 bd 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 12790 chars]
	 >
	I0210 12:02:30.722677   11096 node_ready.go:53] node "multinode-032400-m02" has status "Ready":"False"
	I0210 12:02:30.722921   11096 type.go:168] "Request Body" body=""
	I0210 12:02:30.722981   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:02:30.722981   11096 round_trippers.go:476] Request Headers:
	I0210 12:02:30.722981   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:02:30.722981   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:02:30.726110   11096 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:02:30.726110   11096 round_trippers.go:584] Response Headers:
	I0210 12:02:30.726179   11096 round_trippers.go:587]     Audit-Id: e94127d1-d064-4a06-bfa8-51bd6335c19a
	I0210 12:02:30.726179   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:02:30.726179   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:02:30.726179   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:02:30.726237   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:02:30.726310   11096 round_trippers.go:587]     Content-Length: 2789
	I0210 12:02:30.726310   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:02:30 GMT
	I0210 12:02:30.726562   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 ce 15 0a aa 0c 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 62 30 35 36  31 63 32 32 2d 64 62 66  |".*$b0561c22-dbf|
		00000040  32 2d 34 32 61 30 2d 62  64 66 33 2d 34 65 30 61  |2-42a0-bdf3-4e0a|
		00000050  62 37 61 39 61 66 30 65  32 03 36 31 38 38 00 42  |b7a9af0e2.6188.B|
		00000060  08 08 d0 d5 a7 bd 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 12790 chars]
	 >
	I0210 12:02:30.799026   11096 type.go:168] "Request Body" body=""
	I0210 12:02:30.799585   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:02:30.799675   11096 round_trippers.go:476] Request Headers:
	I0210 12:02:30.799693   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:02:30.799693   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:02:30.803627   11096 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:02:30.803627   11096 round_trippers.go:584] Response Headers:
	I0210 12:02:30.803627   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:02:30 GMT
	I0210 12:02:30.803627   11096 round_trippers.go:587]     Audit-Id: 89f693d9-f185-49aa-9cee-f2316ef55aa5
	I0210 12:02:30.803627   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:02:30.803627   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:02:30.803627   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:02:30.803627   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:02:30.803627   11096 round_trippers.go:587]     Content-Length: 2789
	I0210 12:02:30.803962   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 ce 15 0a aa 0c 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 62 30 35 36  31 63 32 32 2d 64 62 66  |".*$b0561c22-dbf|
		00000040  32 2d 34 32 61 30 2d 62  64 66 33 2d 34 65 30 61  |2-42a0-bdf3-4e0a|
		00000050  62 37 61 39 61 66 30 65  32 03 36 31 38 38 00 42  |b7a9af0e2.6188.B|
		00000060  08 08 d0 d5 a7 bd 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 12790 chars]
	 >
	I0210 12:02:31.299375   11096 type.go:168] "Request Body" body=""
	I0210 12:02:31.299375   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:02:31.299375   11096 round_trippers.go:476] Request Headers:
	I0210 12:02:31.299375   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:02:31.299375   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:02:31.303711   11096 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:02:31.303787   11096 round_trippers.go:584] Response Headers:
	I0210 12:02:31.303855   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:02:31.303855   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:02:31.303855   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:02:31.303855   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:02:31.303855   11096 round_trippers.go:587]     Content-Length: 2789
	I0210 12:02:31.303855   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:02:31 GMT
	I0210 12:02:31.303855   11096 round_trippers.go:587]     Audit-Id: 9bc90ed8-9312-46cd-b6f3-b8c33727c380
	I0210 12:02:31.303855   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 ce 15 0a aa 0c 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 62 30 35 36  31 63 32 32 2d 64 62 66  |".*$b0561c22-dbf|
		00000040  32 2d 34 32 61 30 2d 62  64 66 33 2d 34 65 30 61  |2-42a0-bdf3-4e0a|
		00000050  62 37 61 39 61 66 30 65  32 03 36 31 38 38 00 42  |b7a9af0e2.6188.B|
		00000060  08 08 d0 d5 a7 bd 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 12790 chars]
	 >
	I0210 12:02:31.799504   11096 type.go:168] "Request Body" body=""
	I0210 12:02:31.799504   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:02:31.799504   11096 round_trippers.go:476] Request Headers:
	I0210 12:02:31.799504   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:02:31.799504   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:02:31.809411   11096 round_trippers.go:581] Response Status: 200 OK in 9 milliseconds
	I0210 12:02:31.810162   11096 round_trippers.go:584] Response Headers:
	I0210 12:02:31.810162   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:02:31.810162   11096 round_trippers.go:587]     Content-Length: 2789
	I0210 12:02:31.810162   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:02:31 GMT
	I0210 12:02:31.810162   11096 round_trippers.go:587]     Audit-Id: b9991284-5c54-42a6-b080-664df2303dcd
	I0210 12:02:31.810162   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:02:31.810162   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:02:31.810162   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:02:31.810494   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 ce 15 0a aa 0c 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 62 30 35 36  31 63 32 32 2d 64 62 66  |".*$b0561c22-dbf|
		00000040  32 2d 34 32 61 30 2d 62  64 66 33 2d 34 65 30 61  |2-42a0-bdf3-4e0a|
		00000050  62 37 61 39 61 66 30 65  32 03 36 31 38 38 00 42  |b7a9af0e2.6188.B|
		00000060  08 08 d0 d5 a7 bd 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 12790 chars]
	 >
	I0210 12:02:32.298588   11096 type.go:168] "Request Body" body=""
	I0210 12:02:32.299019   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:02:32.299108   11096 round_trippers.go:476] Request Headers:
	I0210 12:02:32.299108   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:02:32.299108   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:02:32.304074   11096 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:02:32.304074   11096 round_trippers.go:584] Response Headers:
	I0210 12:02:32.304074   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:02:32.304074   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:02:32.304074   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:02:32.304154   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:02:32.304154   11096 round_trippers.go:587]     Content-Length: 2789
	I0210 12:02:32.304154   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:02:32 GMT
	I0210 12:02:32.304154   11096 round_trippers.go:587]     Audit-Id: 0887674a-d5cf-45ff-ad8a-de00ae1eb5e0
	I0210 12:02:32.304316   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 ce 15 0a aa 0c 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 62 30 35 36  31 63 32 32 2d 64 62 66  |".*$b0561c22-dbf|
		00000040  32 2d 34 32 61 30 2d 62  64 66 33 2d 34 65 30 61  |2-42a0-bdf3-4e0a|
		00000050  62 37 61 39 61 66 30 65  32 03 36 31 38 38 00 42  |b7a9af0e2.6188.B|
		00000060  08 08 d0 d5 a7 bd 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 12790 chars]
	 >
	I0210 12:02:32.799495   11096 type.go:168] "Request Body" body=""
	I0210 12:02:32.799495   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:02:32.799495   11096 round_trippers.go:476] Request Headers:
	I0210 12:02:32.799495   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:02:32.799495   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:02:32.803896   11096 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:02:32.803997   11096 round_trippers.go:584] Response Headers:
	I0210 12:02:32.803997   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:02:32 GMT
	I0210 12:02:32.803997   11096 round_trippers.go:587]     Audit-Id: 7fdfda09-9726-43a5-b79e-6d30c3c6a1f1
	I0210 12:02:32.803997   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:02:32.803997   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:02:32.803997   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:02:32.803997   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:02:32.803997   11096 round_trippers.go:587]     Content-Length: 2789
	I0210 12:02:32.804177   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 ce 15 0a aa 0c 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 62 30 35 36  31 63 32 32 2d 64 62 66  |".*$b0561c22-dbf|
		00000040  32 2d 34 32 61 30 2d 62  64 66 33 2d 34 65 30 61  |2-42a0-bdf3-4e0a|
		00000050  62 37 61 39 61 66 30 65  32 03 36 31 38 38 00 42  |b7a9af0e2.6188.B|
		00000060  08 08 d0 d5 a7 bd 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 12790 chars]
	 >
	I0210 12:02:32.804283   11096 node_ready.go:53] node "multinode-032400-m02" has status "Ready":"False"
	I0210 12:02:33.298814   11096 type.go:168] "Request Body" body=""
	I0210 12:02:33.298814   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:02:33.298814   11096 round_trippers.go:476] Request Headers:
	I0210 12:02:33.298814   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:02:33.298814   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:02:33.305337   11096 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 12:02:33.305377   11096 round_trippers.go:584] Response Headers:
	I0210 12:02:33.305377   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:02:33.305377   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:02:33.305377   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:02:33.305377   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:02:33.305377   11096 round_trippers.go:587]     Content-Length: 2789
	I0210 12:02:33.305377   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:02:33 GMT
	I0210 12:02:33.305454   11096 round_trippers.go:587]     Audit-Id: 2afad043-e248-4071-8324-f8200b3d5880
	I0210 12:02:33.305454   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 ce 15 0a aa 0c 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 62 30 35 36  31 63 32 32 2d 64 62 66  |".*$b0561c22-dbf|
		00000040  32 2d 34 32 61 30 2d 62  64 66 33 2d 34 65 30 61  |2-42a0-bdf3-4e0a|
		00000050  62 37 61 39 61 66 30 65  32 03 36 31 38 38 00 42  |b7a9af0e2.6188.B|
		00000060  08 08 d0 d5 a7 bd 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 12790 chars]
	 >
	I0210 12:02:33.798785   11096 type.go:168] "Request Body" body=""
	I0210 12:02:33.798785   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:02:33.798785   11096 round_trippers.go:476] Request Headers:
	I0210 12:02:33.798785   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:02:33.798785   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:02:33.803352   11096 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:02:33.803712   11096 round_trippers.go:584] Response Headers:
	I0210 12:02:33.803712   11096 round_trippers.go:587]     Content-Length: 2789
	I0210 12:02:33.803712   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:02:33 GMT
	I0210 12:02:33.803712   11096 round_trippers.go:587]     Audit-Id: 2edc8433-2d7b-4bad-be31-9e7f548ba809
	I0210 12:02:33.803712   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:02:33.803712   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:02:33.803712   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:02:33.803712   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:02:33.803852   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 ce 15 0a aa 0c 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 62 30 35 36  31 63 32 32 2d 64 62 66  |".*$b0561c22-dbf|
		00000040  32 2d 34 32 61 30 2d 62  64 66 33 2d 34 65 30 61  |2-42a0-bdf3-4e0a|
		00000050  62 37 61 39 61 66 30 65  32 03 36 31 38 38 00 42  |b7a9af0e2.6188.B|
		00000060  08 08 d0 d5 a7 bd 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 12790 chars]
	 >
	I0210 12:02:34.298668   11096 type.go:168] "Request Body" body=""
	I0210 12:02:34.298668   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:02:34.298668   11096 round_trippers.go:476] Request Headers:
	I0210 12:02:34.298668   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:02:34.298668   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:02:34.302939   11096 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:02:34.303318   11096 round_trippers.go:584] Response Headers:
	I0210 12:02:34.303318   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:02:34.303318   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:02:34.303318   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:02:34.303318   11096 round_trippers.go:587]     Content-Length: 2789
	I0210 12:02:34.303318   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:02:34 GMT
	I0210 12:02:34.303318   11096 round_trippers.go:587]     Audit-Id: 2cfc2478-5b63-4554-83a3-b0bf778052f3
	I0210 12:02:34.303318   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:02:34.303480   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 ce 15 0a aa 0c 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 62 30 35 36  31 63 32 32 2d 64 62 66  |".*$b0561c22-dbf|
		00000040  32 2d 34 32 61 30 2d 62  64 66 33 2d 34 65 30 61  |2-42a0-bdf3-4e0a|
		00000050  62 37 61 39 61 66 30 65  32 03 36 31 38 38 00 42  |b7a9af0e2.6188.B|
		00000060  08 08 d0 d5 a7 bd 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 12790 chars]
	 >
	I0210 12:02:34.799186   11096 type.go:168] "Request Body" body=""
	I0210 12:02:34.799186   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:02:34.799186   11096 round_trippers.go:476] Request Headers:
	I0210 12:02:34.799186   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:02:34.799186   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:02:34.803585   11096 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:02:34.803681   11096 round_trippers.go:584] Response Headers:
	I0210 12:02:34.803681   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:02:34.803681   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:02:34.803681   11096 round_trippers.go:587]     Content-Length: 3090
	I0210 12:02:34.803681   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:02:34 GMT
	I0210 12:02:34.803681   11096 round_trippers.go:587]     Audit-Id: aee8f02c-51d1-41d1-b431-7060d4c602fd
	I0210 12:02:34.803681   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:02:34.803681   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:02:34.803947   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fb 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 62 30 35 36  31 63 32 32 2d 64 62 66  |".*$b0561c22-dbf|
		00000040  32 2d 34 32 61 30 2d 62  64 66 33 2d 34 65 30 61  |2-42a0-bdf3-4e0a|
		00000050  62 37 61 39 61 66 30 65  32 03 36 32 37 38 00 42  |b7a9af0e2.6278.B|
		00000060  08 08 d0 d5 a7 bd 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14288 chars]
	 >
	I0210 12:02:35.299158   11096 type.go:168] "Request Body" body=""
	I0210 12:02:35.299311   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:02:35.299311   11096 round_trippers.go:476] Request Headers:
	I0210 12:02:35.299311   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:02:35.299311   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:02:35.303051   11096 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:02:35.303051   11096 round_trippers.go:584] Response Headers:
	I0210 12:02:35.303051   11096 round_trippers.go:587]     Audit-Id: 62d751a2-e9f2-4f2f-b016-264431b8fcd6
	I0210 12:02:35.303051   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:02:35.303051   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:02:35.303051   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:02:35.303051   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:02:35.303051   11096 round_trippers.go:587]     Content-Length: 3090
	I0210 12:02:35.303051   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:02:35 GMT
	I0210 12:02:35.303051   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fb 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 62 30 35 36  31 63 32 32 2d 64 62 66  |".*$b0561c22-dbf|
		00000040  32 2d 34 32 61 30 2d 62  64 66 33 2d 34 65 30 61  |2-42a0-bdf3-4e0a|
		00000050  62 37 61 39 61 66 30 65  32 03 36 32 37 38 00 42  |b7a9af0e2.6278.B|
		00000060  08 08 d0 d5 a7 bd 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14288 chars]
	 >
	I0210 12:02:35.303051   11096 node_ready.go:53] node "multinode-032400-m02" has status "Ready":"False"
	I0210 12:02:35.799378   11096 type.go:168] "Request Body" body=""
	I0210 12:02:35.799378   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:02:35.799378   11096 round_trippers.go:476] Request Headers:
	I0210 12:02:35.799378   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:02:35.799378   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:02:35.804436   11096 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:02:35.804436   11096 round_trippers.go:584] Response Headers:
	I0210 12:02:35.804506   11096 round_trippers.go:587]     Audit-Id: c1863fbc-0314-45a6-b3e0-82d55d02a3ef
	I0210 12:02:35.804506   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:02:35.804506   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:02:35.804506   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:02:35.804506   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:02:35.804549   11096 round_trippers.go:587]     Content-Length: 3090
	I0210 12:02:35.804549   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:02:35 GMT
	I0210 12:02:35.804672   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fb 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 62 30 35 36  31 63 32 32 2d 64 62 66  |".*$b0561c22-dbf|
		00000040  32 2d 34 32 61 30 2d 62  64 66 33 2d 34 65 30 61  |2-42a0-bdf3-4e0a|
		00000050  62 37 61 39 61 66 30 65  32 03 36 32 37 38 00 42  |b7a9af0e2.6278.B|
		00000060  08 08 d0 d5 a7 bd 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14288 chars]
	 >
	I0210 12:02:36.298784   11096 type.go:168] "Request Body" body=""
	I0210 12:02:36.299293   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:02:36.299357   11096 round_trippers.go:476] Request Headers:
	I0210 12:02:36.299357   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:02:36.299357   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:02:36.303143   11096 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:02:36.303224   11096 round_trippers.go:584] Response Headers:
	I0210 12:02:36.303224   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:02:36.303224   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:02:36.303224   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:02:36.303224   11096 round_trippers.go:587]     Content-Length: 3090
	I0210 12:02:36.303224   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:02:36 GMT
	I0210 12:02:36.303224   11096 round_trippers.go:587]     Audit-Id: f16881e7-71de-4612-a39c-349478b46dfb
	I0210 12:02:36.303224   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:02:36.303540   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fb 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 62 30 35 36  31 63 32 32 2d 64 62 66  |".*$b0561c22-dbf|
		00000040  32 2d 34 32 61 30 2d 62  64 66 33 2d 34 65 30 61  |2-42a0-bdf3-4e0a|
		00000050  62 37 61 39 61 66 30 65  32 03 36 32 37 38 00 42  |b7a9af0e2.6278.B|
		00000060  08 08 d0 d5 a7 bd 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14288 chars]
	 >
	I0210 12:02:36.798980   11096 type.go:168] "Request Body" body=""
	I0210 12:02:36.799176   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:02:36.799176   11096 round_trippers.go:476] Request Headers:
	I0210 12:02:36.799176   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:02:36.799286   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:02:36.957106   11096 round_trippers.go:581] Response Status: 200 OK in 157 milliseconds
	I0210 12:02:36.957106   11096 round_trippers.go:584] Response Headers:
	I0210 12:02:36.957106   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:02:36.957106   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:02:36.957106   11096 round_trippers.go:587]     Content-Length: 3090
	I0210 12:02:36.957106   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:02:36 GMT
	I0210 12:02:36.957106   11096 round_trippers.go:587]     Audit-Id: d38683fe-2aa6-43d6-82a6-ec711a2f8fce
	I0210 12:02:36.957106   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:02:36.957106   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:02:36.957398   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fb 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 62 30 35 36  31 63 32 32 2d 64 62 66  |".*$b0561c22-dbf|
		00000040  32 2d 34 32 61 30 2d 62  64 66 33 2d 34 65 30 61  |2-42a0-bdf3-4e0a|
		00000050  62 37 61 39 61 66 30 65  32 03 36 32 37 38 00 42  |b7a9af0e2.6278.B|
		00000060  08 08 d0 d5 a7 bd 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14288 chars]
	 >
	I0210 12:02:37.298641   11096 type.go:168] "Request Body" body=""
	I0210 12:02:37.298641   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:02:37.298641   11096 round_trippers.go:476] Request Headers:
	I0210 12:02:37.298641   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:02:37.298641   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:02:37.809751   11096 round_trippers.go:581] Response Status: 200 OK in 511 milliseconds
	I0210 12:02:37.809751   11096 round_trippers.go:584] Response Headers:
	I0210 12:02:37.809828   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:02:37.809828   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:02:37.809828   11096 round_trippers.go:587]     Content-Length: 3090
	I0210 12:02:37.809828   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:02:37 GMT
	I0210 12:02:37.809828   11096 round_trippers.go:587]     Audit-Id: d946a85b-5b94-494b-8af5-54043fb0f263
	I0210 12:02:37.809828   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:02:37.809828   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:02:37.810011   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fb 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 62 30 35 36  31 63 32 32 2d 64 62 66  |".*$b0561c22-dbf|
		00000040  32 2d 34 32 61 30 2d 62  64 66 33 2d 34 65 30 61  |2-42a0-bdf3-4e0a|
		00000050  62 37 61 39 61 66 30 65  32 03 36 32 37 38 00 42  |b7a9af0e2.6278.B|
		00000060  08 08 d0 d5 a7 bd 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14288 chars]
	 >
	I0210 12:02:37.810223   11096 node_ready.go:53] node "multinode-032400-m02" has status "Ready":"False"
	I0210 12:02:37.810223   11096 type.go:168] "Request Body" body=""
	I0210 12:02:37.810223   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:02:37.810223   11096 round_trippers.go:476] Request Headers:
	I0210 12:02:37.810223   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:02:37.810223   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:02:37.812878   11096 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0210 12:02:37.812878   11096 round_trippers.go:584] Response Headers:
	I0210 12:02:37.812878   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:02:37.812878   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:02:37.812878   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:02:37.812878   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:02:37.812878   11096 round_trippers.go:587]     Content-Length: 3090
	I0210 12:02:37.812878   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:02:37 GMT
	I0210 12:02:37.812878   11096 round_trippers.go:587]     Audit-Id: cf87b7fd-163e-4f54-9d06-b07ddb6e47cb
	I0210 12:02:37.813793   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fb 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 62 30 35 36  31 63 32 32 2d 64 62 66  |".*$b0561c22-dbf|
		00000040  32 2d 34 32 61 30 2d 62  64 66 33 2d 34 65 30 61  |2-42a0-bdf3-4e0a|
		00000050  62 37 61 39 61 66 30 65  32 03 36 32 37 38 00 42  |b7a9af0e2.6278.B|
		00000060  08 08 d0 d5 a7 bd 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14288 chars]
	 >
	I0210 12:02:38.298763   11096 type.go:168] "Request Body" body=""
	I0210 12:02:38.299209   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:02:38.299264   11096 round_trippers.go:476] Request Headers:
	I0210 12:02:38.299264   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:02:38.299264   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:02:38.306022   11096 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0210 12:02:38.306022   11096 round_trippers.go:584] Response Headers:
	I0210 12:02:38.306022   11096 round_trippers.go:587]     Audit-Id: feb76114-6124-45b5-9959-d6a393c0dc06
	I0210 12:02:38.306022   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:02:38.306022   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:02:38.306022   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:02:38.306022   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:02:38.306022   11096 round_trippers.go:587]     Content-Length: 3090
	I0210 12:02:38.306022   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:02:38 GMT
	I0210 12:02:38.306022   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fb 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 62 30 35 36  31 63 32 32 2d 64 62 66  |".*$b0561c22-dbf|
		00000040  32 2d 34 32 61 30 2d 62  64 66 33 2d 34 65 30 61  |2-42a0-bdf3-4e0a|
		00000050  62 37 61 39 61 66 30 65  32 03 36 32 37 38 00 42  |b7a9af0e2.6278.B|
		00000060  08 08 d0 d5 a7 bd 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14288 chars]
	 >
	I0210 12:02:38.799257   11096 type.go:168] "Request Body" body=""
	I0210 12:02:38.799257   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:02:38.799257   11096 round_trippers.go:476] Request Headers:
	I0210 12:02:38.799257   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:02:38.799257   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:02:39.551001   11096 round_trippers.go:581] Response Status: 200 OK in 751 milliseconds
	I0210 12:02:39.551068   11096 round_trippers.go:584] Response Headers:
	I0210 12:02:39.551068   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:02:39.551068   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:02:39.551068   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:02:39.551068   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:02:39.551133   11096 round_trippers.go:587]     Content-Length: 3090
	I0210 12:02:39.551133   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:02:39 GMT
	I0210 12:02:39.551133   11096 round_trippers.go:587]     Audit-Id: 52306298-3f39-4680-8294-25c4355b2e37
	I0210 12:02:39.551280   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fb 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 62 30 35 36  31 63 32 32 2d 64 62 66  |".*$b0561c22-dbf|
		00000040  32 2d 34 32 61 30 2d 62  64 66 33 2d 34 65 30 61  |2-42a0-bdf3-4e0a|
		00000050  62 37 61 39 61 66 30 65  32 03 36 32 37 38 00 42  |b7a9af0e2.6278.B|
		00000060  08 08 d0 d5 a7 bd 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14288 chars]
	 >
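
Editor's note on reading these traces: the request/response lines above come from client-go's debugging round tripper and body logger, which emit URLs, headers, status, and hex-dumped (truncated) bodies when klog verbosity is raised, as it is in these integration runs. The body is the Kubernetes protobuf wire envelope: a "k8s\x00" magic prefix followed by the serialized v1 Node, and the ASCII column already shows the node name (multinode-032400-m02), its UID, and labels such as beta.kubernetes.io/arch=amd64 and kubernetes.io/os=linux. As a minimal sketch, not minikube's actual code and with a placeholder kubeconfig path, the same object can be fetched with client-go, which negotiates the protobuf content type seen in the logged Accept header:

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Kubeconfig path is a placeholder, not taken from this report.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	// Prefer protobuf with a JSON fallback, matching the Accept header above.
    	cfg.AcceptContentTypes = "application/vnd.kubernetes.protobuf,application/json"

    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	node, err := client.CoreV1().Nodes().Get(context.TODO(), "multinode-032400-m02", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	// The same fields that are readable in the hex dump's ASCII column.
    	fmt.Println(node.Name, node.Labels["beta.kubernetes.io/arch"], node.Labels["kubernetes.io/os"])
    }
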
	I0210 12:02:39.551416   11096 type.go:168] "Request Body" body=""
	I0210 12:02:39.551487   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:02:39.551487   11096 round_trippers.go:476] Request Headers:
	I0210 12:02:39.551487   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:02:39.551487   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:02:39.557717   11096 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0210 12:02:39.557717   11096 round_trippers.go:584] Response Headers:
	I0210 12:02:39.557822   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:02:39 GMT
	I0210 12:02:39.557822   11096 round_trippers.go:587]     Audit-Id: dba96174-dc27-4d04-acb1-086da05494a3
	I0210 12:02:39.557822   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:02:39.557822   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:02:39.557822   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:02:39.557822   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:02:39.557822   11096 round_trippers.go:587]     Content-Length: 3090
	I0210 12:02:39.557822   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fb 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 62 30 35 36  31 63 32 32 2d 64 62 66  |".*$b0561c22-dbf|
		00000040  32 2d 34 32 61 30 2d 62  64 66 33 2d 34 65 30 61  |2-42a0-bdf3-4e0a|
		00000050  62 37 61 39 61 66 30 65  32 03 36 32 37 38 00 42  |b7a9af0e2.6278.B|
		00000060  08 08 d0 d5 a7 bd 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14288 chars]
	 >
	I0210 12:02:39.799589   11096 type.go:168] "Request Body" body=""
	I0210 12:02:39.799589   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:02:39.799589   11096 round_trippers.go:476] Request Headers:
	I0210 12:02:39.799589   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:02:39.799589   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:02:39.802583   11096 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0210 12:02:39.803213   11096 round_trippers.go:584] Response Headers:
	I0210 12:02:39.803213   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:02:39.803213   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:02:39.803213   11096 round_trippers.go:587]     Content-Length: 3090
	I0210 12:02:39.803213   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:02:39 GMT
	I0210 12:02:39.803213   11096 round_trippers.go:587]     Audit-Id: 151b069a-c54c-43ef-a123-9dba0679a4c2
	I0210 12:02:39.803213   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:02:39.803213   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:02:39.803586   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fb 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 62 30 35 36  31 63 32 32 2d 64 62 66  |".*$b0561c22-dbf|
		00000040  32 2d 34 32 61 30 2d 62  64 66 33 2d 34 65 30 61  |2-42a0-bdf3-4e0a|
		00000050  62 37 61 39 61 66 30 65  32 03 36 32 37 38 00 42  |b7a9af0e2.6278.B|
		00000060  08 08 d0 d5 a7 bd 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14288 chars]
	 >
	I0210 12:02:40.298597   11096 type.go:168] "Request Body" body=""
	I0210 12:02:40.298597   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:02:40.298597   11096 round_trippers.go:476] Request Headers:
	I0210 12:02:40.298597   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:02:40.298597   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:02:40.303308   11096 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:02:40.303308   11096 round_trippers.go:584] Response Headers:
	I0210 12:02:40.303308   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:02:40.303308   11096 round_trippers.go:587]     Content-Length: 3090
	I0210 12:02:40.303308   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:02:40 GMT
	I0210 12:02:40.303308   11096 round_trippers.go:587]     Audit-Id: 793b7010-3ca7-4949-94a8-c0a8d3ca59aa
	I0210 12:02:40.303308   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:02:40.303308   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:02:40.303308   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:02:40.303600   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fb 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 62 30 35 36  31 63 32 32 2d 64 62 66  |".*$b0561c22-dbf|
		00000040  32 2d 34 32 61 30 2d 62  64 66 33 2d 34 65 30 61  |2-42a0-bdf3-4e0a|
		00000050  62 37 61 39 61 66 30 65  32 03 36 32 37 38 00 42  |b7a9af0e2.6278.B|
		00000060  08 08 d0 d5 a7 bd 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14288 chars]
	 >
	I0210 12:02:40.303731   11096 node_ready.go:53] node "multinode-032400-m02" has status "Ready":"False"
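
Editor's note: this "Ready":"False" line marks one iteration of the node_ready wait loop. The Node object is re-fetched roughly every 500 ms (compare the timestamps above) until its Ready condition reports True, which is why the same GET and the same 3090-byte body repeat throughout the rest of this section. A minimal sketch of such a readiness poll, assuming client-go's wait helpers rather than minikube's actual implementation:

    package nodewait

    import (
    	"context"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    )

    // waitNodeReady re-fetches the Node on a fixed interval, mirroring the
    // ~500ms cadence visible in the log timestamps, until the Ready
    // condition is True or the timeout expires.
    func waitNodeReady(client kubernetes.Interface, name string, timeout time.Duration) error {
    	return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
    		node, err := client.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
    		if err != nil {
    			return false, nil // treat API errors as transient and keep polling
    		}
    		for _, cond := range node.Status.Conditions {
    			if cond.Type == corev1.NodeReady {
    				return cond.Status == corev1.ConditionTrue, nil
    			}
    		}
    		return false, nil
    	})
    }
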
	I0210 12:02:40.799362   11096 type.go:168] "Request Body" body=""
	I0210 12:02:40.799362   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:02:40.799362   11096 round_trippers.go:476] Request Headers:
	I0210 12:02:40.799362   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:02:40.799362   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:02:40.803804   11096 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:02:40.803804   11096 round_trippers.go:584] Response Headers:
	I0210 12:02:40.803804   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:02:40.803876   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:02:40.803876   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:02:40.803876   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:02:40.803876   11096 round_trippers.go:587]     Content-Length: 3090
	I0210 12:02:40.803876   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:02:40 GMT
	I0210 12:02:40.803876   11096 round_trippers.go:587]     Audit-Id: 08beb580-681b-416b-be2c-005c0d51d60c
	I0210 12:02:40.804186   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fb 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 62 30 35 36  31 63 32 32 2d 64 62 66  |".*$b0561c22-dbf|
		00000040  32 2d 34 32 61 30 2d 62  64 66 33 2d 34 65 30 61  |2-42a0-bdf3-4e0a|
		00000050  62 37 61 39 61 66 30 65  32 03 36 32 37 38 00 42  |b7a9af0e2.6278.B|
		00000060  08 08 d0 d5 a7 bd 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14288 chars]
	 >
	I0210 12:02:41.299648   11096 type.go:168] "Request Body" body=""
	I0210 12:02:41.299648   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:02:41.299648   11096 round_trippers.go:476] Request Headers:
	I0210 12:02:41.299648   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:02:41.299648   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:02:41.303777   11096 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:02:41.303853   11096 round_trippers.go:584] Response Headers:
	I0210 12:02:41.303853   11096 round_trippers.go:587]     Audit-Id: 1d64e07c-0e06-4b93-abf9-becdc52bfb7e
	I0210 12:02:41.303853   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:02:41.303853   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:02:41.303853   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:02:41.303853   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:02:41.303928   11096 round_trippers.go:587]     Content-Length: 3090
	I0210 12:02:41.303961   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:02:41 GMT
	I0210 12:02:41.303961   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fb 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 62 30 35 36  31 63 32 32 2d 64 62 66  |".*$b0561c22-dbf|
		00000040  32 2d 34 32 61 30 2d 62  64 66 33 2d 34 65 30 61  |2-42a0-bdf3-4e0a|
		00000050  62 37 61 39 61 66 30 65  32 03 36 32 37 38 00 42  |b7a9af0e2.6278.B|
		00000060  08 08 d0 d5 a7 bd 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14288 chars]
	 >
	I0210 12:02:41.799436   11096 type.go:168] "Request Body" body=""
	I0210 12:02:41.799436   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:02:41.799436   11096 round_trippers.go:476] Request Headers:
	I0210 12:02:41.799436   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:02:41.799436   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:02:41.803615   11096 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:02:41.803615   11096 round_trippers.go:584] Response Headers:
	I0210 12:02:41.803615   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:02:41.803615   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:02:41.803615   11096 round_trippers.go:587]     Content-Length: 3090
	I0210 12:02:41.803615   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:02:41 GMT
	I0210 12:02:41.803615   11096 round_trippers.go:587]     Audit-Id: 424faebd-932e-4168-a0a8-4ebca2c15085
	I0210 12:02:41.803615   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:02:41.803615   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:02:41.803615   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fb 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 62 30 35 36  31 63 32 32 2d 64 62 66  |".*$b0561c22-dbf|
		00000040  32 2d 34 32 61 30 2d 62  64 66 33 2d 34 65 30 61  |2-42a0-bdf3-4e0a|
		00000050  62 37 61 39 61 66 30 65  32 03 36 32 37 38 00 42  |b7a9af0e2.6278.B|
		00000060  08 08 d0 d5 a7 bd 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14288 chars]
	 >
	I0210 12:02:42.299633   11096 type.go:168] "Request Body" body=""
	I0210 12:02:42.300050   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:02:42.300050   11096 round_trippers.go:476] Request Headers:
	I0210 12:02:42.300135   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:02:42.300135   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:02:42.357765   11096 round_trippers.go:581] Response Status: 200 OK in 57 milliseconds
	I0210 12:02:42.357765   11096 round_trippers.go:584] Response Headers:
	I0210 12:02:42.357765   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:02:42.357765   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:02:42.357765   11096 round_trippers.go:587]     Content-Length: 3090
	I0210 12:02:42.357765   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:02:42 GMT
	I0210 12:02:42.357765   11096 round_trippers.go:587]     Audit-Id: 6b4ac4e1-5daf-49a7-930d-00b45519a9a7
	I0210 12:02:42.357765   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:02:42.357765   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:02:42.357765   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fb 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 62 30 35 36  31 63 32 32 2d 64 62 66  |".*$b0561c22-dbf|
		00000040  32 2d 34 32 61 30 2d 62  64 66 33 2d 34 65 30 61  |2-42a0-bdf3-4e0a|
		00000050  62 37 61 39 61 66 30 65  32 03 36 32 37 38 00 42  |b7a9af0e2.6278.B|
		00000060  08 08 d0 d5 a7 bd 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14288 chars]
	 >
	I0210 12:02:42.357765   11096 node_ready.go:53] node "multinode-032400-m02" has status "Ready":"False"
	I0210 12:02:42.798881   11096 type.go:168] "Request Body" body=""
	I0210 12:02:42.798881   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:02:42.798881   11096 round_trippers.go:476] Request Headers:
	I0210 12:02:42.798881   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:02:42.798881   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:02:42.803741   11096 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:02:42.803815   11096 round_trippers.go:584] Response Headers:
	I0210 12:02:42.803815   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:02:42 GMT
	I0210 12:02:42.803815   11096 round_trippers.go:587]     Audit-Id: 55b5841f-ded9-4bea-af70-e4aed383a7d3
	I0210 12:02:42.803815   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:02:42.803893   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:02:42.803893   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:02:42.803893   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:02:42.803933   11096 round_trippers.go:587]     Content-Length: 3090
	I0210 12:02:42.803994   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fb 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 62 30 35 36  31 63 32 32 2d 64 62 66  |".*$b0561c22-dbf|
		00000040  32 2d 34 32 61 30 2d 62  64 66 33 2d 34 65 30 61  |2-42a0-bdf3-4e0a|
		00000050  62 37 61 39 61 66 30 65  32 03 36 32 37 38 00 42  |b7a9af0e2.6278.B|
		00000060  08 08 d0 d5 a7 bd 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14288 chars]
	 >
	I0210 12:02:43.299416   11096 type.go:168] "Request Body" body=""
	I0210 12:02:43.299416   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:02:43.299416   11096 round_trippers.go:476] Request Headers:
	I0210 12:02:43.299416   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:02:43.299416   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:02:43.304080   11096 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:02:43.304154   11096 round_trippers.go:584] Response Headers:
	I0210 12:02:43.304154   11096 round_trippers.go:587]     Content-Length: 3090
	I0210 12:02:43.304154   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:02:43 GMT
	I0210 12:02:43.304154   11096 round_trippers.go:587]     Audit-Id: 456be8a7-8bbc-46ed-b683-63f03f087ccd
	I0210 12:02:43.304154   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:02:43.304154   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:02:43.304154   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:02:43.304154   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:02:43.304333   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fb 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 62 30 35 36  31 63 32 32 2d 64 62 66  |".*$b0561c22-dbf|
		00000040  32 2d 34 32 61 30 2d 62  64 66 33 2d 34 65 30 61  |2-42a0-bdf3-4e0a|
		00000050  62 37 61 39 61 66 30 65  32 03 36 32 37 38 00 42  |b7a9af0e2.6278.B|
		00000060  08 08 d0 d5 a7 bd 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14288 chars]
	 >
	I0210 12:02:43.798966   11096 type.go:168] "Request Body" body=""
	I0210 12:02:43.798966   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:02:43.798966   11096 round_trippers.go:476] Request Headers:
	I0210 12:02:43.798966   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:02:43.798966   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:02:43.802976   11096 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:02:43.802976   11096 round_trippers.go:584] Response Headers:
	I0210 12:02:43.802976   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:02:43 GMT
	I0210 12:02:43.802976   11096 round_trippers.go:587]     Audit-Id: ec54c880-cb9f-4566-85a4-de8212c4c3af
	I0210 12:02:43.802976   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:02:43.802976   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:02:43.802976   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:02:43.802976   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:02:43.802976   11096 round_trippers.go:587]     Content-Length: 3090
	I0210 12:02:43.802976   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fb 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 62 30 35 36  31 63 32 32 2d 64 62 66  |".*$b0561c22-dbf|
		00000040  32 2d 34 32 61 30 2d 62  64 66 33 2d 34 65 30 61  |2-42a0-bdf3-4e0a|
		00000050  62 37 61 39 61 66 30 65  32 03 36 32 37 38 00 42  |b7a9af0e2.6278.B|
		00000060  08 08 d0 d5 a7 bd 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14288 chars]
	 >
	I0210 12:02:44.299668   11096 type.go:168] "Request Body" body=""
	I0210 12:02:44.299668   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:02:44.299668   11096 round_trippers.go:476] Request Headers:
	I0210 12:02:44.299668   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:02:44.299668   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:02:44.798873   11096 round_trippers.go:581] Response Status: 200 OK in 499 milliseconds
	I0210 12:02:44.798873   11096 round_trippers.go:584] Response Headers:
	I0210 12:02:44.799011   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:02:44.799011   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:02:44.799011   11096 round_trippers.go:587]     Content-Length: 3090
	I0210 12:02:44.799011   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:02:44 GMT
	I0210 12:02:44.799011   11096 round_trippers.go:587]     Audit-Id: 9e8fc11c-e808-4a6c-8331-e8559e1857ef
	I0210 12:02:44.799011   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:02:44.799011   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:02:44.799198   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fb 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 62 30 35 36  31 63 32 32 2d 64 62 66  |".*$b0561c22-dbf|
		00000040  32 2d 34 32 61 30 2d 62  64 66 33 2d 34 65 30 61  |2-42a0-bdf3-4e0a|
		00000050  62 37 61 39 61 66 30 65  32 03 36 32 37 38 00 42  |b7a9af0e2.6278.B|
		00000060  08 08 d0 d5 a7 bd 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14288 chars]
	 >
	I0210 12:02:44.799439   11096 node_ready.go:53] node "multinode-032400-m02" has status "Ready":"False"
	I0210 12:02:44.799439   11096 type.go:168] "Request Body" body=""
	I0210 12:02:44.799585   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:02:44.799585   11096 round_trippers.go:476] Request Headers:
	I0210 12:02:44.799585   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:02:44.799585   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:02:44.804318   11096 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:02:44.804318   11096 round_trippers.go:584] Response Headers:
	I0210 12:02:44.804318   11096 round_trippers.go:587]     Audit-Id: ca0b4c25-252a-4ab5-acf0-b1fbaac0ae7b
	I0210 12:02:44.804318   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:02:44.804318   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:02:44.804318   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:02:44.804318   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:02:44.804318   11096 round_trippers.go:587]     Content-Length: 3090
	I0210 12:02:44.804318   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:02:44 GMT
	I0210 12:02:44.804318   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fb 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 62 30 35 36  31 63 32 32 2d 64 62 66  |".*$b0561c22-dbf|
		00000040  32 2d 34 32 61 30 2d 62  64 66 33 2d 34 65 30 61  |2-42a0-bdf3-4e0a|
		00000050  62 37 61 39 61 66 30 65  32 03 36 32 37 38 00 42  |b7a9af0e2.6278.B|
		00000060  08 08 d0 d5 a7 bd 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14288 chars]
	 >
	I0210 12:02:45.299994   11096 type.go:168] "Request Body" body=""
	I0210 12:02:45.299994   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:02:45.299994   11096 round_trippers.go:476] Request Headers:
	I0210 12:02:45.299994   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:02:45.299994   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:02:45.309471   11096 round_trippers.go:581] Response Status: 200 OK in 9 milliseconds
	I0210 12:02:45.309546   11096 round_trippers.go:584] Response Headers:
	I0210 12:02:45.309546   11096 round_trippers.go:587]     Audit-Id: 41d6448a-24a3-4c2f-8ab2-bc924515d466
	I0210 12:02:45.309546   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:02:45.309546   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:02:45.309546   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:02:45.309546   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:02:45.309546   11096 round_trippers.go:587]     Content-Length: 3090
	I0210 12:02:45.309546   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:02:45 GMT
	I0210 12:02:45.309834   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fb 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 62 30 35 36  31 63 32 32 2d 64 62 66  |".*$b0561c22-dbf|
		00000040  32 2d 34 32 61 30 2d 62  64 66 33 2d 34 65 30 61  |2-42a0-bdf3-4e0a|
		00000050  62 37 61 39 61 66 30 65  32 03 36 32 37 38 00 42  |b7a9af0e2.6278.B|
		00000060  08 08 d0 d5 a7 bd 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14288 chars]
	 >
	I0210 12:02:45.799470   11096 type.go:168] "Request Body" body=""
	I0210 12:02:45.799544   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:02:45.799544   11096 round_trippers.go:476] Request Headers:
	I0210 12:02:45.799544   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:02:45.799657   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:02:46.011462   11096 round_trippers.go:581] Response Status: 200 OK in 211 milliseconds
	I0210 12:02:46.011573   11096 round_trippers.go:584] Response Headers:
	I0210 12:02:46.011573   11096 round_trippers.go:587]     Audit-Id: 9526efaa-fff5-4ee7-9737-5b2e72b12970
	I0210 12:02:46.011573   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:02:46.011573   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:02:46.011679   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:02:46.011679   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:02:46.011863   11096 round_trippers.go:587]     Content-Length: 3090
	I0210 12:02:46.011863   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:02:46 GMT
	I0210 12:02:46.012146   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fb 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 62 30 35 36  31 63 32 32 2d 64 62 66  |".*$b0561c22-dbf|
		00000040  32 2d 34 32 61 30 2d 62  64 66 33 2d 34 65 30 61  |2-42a0-bdf3-4e0a|
		00000050  62 37 61 39 61 66 30 65  32 03 36 32 37 38 00 42  |b7a9af0e2.6278.B|
		00000060  08 08 d0 d5 a7 bd 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14288 chars]
	 >
	I0210 12:02:46.298709   11096 type.go:168] "Request Body" body=""
	I0210 12:02:46.298709   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:02:46.298709   11096 round_trippers.go:476] Request Headers:
	I0210 12:02:46.298709   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:02:46.298709   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:02:46.302804   11096 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:02:46.302804   11096 round_trippers.go:584] Response Headers:
	I0210 12:02:46.302804   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:02:46.302804   11096 round_trippers.go:587]     Content-Length: 3090
	I0210 12:02:46.302804   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:02:46 GMT
	I0210 12:02:46.302804   11096 round_trippers.go:587]     Audit-Id: 0ee66deb-8095-4b41-854b-91628ae39444
	I0210 12:02:46.302804   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:02:46.302804   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:02:46.302804   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:02:46.302804   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fb 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 62 30 35 36  31 63 32 32 2d 64 62 66  |".*$b0561c22-dbf|
		00000040  32 2d 34 32 61 30 2d 62  64 66 33 2d 34 65 30 61  |2-42a0-bdf3-4e0a|
		00000050  62 37 61 39 61 66 30 65  32 03 36 32 37 38 00 42  |b7a9af0e2.6278.B|
		00000060  08 08 d0 d5 a7 bd 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14288 chars]
	 >
	I0210 12:02:46.799609   11096 type.go:168] "Request Body" body=""
	I0210 12:02:46.799609   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:02:46.799609   11096 round_trippers.go:476] Request Headers:
	I0210 12:02:46.799609   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:02:46.799609   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:02:46.855055   11096 round_trippers.go:581] Response Status: 200 OK in 55 milliseconds
	I0210 12:02:46.855170   11096 round_trippers.go:584] Response Headers:
	I0210 12:02:46.855170   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:02:46.855170   11096 round_trippers.go:587]     Content-Length: 3090
	I0210 12:02:46.855170   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:02:46 GMT
	I0210 12:02:46.855170   11096 round_trippers.go:587]     Audit-Id: 4817f6a8-d7c5-481d-bbe8-956e8517b55e
	I0210 12:02:46.855170   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:02:46.855170   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:02:46.855170   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:02:46.855441   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fb 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 62 30 35 36  31 63 32 32 2d 64 62 66  |".*$b0561c22-dbf|
		00000040  32 2d 34 32 61 30 2d 62  64 66 33 2d 34 65 30 61  |2-42a0-bdf3-4e0a|
		00000050  62 37 61 39 61 66 30 65  32 03 36 32 37 38 00 42  |b7a9af0e2.6278.B|
		00000060  08 08 d0 d5 a7 bd 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14288 chars]
	 >
	I0210 12:02:46.855689   11096 node_ready.go:53] node "multinode-032400-m02" has status "Ready":"False"
	I0210 12:02:47.300984   11096 type.go:168] "Request Body" body=""
	I0210 12:02:47.301057   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:02:47.301057   11096 round_trippers.go:476] Request Headers:
	I0210 12:02:47.301057   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:02:47.301057   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:02:47.305226   11096 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:02:47.305226   11096 round_trippers.go:584] Response Headers:
	I0210 12:02:47.305226   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:02:47.305226   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:02:47.305226   11096 round_trippers.go:587]     Content-Length: 3090
	I0210 12:02:47.305226   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:02:47 GMT
	I0210 12:02:47.305226   11096 round_trippers.go:587]     Audit-Id: add33af2-7b72-4493-bacc-29fa4cc075ac
	I0210 12:02:47.305226   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:02:47.305226   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:02:47.305226   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fb 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 62 30 35 36  31 63 32 32 2d 64 62 66  |".*$b0561c22-dbf|
		00000040  32 2d 34 32 61 30 2d 62  64 66 33 2d 34 65 30 61  |2-42a0-bdf3-4e0a|
		00000050  62 37 61 39 61 66 30 65  32 03 36 32 37 38 00 42  |b7a9af0e2.6278.B|
		00000060  08 08 d0 d5 a7 bd 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14288 chars]
	 >
	I0210 12:02:47.799095   11096 type.go:168] "Request Body" body=""
	I0210 12:02:47.799095   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:02:47.799095   11096 round_trippers.go:476] Request Headers:
	I0210 12:02:47.799095   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:02:47.799095   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:02:47.803444   11096 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:02:47.803517   11096 round_trippers.go:584] Response Headers:
	I0210 12:02:47.803517   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:02:47.803517   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:02:47.803588   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:02:47.803588   11096 round_trippers.go:587]     Content-Length: 3090
	I0210 12:02:47.803588   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:02:47 GMT
	I0210 12:02:47.803588   11096 round_trippers.go:587]     Audit-Id: b75fb171-1bf8-40d4-8f75-1e5b57a7842e
	I0210 12:02:47.803588   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:02:47.803651   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fb 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 62 30 35 36  31 63 32 32 2d 64 62 66  |".*$b0561c22-dbf|
		00000040  32 2d 34 32 61 30 2d 62  64 66 33 2d 34 65 30 61  |2-42a0-bdf3-4e0a|
		00000050  62 37 61 39 61 66 30 65  32 03 36 32 37 38 00 42  |b7a9af0e2.6278.B|
		00000060  08 08 d0 d5 a7 bd 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14288 chars]
	 >
	I0210 12:02:48.298893   11096 type.go:168] "Request Body" body=""
	I0210 12:02:48.298893   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:02:48.298893   11096 round_trippers.go:476] Request Headers:
	I0210 12:02:48.298893   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:02:48.298893   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:02:49.099744   11096 round_trippers.go:581] Response Status: 200 OK in 800 milliseconds
	I0210 12:02:49.099744   11096 round_trippers.go:584] Response Headers:
	I0210 12:02:49.099744   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:02:49.099845   11096 round_trippers.go:587]     Content-Length: 3090
	I0210 12:02:49.099845   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:02:49 GMT
	I0210 12:02:49.099845   11096 round_trippers.go:587]     Audit-Id: e4693672-1be9-4c8f-acbc-7b65b71d1cea
	I0210 12:02:49.099845   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:02:49.099874   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:02:49.099874   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:02:49.099905   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fb 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 62 30 35 36  31 63 32 32 2d 64 62 66  |".*$b0561c22-dbf|
		00000040  32 2d 34 32 61 30 2d 62  64 66 33 2d 34 65 30 61  |2-42a0-bdf3-4e0a|
		00000050  62 37 61 39 61 66 30 65  32 03 36 32 37 38 00 42  |b7a9af0e2.6278.B|
		00000060  08 08 d0 d5 a7 bd 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14288 chars]
	 >
	I0210 12:02:49.099905   11096 node_ready.go:53] node "multinode-032400-m02" has status "Ready":"False"
	I0210 12:02:49.099905   11096 type.go:168] "Request Body" body=""
	I0210 12:02:49.099905   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:02:49.099905   11096 round_trippers.go:476] Request Headers:
	I0210 12:02:49.099905   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:02:49.099905   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:02:49.103085   11096 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:02:49.103085   11096 round_trippers.go:584] Response Headers:
	I0210 12:02:49.103085   11096 round_trippers.go:587]     Audit-Id: 7ea0ee0d-0bc0-4a4e-b56f-4aa9fb2545d0
	I0210 12:02:49.103085   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:02:49.103085   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:02:49.103085   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:02:49.103085   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:02:49.103085   11096 round_trippers.go:587]     Content-Length: 3090
	I0210 12:02:49.103085   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:02:49 GMT
	I0210 12:02:49.103085   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fb 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 62 30 35 36  31 63 32 32 2d 64 62 66  |".*$b0561c22-dbf|
		00000040  32 2d 34 32 61 30 2d 62  64 66 33 2d 34 65 30 61  |2-42a0-bdf3-4e0a|
		00000050  62 37 61 39 61 66 30 65  32 03 36 32 37 38 00 42  |b7a9af0e2.6278.B|
		00000060  08 08 d0 d5 a7 bd 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14288 chars]
	 >
	I0210 12:02:49.298872   11096 type.go:168] "Request Body" body=""
	I0210 12:02:49.298872   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:02:49.298872   11096 round_trippers.go:476] Request Headers:
	I0210 12:02:49.298872   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:02:49.298872   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:02:49.303149   11096 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:02:49.303149   11096 round_trippers.go:584] Response Headers:
	I0210 12:02:49.303149   11096 round_trippers.go:587]     Audit-Id: 1429d86a-bc47-4710-be79-91529f82a8a5
	I0210 12:02:49.303421   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:02:49.303421   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:02:49.303421   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:02:49.303421   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:02:49.303421   11096 round_trippers.go:587]     Content-Length: 3090
	I0210 12:02:49.303421   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:02:49 GMT
	I0210 12:02:49.303524   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fb 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 62 30 35 36  31 63 32 32 2d 64 62 66  |".*$b0561c22-dbf|
		00000040  32 2d 34 32 61 30 2d 62  64 66 33 2d 34 65 30 61  |2-42a0-bdf3-4e0a|
		00000050  62 37 61 39 61 66 30 65  32 03 36 32 37 38 00 42  |b7a9af0e2.6278.B|
		00000060  08 08 d0 d5 a7 bd 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14288 chars]
	 >
	I0210 12:02:49.799584   11096 type.go:168] "Request Body" body=""
	I0210 12:02:49.800213   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:02:49.800213   11096 round_trippers.go:476] Request Headers:
	I0210 12:02:49.800261   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:02:49.800261   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:02:49.804984   11096 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:02:49.804984   11096 round_trippers.go:584] Response Headers:
	I0210 12:02:49.804984   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:02:49.804984   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:02:49.804984   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:02:49.804984   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:02:49.804984   11096 round_trippers.go:587]     Content-Length: 3090
	I0210 12:02:49.804984   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:02:49 GMT
	I0210 12:02:49.804984   11096 round_trippers.go:587]     Audit-Id: 2f796626-d0bc-4db0-98e9-c6f9de3fa97e
	I0210 12:02:49.805373   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fb 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 62 30 35 36  31 63 32 32 2d 64 62 66  |".*$b0561c22-dbf|
		00000040  32 2d 34 32 61 30 2d 62  64 66 33 2d 34 65 30 61  |2-42a0-bdf3-4e0a|
		00000050  62 37 61 39 61 66 30 65  32 03 36 32 37 38 00 42  |b7a9af0e2.6278.B|
		00000060  08 08 d0 d5 a7 bd 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14288 chars]
	 >
	I0210 12:02:50.299110   11096 type.go:168] "Request Body" body=""
	I0210 12:02:50.299110   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:02:50.299110   11096 round_trippers.go:476] Request Headers:
	I0210 12:02:50.299110   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:02:50.299110   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:02:50.304081   11096 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:02:50.304081   11096 round_trippers.go:584] Response Headers:
	I0210 12:02:50.304081   11096 round_trippers.go:587]     Audit-Id: f6243ed2-1136-41fa-a2e6-089313254d18
	I0210 12:02:50.304203   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:02:50.304203   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:02:50.304203   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:02:50.304203   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:02:50.304203   11096 round_trippers.go:587]     Content-Length: 3090
	I0210 12:02:50.304203   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:02:50 GMT
	I0210 12:02:50.304466   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fb 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 62 30 35 36  31 63 32 32 2d 64 62 66  |".*$b0561c22-dbf|
		00000040  32 2d 34 32 61 30 2d 62  64 66 33 2d 34 65 30 61  |2-42a0-bdf3-4e0a|
		00000050  62 37 61 39 61 66 30 65  32 03 36 32 37 38 00 42  |b7a9af0e2.6278.B|
		00000060  08 08 d0 d5 a7 bd 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14288 chars]
	 >
	I0210 12:02:50.799007   11096 type.go:168] "Request Body" body=""
	I0210 12:02:50.799007   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:02:50.799007   11096 round_trippers.go:476] Request Headers:
	I0210 12:02:50.799007   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:02:50.799007   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:02:50.803829   11096 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:02:50.803829   11096 round_trippers.go:584] Response Headers:
	I0210 12:02:50.803829   11096 round_trippers.go:587]     Content-Length: 3090
	I0210 12:02:50.803829   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:02:50 GMT
	I0210 12:02:50.803829   11096 round_trippers.go:587]     Audit-Id: 6e929bb6-5d96-4725-888a-09bcf7dc0fcc
	I0210 12:02:50.803829   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:02:50.803829   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:02:50.803829   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:02:50.803829   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:02:50.803829   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fb 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 62 30 35 36  31 63 32 32 2d 64 62 66  |".*$b0561c22-dbf|
		00000040  32 2d 34 32 61 30 2d 62  64 66 33 2d 34 65 30 61  |2-42a0-bdf3-4e0a|
		00000050  62 37 61 39 61 66 30 65  32 03 36 32 37 38 00 42  |b7a9af0e2.6278.B|
		00000060  08 08 d0 d5 a7 bd 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14288 chars]
	 >
	I0210 12:02:51.299785   11096 type.go:168] "Request Body" body=""
	I0210 12:02:51.300369   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:02:51.300369   11096 round_trippers.go:476] Request Headers:
	I0210 12:02:51.300369   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:02:51.300442   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:02:51.305273   11096 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:02:51.305273   11096 round_trippers.go:584] Response Headers:
	I0210 12:02:51.305273   11096 round_trippers.go:587]     Audit-Id: 2d92f222-9fd8-4b2b-bf36-b4e3db3bc38e
	I0210 12:02:51.305273   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:02:51.305273   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:02:51.305273   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:02:51.305273   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:02:51.305273   11096 round_trippers.go:587]     Content-Length: 3090
	I0210 12:02:51.305273   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:02:51 GMT
	I0210 12:02:51.305273   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fb 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 62 30 35 36  31 63 32 32 2d 64 62 66  |".*$b0561c22-dbf|
		00000040  32 2d 34 32 61 30 2d 62  64 66 33 2d 34 65 30 61  |2-42a0-bdf3-4e0a|
		00000050  62 37 61 39 61 66 30 65  32 03 36 32 37 38 00 42  |b7a9af0e2.6278.B|
		00000060  08 08 d0 d5 a7 bd 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14288 chars]
	 >
	I0210 12:02:51.305882   11096 node_ready.go:53] node "multinode-032400-m02" has status "Ready":"False"
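(Annotation: the loop visible here GETs the node roughly every 500 ms and checks its Ready condition, logging `"Ready":"False"` until the kubelet reports otherwise. A minimal client-go sketch of the same polling pattern — assumed names, not minikube's actual node_ready.go — is:

    // Sketch of the ~500 ms node-readiness poll shown in this log, using
    // client-go. The caller supplies a configured Clientset and a context
    // that carries the overall timeout.
    package poll

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    func waitNodeReady(ctx context.Context, client kubernetes.Interface, name string) error {
        ticker := time.NewTicker(500 * time.Millisecond)
        defer ticker.Stop()
        for {
            node, err := client.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
            if err == nil {
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        return nil // node reported Ready:"True"
                    }
                }
            }
            select {
            case <-ctx.Done():
                return ctx.Err() // overall timeout comes from the caller's context
            case <-ticker.C:
            }
        }
    }
)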
	I0210 12:02:51.798761   11096 type.go:168] "Request Body" body=""
	I0210 12:02:51.798761   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:02:51.798761   11096 round_trippers.go:476] Request Headers:
	I0210 12:02:51.798761   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:02:51.798761   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:02:51.803653   11096 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:02:51.803653   11096 round_trippers.go:584] Response Headers:
	I0210 12:02:51.803714   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:02:51.803714   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:02:51.803714   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:02:51.803714   11096 round_trippers.go:587]     Content-Length: 3090
	I0210 12:02:51.803714   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:02:51 GMT
	I0210 12:02:51.803714   11096 round_trippers.go:587]     Audit-Id: 1c200fe3-2d3c-4333-8137-099b3d09cb73
	I0210 12:02:51.803714   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:02:51.803714   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fb 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 62 30 35 36  31 63 32 32 2d 64 62 66  |".*$b0561c22-dbf|
		00000040  32 2d 34 32 61 30 2d 62  64 66 33 2d 34 65 30 61  |2-42a0-bdf3-4e0a|
		00000050  62 37 61 39 61 66 30 65  32 03 36 32 37 38 00 42  |b7a9af0e2.6278.B|
		00000060  08 08 d0 d5 a7 bd 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14288 chars]
	 >
	I0210 12:02:52.298935   11096 type.go:168] "Request Body" body=""
	I0210 12:02:52.298935   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:02:52.298935   11096 round_trippers.go:476] Request Headers:
	I0210 12:02:52.298935   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:02:52.298935   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:02:52.303665   11096 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:02:52.303665   11096 round_trippers.go:584] Response Headers:
	I0210 12:02:52.303665   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:02:52.303665   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:02:52.303665   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:02:52.303665   11096 round_trippers.go:587]     Content-Length: 3090
	I0210 12:02:52.303665   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:02:52 GMT
	I0210 12:02:52.303665   11096 round_trippers.go:587]     Audit-Id: 7455c160-1d38-4fed-ab53-64dc83064ca7
	I0210 12:02:52.303665   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:02:52.304193   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fb 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 62 30 35 36  31 63 32 32 2d 64 62 66  |".*$b0561c22-dbf|
		00000040  32 2d 34 32 61 30 2d 62  64 66 33 2d 34 65 30 61  |2-42a0-bdf3-4e0a|
		00000050  62 37 61 39 61 66 30 65  32 03 36 32 37 38 00 42  |b7a9af0e2.6278.B|
		00000060  08 08 d0 d5 a7 bd 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14288 chars]
	 >
	I0210 12:02:52.798915   11096 type.go:168] "Request Body" body=""
	I0210 12:02:52.799580   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:02:52.799580   11096 round_trippers.go:476] Request Headers:
	I0210 12:02:52.799580   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:02:52.799580   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:02:52.804239   11096 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:02:52.804239   11096 round_trippers.go:584] Response Headers:
	I0210 12:02:52.804239   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:02:52.804239   11096 round_trippers.go:587]     Content-Length: 3090
	I0210 12:02:52.804239   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:02:52 GMT
	I0210 12:02:52.804239   11096 round_trippers.go:587]     Audit-Id: 192fa59a-0154-4310-aebb-ea097249394f
	I0210 12:02:52.804239   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:02:52.804239   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:02:52.804239   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:02:52.804624   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fb 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 62 30 35 36  31 63 32 32 2d 64 62 66  |".*$b0561c22-dbf|
		00000040  32 2d 34 32 61 30 2d 62  64 66 33 2d 34 65 30 61  |2-42a0-bdf3-4e0a|
		00000050  62 37 61 39 61 66 30 65  32 03 36 32 37 38 00 42  |b7a9af0e2.6278.B|
		00000060  08 08 d0 d5 a7 bd 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14288 chars]
	 >
	I0210 12:02:53.298908   11096 type.go:168] "Request Body" body=""
	I0210 12:02:53.298908   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:02:53.298908   11096 round_trippers.go:476] Request Headers:
	I0210 12:02:53.298908   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:02:53.298908   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:02:53.497976   11096 round_trippers.go:581] Response Status: 200 OK in 199 milliseconds
	I0210 12:02:53.497976   11096 round_trippers.go:584] Response Headers:
	I0210 12:02:53.497976   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:02:53.497976   11096 round_trippers.go:587]     Content-Length: 3090
	I0210 12:02:53.497976   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:02:53 GMT
	I0210 12:02:53.497976   11096 round_trippers.go:587]     Audit-Id: 8e2be0b9-9331-4c95-a652-a24563e8e2c8
	I0210 12:02:53.497976   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:02:53.497976   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:02:53.497976   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:02:53.498411   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fb 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 62 30 35 36  31 63 32 32 2d 64 62 66  |".*$b0561c22-dbf|
		00000040  32 2d 34 32 61 30 2d 62  64 66 33 2d 34 65 30 61  |2-42a0-bdf3-4e0a|
		00000050  62 37 61 39 61 66 30 65  32 03 36 32 37 38 00 42  |b7a9af0e2.6278.B|
		00000060  08 08 d0 d5 a7 bd 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14288 chars]
	 >
	I0210 12:02:53.498563   11096 node_ready.go:53] node "multinode-032400-m02" has status "Ready":"False"
	I0210 12:02:53.799231   11096 type.go:168] "Request Body" body=""
	I0210 12:02:53.799678   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:02:53.799678   11096 round_trippers.go:476] Request Headers:
	I0210 12:02:53.799678   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:02:53.799678   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:02:53.803649   11096 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:02:53.803649   11096 round_trippers.go:584] Response Headers:
	I0210 12:02:53.803649   11096 round_trippers.go:587]     Audit-Id: aaf33b9a-0534-4da4-9a87-db6c67b75a2f
	I0210 12:02:53.803735   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:02:53.803735   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:02:53.803735   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:02:53.803735   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:02:53.803735   11096 round_trippers.go:587]     Content-Length: 3090
	I0210 12:02:53.803735   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:02:53 GMT
	I0210 12:02:53.803867   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fb 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 62 30 35 36  31 63 32 32 2d 64 62 66  |".*$b0561c22-dbf|
		00000040  32 2d 34 32 61 30 2d 62  64 66 33 2d 34 65 30 61  |2-42a0-bdf3-4e0a|
		00000050  62 37 61 39 61 66 30 65  32 03 36 32 37 38 00 42  |b7a9af0e2.6278.B|
		00000060  08 08 d0 d5 a7 bd 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14288 chars]
	 >
	I0210 12:02:54.299169   11096 type.go:168] "Request Body" body=""
	I0210 12:02:54.299691   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:02:54.299803   11096 round_trippers.go:476] Request Headers:
	I0210 12:02:54.299803   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:02:54.299803   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:02:54.304135   11096 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:02:54.304202   11096 round_trippers.go:584] Response Headers:
	I0210 12:02:54.304202   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:02:54.304202   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:02:54.304202   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:02:54.304202   11096 round_trippers.go:587]     Content-Length: 3090
	I0210 12:02:54.304202   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:02:54 GMT
	I0210 12:02:54.304202   11096 round_trippers.go:587]     Audit-Id: a0be9e0f-1a90-4b64-ad99-3b6d36404516
	I0210 12:02:54.304202   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:02:54.304450   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fb 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 62 30 35 36  31 63 32 32 2d 64 62 66  |".*$b0561c22-dbf|
		00000040  32 2d 34 32 61 30 2d 62  64 66 33 2d 34 65 30 61  |2-42a0-bdf3-4e0a|
		00000050  62 37 61 39 61 66 30 65  32 03 36 32 37 38 00 42  |b7a9af0e2.6278.B|
		00000060  08 08 d0 d5 a7 bd 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14288 chars]
	 >
	I0210 12:02:54.799457   11096 type.go:168] "Request Body" body=""
	I0210 12:02:54.799457   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:02:54.799457   11096 round_trippers.go:476] Request Headers:
	I0210 12:02:54.799457   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:02:54.799457   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:02:54.803921   11096 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:02:54.804059   11096 round_trippers.go:584] Response Headers:
	I0210 12:02:54.804059   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:02:54.804059   11096 round_trippers.go:587]     Content-Length: 3090
	I0210 12:02:54.804059   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:02:54 GMT
	I0210 12:02:54.804059   11096 round_trippers.go:587]     Audit-Id: b2675db5-9c29-47ee-8214-c6c9b5882603
	I0210 12:02:54.804059   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:02:54.804059   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:02:54.804059   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:02:54.804330   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fb 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 62 30 35 36  31 63 32 32 2d 64 62 66  |".*$b0561c22-dbf|
		00000040  32 2d 34 32 61 30 2d 62  64 66 33 2d 34 65 30 61  |2-42a0-bdf3-4e0a|
		00000050  62 37 61 39 61 66 30 65  32 03 36 32 37 38 00 42  |b7a9af0e2.6278.B|
		00000060  08 08 d0 d5 a7 bd 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14288 chars]
	 >
	I0210 12:02:55.299450   11096 type.go:168] "Request Body" body=""
	I0210 12:02:55.299582   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:02:55.299612   11096 round_trippers.go:476] Request Headers:
	I0210 12:02:55.299612   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:02:55.299612   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:02:55.565681   11096 round_trippers.go:581] Response Status: 200 OK in 266 milliseconds
	I0210 12:02:55.565778   11096 round_trippers.go:584] Response Headers:
	I0210 12:02:55.565778   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:02:55 GMT
	I0210 12:02:55.565852   11096 round_trippers.go:587]     Audit-Id: 621f1157-f313-40e9-94bb-73b4d10c72c8
	I0210 12:02:55.565852   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:02:55.565852   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:02:55.565852   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:02:55.565852   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:02:55.565852   11096 round_trippers.go:587]     Content-Length: 3512
	I0210 12:02:55.566012   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 a1 1b 0a 87 0f 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 62 30 35 36  31 63 32 32 2d 64 62 66  |".*$b0561c22-dbf|
		00000040  32 2d 34 32 61 30 2d 62  64 66 33 2d 34 65 30 61  |2-42a0-bdf3-4e0a|
		00000050  62 37 61 39 61 66 30 65  32 03 36 35 35 38 00 42  |b7a9af0e2.6558.B|
		00000060  08 08 d0 d5 a7 bd 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 16348 chars]
	 >
	I0210 12:02:55.566181   11096 node_ready.go:53] node "multinode-032400-m02" has status "Ready":"False"
	I0210 12:02:55.799144   11096 type.go:168] "Request Body" body=""
	I0210 12:02:55.799712   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:02:55.799712   11096 round_trippers.go:476] Request Headers:
	I0210 12:02:55.799712   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:02:55.799712   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:02:55.804171   11096 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:02:55.804171   11096 round_trippers.go:584] Response Headers:
	I0210 12:02:55.804171   11096 round_trippers.go:587]     Audit-Id: e973d7b9-a935-41f6-8ef6-78fb4bec7ad1
	I0210 12:02:55.804289   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:02:55.804289   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:02:55.804289   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:02:55.804289   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:02:55.804289   11096 round_trippers.go:587]     Content-Length: 3512
	I0210 12:02:55.804289   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:02:55 GMT
	I0210 12:02:55.804687   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 a1 1b 0a 87 0f 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 62 30 35 36  31 63 32 32 2d 64 62 66  |".*$b0561c22-dbf|
		00000040  32 2d 34 32 61 30 2d 62  64 66 33 2d 34 65 30 61  |2-42a0-bdf3-4e0a|
		00000050  62 37 61 39 61 66 30 65  32 03 36 35 35 38 00 42  |b7a9af0e2.6558.B|
		00000060  08 08 d0 d5 a7 bd 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 16348 chars]
	 >
	I0210 12:02:56.298807   11096 type.go:168] "Request Body" body=""
	I0210 12:02:56.298807   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:02:56.298807   11096 round_trippers.go:476] Request Headers:
	I0210 12:02:56.298807   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:02:56.298807   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:02:56.303406   11096 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:02:56.303406   11096 round_trippers.go:584] Response Headers:
	I0210 12:02:56.303406   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:02:56.303406   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:02:56.303406   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:02:56.303406   11096 round_trippers.go:587]     Content-Length: 3512
	I0210 12:02:56.303406   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:02:56 GMT
	I0210 12:02:56.303406   11096 round_trippers.go:587]     Audit-Id: b54c2b1d-38d1-495d-89e1-84c78236e122
	I0210 12:02:56.303406   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:02:56.303406   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 a1 1b 0a 87 0f 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 62 30 35 36  31 63 32 32 2d 64 62 66  |".*$b0561c22-dbf|
		00000040  32 2d 34 32 61 30 2d 62  64 66 33 2d 34 65 30 61  |2-42a0-bdf3-4e0a|
		00000050  62 37 61 39 61 66 30 65  32 03 36 35 35 38 00 42  |b7a9af0e2.6558.B|
		00000060  08 08 d0 d5 a7 bd 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 16348 chars]
	 >
	I0210 12:02:56.799064   11096 type.go:168] "Request Body" body=""
	I0210 12:02:56.799064   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:02:56.799064   11096 round_trippers.go:476] Request Headers:
	I0210 12:02:56.799064   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:02:56.799064   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:02:56.947842   11096 round_trippers.go:581] Response Status: 200 OK in 148 milliseconds
	I0210 12:02:56.947842   11096 round_trippers.go:584] Response Headers:
	I0210 12:02:56.947842   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:02:56.947842   11096 round_trippers.go:587]     Content-Length: 3512
	I0210 12:02:56.947842   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:02:56 GMT
	I0210 12:02:56.947842   11096 round_trippers.go:587]     Audit-Id: 4162c3d2-0211-4983-b155-a9ebd2ee3bbd
	I0210 12:02:56.947842   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:02:56.947842   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:02:56.947842   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:02:56.948220   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 a1 1b 0a 87 0f 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 62 30 35 36  31 63 32 32 2d 64 62 66  |".*$b0561c22-dbf|
		00000040  32 2d 34 32 61 30 2d 62  64 66 33 2d 34 65 30 61  |2-42a0-bdf3-4e0a|
		00000050  62 37 61 39 61 66 30 65  32 03 36 35 35 38 00 42  |b7a9af0e2.6558.B|
		00000060  08 08 d0 d5 a7 bd 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 16348 chars]
	 >
	I0210 12:02:57.299673   11096 type.go:168] "Request Body" body=""
	I0210 12:02:57.299673   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:02:57.299673   11096 round_trippers.go:476] Request Headers:
	I0210 12:02:57.299673   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:02:57.299673   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:02:57.304009   11096 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:02:57.304009   11096 round_trippers.go:584] Response Headers:
	I0210 12:02:57.304097   11096 round_trippers.go:587]     Audit-Id: defb8752-d825-42b7-8b66-0992ae761c20
	I0210 12:02:57.304097   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:02:57.304097   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:02:57.304097   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:02:57.304097   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:02:57.304097   11096 round_trippers.go:587]     Content-Length: 3512
	I0210 12:02:57.304097   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:02:57 GMT
	I0210 12:02:57.304257   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 a1 1b 0a 87 0f 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 62 30 35 36  31 63 32 32 2d 64 62 66  |".*$b0561c22-dbf|
		00000040  32 2d 34 32 61 30 2d 62  64 66 33 2d 34 65 30 61  |2-42a0-bdf3-4e0a|
		00000050  62 37 61 39 61 66 30 65  32 03 36 35 35 38 00 42  |b7a9af0e2.6558.B|
		00000060  08 08 d0 d5 a7 bd 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 16348 chars]
	 >
	I0210 12:02:57.800449   11096 type.go:168] "Request Body" body=""
	I0210 12:02:57.800591   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:02:57.800591   11096 round_trippers.go:476] Request Headers:
	I0210 12:02:57.800591   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:02:57.800591   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:02:57.807705   11096 round_trippers.go:581] Response Status: 200 OK in 7 milliseconds
	I0210 12:02:57.807765   11096 round_trippers.go:584] Response Headers:
	I0210 12:02:57.807765   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:02:57.807765   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:02:57.807765   11096 round_trippers.go:587]     Content-Length: 3512
	I0210 12:02:57.807765   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:02:57 GMT
	I0210 12:02:57.807765   11096 round_trippers.go:587]     Audit-Id: 61aebddb-bcea-4602-a13b-0212622d719f
	I0210 12:02:57.807765   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:02:57.807765   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:02:57.807765   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 a1 1b 0a 87 0f 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 62 30 35 36  31 63 32 32 2d 64 62 66  |".*$b0561c22-dbf|
		00000040  32 2d 34 32 61 30 2d 62  64 66 33 2d 34 65 30 61  |2-42a0-bdf3-4e0a|
		00000050  62 37 61 39 61 66 30 65  32 03 36 35 35 38 00 42  |b7a9af0e2.6558.B|
		00000060  08 08 d0 d5 a7 bd 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 16348 chars]
	 >
	I0210 12:02:57.807765   11096 node_ready.go:53] node "multinode-032400-m02" has status "Ready":"False"
	I0210 12:02:58.299923   11096 type.go:168] "Request Body" body=""
	I0210 12:02:58.300544   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:02:58.300622   11096 round_trippers.go:476] Request Headers:
	I0210 12:02:58.300622   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:02:58.300622   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:02:58.304284   11096 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:02:58.304284   11096 round_trippers.go:584] Response Headers:
	I0210 12:02:58.304284   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:02:58.304284   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:02:58.304284   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:02:58.304284   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:02:58.304284   11096 round_trippers.go:587]     Content-Length: 3512
	I0210 12:02:58.304284   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:02:58 GMT
	I0210 12:02:58.304284   11096 round_trippers.go:587]     Audit-Id: 13d146b9-366e-43ab-819f-af93fa817bc8
	I0210 12:02:58.304587   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 a1 1b 0a 87 0f 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 62 30 35 36  31 63 32 32 2d 64 62 66  |".*$b0561c22-dbf|
		00000040  32 2d 34 32 61 30 2d 62  64 66 33 2d 34 65 30 61  |2-42a0-bdf3-4e0a|
		00000050  62 37 61 39 61 66 30 65  32 03 36 35 35 38 00 42  |b7a9af0e2.6558.B|
		00000060  08 08 d0 d5 a7 bd 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 16348 chars]
	 >
	I0210 12:02:58.798954   11096 type.go:168] "Request Body" body=""
	I0210 12:02:58.798954   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:02:58.798954   11096 round_trippers.go:476] Request Headers:
	I0210 12:02:58.798954   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:02:58.798954   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:02:58.803272   11096 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:02:58.803272   11096 round_trippers.go:584] Response Headers:
	I0210 12:02:58.803272   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:02:58 GMT
	I0210 12:02:58.803272   11096 round_trippers.go:587]     Audit-Id: 9a1b868c-834b-46ed-9aa1-e5cdb95e58bb
	I0210 12:02:58.803272   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:02:58.803272   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:02:58.803272   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:02:58.803272   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:02:58.803272   11096 round_trippers.go:587]     Content-Length: 3512
	I0210 12:02:58.803272   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 a1 1b 0a 87 0f 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 62 30 35 36  31 63 32 32 2d 64 62 66  |".*$b0561c22-dbf|
		00000040  32 2d 34 32 61 30 2d 62  64 66 33 2d 34 65 30 61  |2-42a0-bdf3-4e0a|
		00000050  62 37 61 39 61 66 30 65  32 03 36 35 35 38 00 42  |b7a9af0e2.6558.B|
		00000060  08 08 d0 d5 a7 bd 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 16348 chars]
	 >
	I0210 12:02:59.299907   11096 type.go:168] "Request Body" body=""
	I0210 12:02:59.300292   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:02:59.300292   11096 round_trippers.go:476] Request Headers:
	I0210 12:02:59.300292   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:02:59.300292   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:02:59.304196   11096 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:02:59.304196   11096 round_trippers.go:584] Response Headers:
	I0210 12:02:59.304196   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:02:59.304196   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:02:59.304196   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:02:59.304196   11096 round_trippers.go:587]     Content-Length: 3390
	I0210 12:02:59.304196   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:02:59 GMT
	I0210 12:02:59.304196   11096 round_trippers.go:587]     Audit-Id: c060f149-dc98-4cff-8de6-81393a3ca8c9
	I0210 12:02:59.304196   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:02:59.304721   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 a7 1a 0a bd 0f 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 62 30 35 36  31 63 32 32 2d 64 62 66  |".*$b0561c22-dbf|
		00000040  32 2d 34 32 61 30 2d 62  64 66 33 2d 34 65 30 61  |2-42a0-bdf3-4e0a|
		00000050  62 37 61 39 61 66 30 65  32 03 36 36 32 38 00 42  |b7a9af0e2.6628.B|
		00000060  08 08 d0 d5 a7 bd 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 15722 chars]
	 >
	I0210 12:02:59.304836   11096 node_ready.go:49] node "multinode-032400-m02" has status "Ready":"True"
	I0210 12:02:59.304943   11096 node_ready.go:38] duration metric: took 34.0062212s for node "multinode-032400-m02" to be "Ready" ...
	I0210 12:02:59.304943   11096 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
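(Annotation: once the node turns Ready, the test lists the kube-system pods once and then waits on each system-critical pod's own Ready condition, as the coredns/etcd requests below show. A sketch of that per-pod check, under the same illustrative assumptions as above — not minikube's actual pod_ready.go — is:

    // Sketch of the per-pod readiness check that follows: list kube-system
    // pods, then treat a pod as ready when its PodReady condition is True.
    package poll

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    func podIsReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func systemPodNames(ctx context.Context, client kubernetes.Interface) ([]string, error) {
        pods, err := client.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
        if err != nil {
            return nil, err
        }
        names := make([]string, 0, len(pods.Items))
        for _, p := range pods.Items {
            names = append(names, p.Name)
        }
        return names, nil
    }
)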
	I0210 12:02:59.305076   11096 type.go:204] "Request Body" body=""
	I0210 12:02:59.305076   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/namespaces/kube-system/pods
	I0210 12:02:59.305076   11096 round_trippers.go:476] Request Headers:
	I0210 12:02:59.305076   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:02:59.305183   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:02:59.308355   11096 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:02:59.309152   11096 round_trippers.go:584] Response Headers:
	I0210 12:02:59.309152   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:02:59 GMT
	I0210 12:02:59.309152   11096 round_trippers.go:587]     Audit-Id: 0f7f15db-5f89-43ae-a23e-c896ea295144
	I0210 12:02:59.309152   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:02:59.309152   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:02:59.309152   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:02:59.309152   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:02:59.311643   11096 type.go:204] "Response Body" body=<
		00000000  6b 38 73 00 0a 0d 0a 02  76 31 12 07 50 6f 64 4c  |k8s.....v1..PodL|
		00000010  69 73 74 12 f5 93 03 0a  09 0a 00 12 03 36 36 32  |ist..........662|
		00000020  1a 00 12 d2 27 0a ae 19  0a 18 63 6f 72 65 64 6e  |....'.....coredn|
		00000030  73 2d 36 36 38 64 36 62  66 39 62 63 2d 77 38 72  |s-668d6bf9bc-w8r|
		00000040  72 39 12 13 63 6f 72 65  64 6e 73 2d 36 36 38 64  |r9..coredns-668d|
		00000050  36 62 66 39 62 63 2d 1a  0b 6b 75 62 65 2d 73 79  |6bf9bc-..kube-sy|
		00000060  73 74 65 6d 22 00 2a 24  65 34 35 61 33 37 62 66  |stem".*$e45a37bf|
		00000070  2d 65 37 64 61 2d 34 31  32 39 2d 62 62 37 65 2d  |-e7da-4129-bb7e-|
		00000080  38 62 65 37 64 62 65 39  33 65 30 39 32 03 34 35  |8be7dbe93e092.45|
		00000090  33 38 00 42 08 08 92 d4  a7 bd 06 10 00 5a 13 0a  |38.B.........Z..|
		000000a0  07 6b 38 73 2d 61 70 70  12 08 6b 75 62 65 2d 64  |.k8s-app..kube-d|
		000000b0  6e 73 5a 1f 0a 11 70 6f  64 2d 74 65 6d 70 6c 61  |nsZ...pod-templa|
		000000c0  74 65 2d 68 61 73 68 12  0a 36 36 38 64 36 62 66  |te-hash..668d6b [truncated 254383 chars]
	 >
	I0210 12:02:59.312147   11096 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-w8rr9" in "kube-system" namespace to be "Ready" ...
	I0210 12:02:59.312288   11096 type.go:168] "Request Body" body=""
	I0210 12:02:59.312364   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-w8rr9
	I0210 12:02:59.312364   11096 round_trippers.go:476] Request Headers:
	I0210 12:02:59.312364   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:02:59.312426   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:02:59.317324   11096 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:02:59.317361   11096 round_trippers.go:584] Response Headers:
	I0210 12:02:59.317361   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:02:59.317361   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:02:59.317361   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:02:59.317361   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:02:59 GMT
	I0210 12:02:59.317361   11096 round_trippers.go:587]     Audit-Id: 00773e40-0231-47a9-a6b5-f7d75456901d
	I0210 12:02:59.317361   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:02:59.317731   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  d2 27 0a ae 19 0a 18 63  6f 72 65 64 6e 73 2d 36  |.'.....coredns-6|
		00000020  36 38 64 36 62 66 39 62  63 2d 77 38 72 72 39 12  |68d6bf9bc-w8rr9.|
		00000030  13 63 6f 72 65 64 6e 73  2d 36 36 38 64 36 62 66  |.coredns-668d6bf|
		00000040  39 62 63 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |9bc-..kube-syste|
		00000050  6d 22 00 2a 24 65 34 35  61 33 37 62 66 2d 65 37  |m".*$e45a37bf-e7|
		00000060  64 61 2d 34 31 32 39 2d  62 62 37 65 2d 38 62 65  |da-4129-bb7e-8be|
		00000070  37 64 62 65 39 33 65 30  39 32 03 34 35 33 38 00  |7dbe93e092.4538.|
		00000080  42 08 08 92 d4 a7 bd 06  10 00 5a 13 0a 07 6b 38  |B.........Z...k8|
		00000090  73 2d 61 70 70 12 08 6b  75 62 65 2d 64 6e 73 5a  |s-app..kube-dnsZ|
		000000a0  1f 0a 11 70 6f 64 2d 74  65 6d 70 6c 61 74 65 2d  |...pod-template-|
		000000b0  68 61 73 68 12 0a 36 36  38 64 36 62 66 39 62 63  |hash..668d6bf9bc|
		000000c0  6a 53 0a 0a 52 65 70 6c  69 63 61 53 65 74 1a 12  |jS..ReplicaSet. [truncated 24169 chars]
	 >
	I0210 12:02:59.317783   11096 type.go:168] "Request Body" body=""
	I0210 12:02:59.317783   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400
	I0210 12:02:59.317783   11096 round_trippers.go:476] Request Headers:
	I0210 12:02:59.317783   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:02:59.317783   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:02:59.321010   11096 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:02:59.321010   11096 round_trippers.go:584] Response Headers:
	I0210 12:02:59.321010   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:02:59 GMT
	I0210 12:02:59.321010   11096 round_trippers.go:587]     Audit-Id: ee8872c7-4050-4160-a711-2fd79c56744b
	I0210 12:02:59.321010   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:02:59.321010   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:02:59.321010   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:02:59.321010   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:02:59.321010   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d8 22 0a 8a 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 03 34 33  32 38 00 42 08 08 86 d4  |1b262.4328.B....|
		00000060  a7 bd 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 21016 chars]
	 >
	I0210 12:02:59.321617   11096 pod_ready.go:93] pod "coredns-668d6bf9bc-w8rr9" in "kube-system" namespace has status "Ready":"True"
	I0210 12:02:59.321617   11096 pod_ready.go:82] duration metric: took 9.3986ms for pod "coredns-668d6bf9bc-w8rr9" in "kube-system" namespace to be "Ready" ...
	I0210 12:02:59.321617   11096 pod_ready.go:79] waiting up to 6m0s for pod "etcd-multinode-032400" in "kube-system" namespace to be "Ready" ...
	I0210 12:02:59.321617   11096 type.go:168] "Request Body" body=""
	I0210 12:02:59.321785   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-032400
	I0210 12:02:59.321785   11096 round_trippers.go:476] Request Headers:
	I0210 12:02:59.321830   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:02:59.321830   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:02:59.323997   11096 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0210 12:02:59.324655   11096 round_trippers.go:584] Response Headers:
	I0210 12:02:59.324655   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:02:59.324655   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:02:59.324731   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:02:59.324731   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:02:59.324731   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:02:59 GMT
	I0210 12:02:59.324731   11096 round_trippers.go:587]     Audit-Id: 59ef6fb2-38ee-4b1e-9d06-11defcef03ad
	I0210 12:02:59.324975   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  ab 2b 0a 9e 1a 0a 15 65  74 63 64 2d 6d 75 6c 74  |.+.....etcd-mult|
		00000020  69 6e 6f 64 65 2d 30 33  32 34 30 30 12 00 1a 0b  |inode-032400....|
		00000030  6b 75 62 65 2d 73 79 73  74 65 6d 22 00 2a 24 33  |kube-system".*$3|
		00000040  34 64 62 31 34 36 63 2d  65 30 39 64 2d 34 39 35  |4db146c-e09d-495|
		00000050  39 2d 38 33 32 35 2d 64  34 34 35 33 64 66 63 66  |9-8325-d4453dfcf|
		00000060  64 36 32 32 03 34 30 31  38 00 42 08 08 8b d4 a7  |d622.4018.B.....|
		00000070  bd 06 10 00 5a 11 0a 09  63 6f 6d 70 6f 6e 65 6e  |....Z...componen|
		00000080  74 12 04 65 74 63 64 5a  15 0a 04 74 69 65 72 12  |t..etcdZ...tier.|
		00000090  0d 63 6f 6e 74 72 6f 6c  2d 70 6c 61 6e 65 62 4f  |.control-planebO|
		000000a0  0a 30 6b 75 62 65 61 64  6d 2e 6b 75 62 65 72 6e  |.0kubeadm.kubern|
		000000b0  65 74 65 73 2e 69 6f 2f  65 74 63 64 2e 61 64 76  |etes.io/etcd.adv|
		000000c0  65 72 74 69 73 65 2d 63  6c 69 65 6e 74 2d 75 72  |ertise-client-u [truncated 26532 chars]
	 >
	I0210 12:02:59.324975   11096 type.go:168] "Request Body" body=""
	I0210 12:02:59.324975   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400
	I0210 12:02:59.324975   11096 round_trippers.go:476] Request Headers:
	I0210 12:02:59.324975   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:02:59.324975   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:02:59.326581   11096 round_trippers.go:581] Response Status: 200 OK in 1 milliseconds
	I0210 12:02:59.326581   11096 round_trippers.go:584] Response Headers:
	I0210 12:02:59.326581   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:02:59 GMT
	I0210 12:02:59.326581   11096 round_trippers.go:587]     Audit-Id: 7cfeedd2-222d-4138-aaaf-7a6af59e162a
	I0210 12:02:59.326581   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:02:59.326581   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:02:59.326581   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:02:59.326581   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:02:59.327588   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d8 22 0a 8a 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 03 34 33  32 38 00 42 08 08 86 d4  |1b262.4328.B....|
		00000060  a7 bd 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 21016 chars]
	 >
	I0210 12:02:59.327588   11096 pod_ready.go:93] pod "etcd-multinode-032400" in "kube-system" namespace has status "Ready":"True"
	I0210 12:02:59.327588   11096 pod_ready.go:82] duration metric: took 5.9711ms for pod "etcd-multinode-032400" in "kube-system" namespace to be "Ready" ...
	I0210 12:02:59.327588   11096 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-multinode-032400" in "kube-system" namespace to be "Ready" ...
	I0210 12:02:59.327588   11096 type.go:168] "Request Body" body=""
	I0210 12:02:59.327588   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-032400
	I0210 12:02:59.327588   11096 round_trippers.go:476] Request Headers:
	I0210 12:02:59.327588   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:02:59.327588   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:02:59.330501   11096 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0210 12:02:59.330501   11096 round_trippers.go:584] Response Headers:
	I0210 12:02:59.330549   11096 round_trippers.go:587]     Audit-Id: f9f3afcb-3d48-4b41-a008-76179c0a2197
	I0210 12:02:59.330549   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:02:59.330549   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:02:59.330549   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:02:59.330549   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:02:59.330549   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:02:59 GMT
	I0210 12:02:59.331015   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  8f 34 0a ae 1c 0a 1f 6b  75 62 65 2d 61 70 69 73  |.4.....kube-apis|
		00000020  65 72 76 65 72 2d 6d 75  6c 74 69 6e 6f 64 65 2d  |erver-multinode-|
		00000030  30 33 32 34 30 30 12 00  1a 0b 6b 75 62 65 2d 73  |032400....kube-s|
		00000040  79 73 74 65 6d 22 00 2a  24 37 61 33 35 34 37 32  |ystem".*$7a35472|
		00000050  64 2d 64 37 63 30 2d 34  63 37 64 2d 61 35 62 31  |d-d7c0-4c7d-a5b1|
		00000060  2d 65 30 39 34 33 37 30  61 66 31 63 32 32 03 33  |-e094370af1c22.3|
		00000070  39 38 38 00 42 08 08 88  d4 a7 bd 06 10 00 5a 1b  |988.B.........Z.|
		00000080  0a 09 63 6f 6d 70 6f 6e  65 6e 74 12 0e 6b 75 62  |..component..kub|
		00000090  65 2d 61 70 69 73 65 72  76 65 72 5a 15 0a 04 74  |e-apiserverZ...t|
		000000a0  69 65 72 12 0d 63 6f 6e  74 72 6f 6c 2d 70 6c 61  |ier..control-pla|
		000000b0  6e 65 62 56 0a 3f 6b 75  62 65 61 64 6d 2e 6b 75  |nebV.?kubeadm.ku|
		000000c0  62 65 72 6e 65 74 65 73  2e 69 6f 2f 6b 75 62 65  |bernetes.io/kub [truncated 32066 chars]
	 >
	I0210 12:02:59.331015   11096 type.go:168] "Request Body" body=""
	I0210 12:02:59.331015   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400
	I0210 12:02:59.331015   11096 round_trippers.go:476] Request Headers:
	I0210 12:02:59.331015   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:02:59.331015   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:02:59.333616   11096 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0210 12:02:59.333616   11096 round_trippers.go:584] Response Headers:
	I0210 12:02:59.333616   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:02:59.333616   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:02:59.333616   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:02:59.333616   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:02:59.333616   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:02:59 GMT
	I0210 12:02:59.333616   11096 round_trippers.go:587]     Audit-Id: 2e36c73f-d78a-4880-979f-66c5c19e932c
	I0210 12:02:59.334599   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d8 22 0a 8a 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 03 34 33  32 38 00 42 08 08 86 d4  |1b262.4328.B....|
		00000060  a7 bd 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 21016 chars]
	 >
	I0210 12:02:59.334599   11096 pod_ready.go:93] pod "kube-apiserver-multinode-032400" in "kube-system" namespace has status "Ready":"True"
	I0210 12:02:59.334599   11096 pod_ready.go:82] duration metric: took 7.011ms for pod "kube-apiserver-multinode-032400" in "kube-system" namespace to be "Ready" ...
	I0210 12:02:59.334599   11096 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-multinode-032400" in "kube-system" namespace to be "Ready" ...
	I0210 12:02:59.334599   11096 type.go:168] "Request Body" body=""
	I0210 12:02:59.334599   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-032400
	I0210 12:02:59.334599   11096 round_trippers.go:476] Request Headers:
	I0210 12:02:59.334599   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:02:59.334599   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:02:59.337196   11096 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0210 12:02:59.337196   11096 round_trippers.go:584] Response Headers:
	I0210 12:02:59.337196   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:02:59.337196   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:02:59.337196   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:02:59 GMT
	I0210 12:02:59.337196   11096 round_trippers.go:587]     Audit-Id: 9789609a-0a67-483b-b27d-d9f85899715a
	I0210 12:02:59.337196   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:02:59.337196   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:02:59.337196   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  f0 30 0a 9a 1d 0a 28 6b  75 62 65 2d 63 6f 6e 74  |.0....(kube-cont|
		00000020  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 2d 6d  |roller-manager-m|
		00000030  75 6c 74 69 6e 6f 64 65  2d 30 33 32 34 30 30 12  |ultinode-032400.|
		00000040  00 1a 0b 6b 75 62 65 2d  73 79 73 74 65 6d 22 00  |...kube-system".|
		00000050  2a 24 63 33 65 35 30 35  34 30 2d 64 64 36 36 2d  |*$c3e50540-dd66-|
		00000060  34 37 61 30 2d 61 34 33  33 2d 36 32 32 64 66 35  |47a0-a433-622df5|
		00000070  39 66 62 34 34 31 32 03  33 39 30 38 00 42 08 08  |9fb4412.3908.B..|
		00000080  8b d4 a7 bd 06 10 00 5a  24 0a 09 63 6f 6d 70 6f  |.......Z$..compo|
		00000090  6e 65 6e 74 12 17 6b 75  62 65 2d 63 6f 6e 74 72  |nent..kube-contr|
		000000a0  6f 6c 6c 65 72 2d 6d 61  6e 61 67 65 72 5a 15 0a  |oller-managerZ..|
		000000b0  04 74 69 65 72 12 0d 63  6f 6e 74 72 6f 6c 2d 70  |.tier..control-p|
		000000c0  6c 61 6e 65 62 3d 0a 19  6b 75 62 65 72 6e 65 74  |laneb=..kuberne [truncated 30013 chars]
	 >
	I0210 12:02:59.337196   11096 type.go:168] "Request Body" body=""
	I0210 12:02:59.337196   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400
	I0210 12:02:59.337196   11096 round_trippers.go:476] Request Headers:
	I0210 12:02:59.337196   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:02:59.337196   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:02:59.340637   11096 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:02:59.340637   11096 round_trippers.go:584] Response Headers:
	I0210 12:02:59.340637   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:02:59.340637   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:02:59 GMT
	I0210 12:02:59.340637   11096 round_trippers.go:587]     Audit-Id: 97503b00-6aee-452a-bc8b-b8a4d8a304c4
	I0210 12:02:59.340637   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:02:59.340637   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:02:59.340637   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:02:59.340637   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d8 22 0a 8a 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 03 34 33  32 38 00 42 08 08 86 d4  |1b262.4328.B....|
		00000060  a7 bd 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 21016 chars]
	 >
	I0210 12:02:59.340637   11096 pod_ready.go:93] pod "kube-controller-manager-multinode-032400" in "kube-system" namespace has status "Ready":"True"
	I0210 12:02:59.340637   11096 pod_ready.go:82] duration metric: took 6.0384ms for pod "kube-controller-manager-multinode-032400" in "kube-system" namespace to be "Ready" ...
	I0210 12:02:59.341973   11096 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-rrh82" in "kube-system" namespace to be "Ready" ...
	I0210 12:02:59.341973   11096 type.go:168] "Request Body" body=""
	I0210 12:02:59.500711   11096 request.go:661] Waited for 158.7369ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.136.201:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rrh82
	I0210 12:02:59.500711   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rrh82
	I0210 12:02:59.500711   11096 round_trippers.go:476] Request Headers:
	I0210 12:02:59.500711   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:02:59.500711   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:02:59.505039   11096 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:02:59.505039   11096 round_trippers.go:584] Response Headers:
	I0210 12:02:59.505039   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:02:59.505039   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:02:59.505039   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:02:59 GMT
	I0210 12:02:59.505148   11096 round_trippers.go:587]     Audit-Id: b5d41a82-91e3-4fab-ac74-4a55a7dfda9d
	I0210 12:02:59.505148   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:02:59.505148   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:02:59.505440   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  a2 25 0a c0 14 0a 10 6b  75 62 65 2d 70 72 6f 78  |.%.....kube-prox|
		00000020  79 2d 72 72 68 38 32 12  0b 6b 75 62 65 2d 70 72  |y-rrh82..kube-pr|
		00000030  6f 78 79 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |oxy-..kube-syste|
		00000040  6d 22 00 2a 24 39 61 64  37 66 32 38 31 2d 66 30  |m".*$9ad7f281-f0|
		00000050  32 32 2d 34 66 33 62 2d  62 32 30 36 2d 33 39 63  |22-4f3b-b206-39c|
		00000060  65 34 32 37 31 33 63 66  39 32 03 34 30 36 38 00  |e42713cf92.4068.|
		00000070  42 08 08 92 d4 a7 bd 06  10 00 5a 26 0a 18 63 6f  |B.........Z&..co|
		00000080  6e 74 72 6f 6c 6c 65 72  2d 72 65 76 69 73 69 6f  |ntroller-revisio|
		00000090  6e 2d 68 61 73 68 12 0a  35 36 36 64 37 62 39 66  |n-hash..566d7b9f|
		000000a0  38 35 5a 15 0a 07 6b 38  73 2d 61 70 70 12 0a 6b  |85Z...k8s-app..k|
		000000b0  75 62 65 2d 70 72 6f 78  79 5a 1c 0a 17 70 6f 64  |ube-proxyZ...pod|
		000000c0  2d 74 65 6d 70 6c 61 74  65 2d 67 65 6e 65 72 61  |-template-gener [truncated 22668 chars]
	 >
	I0210 12:02:59.505534   11096 type.go:168] "Request Body" body=""
	I0210 12:02:59.700464   11096 request.go:661] Waited for 194.9276ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.136.201:8443/api/v1/nodes/multinode-032400
	I0210 12:02:59.700906   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400
	I0210 12:02:59.700906   11096 round_trippers.go:476] Request Headers:
	I0210 12:02:59.700906   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:02:59.700906   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:02:59.705498   11096 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:02:59.705498   11096 round_trippers.go:584] Response Headers:
	I0210 12:02:59.705498   11096 round_trippers.go:587]     Audit-Id: b177b052-d559-49ab-86a6-8b2e8e06d4a0
	I0210 12:02:59.705498   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:02:59.705498   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:02:59.705498   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:02:59.705498   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:02:59.705498   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:02:59 GMT
	I0210 12:02:59.705871   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d8 22 0a 8a 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 03 34 33  32 38 00 42 08 08 86 d4  |1b262.4328.B....|
		00000060  a7 bd 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 21016 chars]
	 >
	I0210 12:02:59.706058   11096 pod_ready.go:93] pod "kube-proxy-rrh82" in "kube-system" namespace has status "Ready":"True"
	I0210 12:02:59.706058   11096 pod_ready.go:82] duration metric: took 364.0809ms for pod "kube-proxy-rrh82" in "kube-system" namespace to be "Ready" ...
	I0210 12:02:59.706058   11096 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-xltxj" in "kube-system" namespace to be "Ready" ...
	I0210 12:02:59.706220   11096 type.go:168] "Request Body" body=""
	I0210 12:02:59.900298   11096 request.go:661] Waited for 194.0757ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.136.201:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xltxj
	I0210 12:02:59.900298   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xltxj
	I0210 12:02:59.900298   11096 round_trippers.go:476] Request Headers:
	I0210 12:02:59.900298   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:02:59.900298   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:02:59.902996   11096 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0210 12:02:59.902996   11096 round_trippers.go:584] Response Headers:
	I0210 12:02:59.903803   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:02:59.903803   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:02:59.903803   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:02:59.903803   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:02:59 GMT
	I0210 12:02:59.903803   11096 round_trippers.go:587]     Audit-Id: 2042b6cb-519f-4f90-aa0e-c9e6786a238b
	I0210 12:02:59.903803   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:02:59.904138   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  a5 25 0a bf 14 0a 10 6b  75 62 65 2d 70 72 6f 78  |.%.....kube-prox|
		00000020  79 2d 78 6c 74 78 6a 12  0b 6b 75 62 65 2d 70 72  |y-xltxj..kube-pr|
		00000030  6f 78 79 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |oxy-..kube-syste|
		00000040  6d 22 00 2a 24 39 61 35  65 35 38 62 63 2d 35 34  |m".*$9a5e58bc-54|
		00000050  62 31 2d 34 33 62 39 2d  61 38 38 39 2d 30 64 35  |b1-43b9-a889-0d5|
		00000060  30 64 34 33 35 61 66 38  33 32 03 36 33 35 38 00  |0d435af832.6358.|
		00000070  42 08 08 d0 d5 a7 bd 06  10 00 5a 26 0a 18 63 6f  |B.........Z&..co|
		00000080  6e 74 72 6f 6c 6c 65 72  2d 72 65 76 69 73 69 6f  |ntroller-revisio|
		00000090  6e 2d 68 61 73 68 12 0a  35 36 36 64 37 62 39 66  |n-hash..566d7b9f|
		000000a0  38 35 5a 15 0a 07 6b 38  73 2d 61 70 70 12 0a 6b  |85Z...k8s-app..k|
		000000b0  75 62 65 2d 70 72 6f 78  79 5a 1c 0a 17 70 6f 64  |ube-proxyZ...pod|
		000000c0  2d 74 65 6d 70 6c 61 74  65 2d 67 65 6e 65 72 61  |-template-gener [truncated 22671 chars]
	 >
	I0210 12:02:59.904423   11096 type.go:168] "Request Body" body=""
	I0210 12:03:00.100127   11096 request.go:661] Waited for 195.6561ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.136.201:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:03:00.100127   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:03:00.100127   11096 round_trippers.go:476] Request Headers:
	I0210 12:03:00.100127   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:03:00.100127   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:03:00.104770   11096 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:03:00.104770   11096 round_trippers.go:584] Response Headers:
	I0210 12:03:00.104770   11096 round_trippers.go:587]     Content-Length: 3390
	I0210 12:03:00.104850   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:03:00 GMT
	I0210 12:03:00.104850   11096 round_trippers.go:587]     Audit-Id: dc09ff91-3440-476b-b754-bbe417267c95
	I0210 12:03:00.104850   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:03:00.104850   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:03:00.104850   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:03:00.104850   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:03:00.104951   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 a7 1a 0a bd 0f 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 62 30 35 36  31 63 32 32 2d 64 62 66  |".*$b0561c22-dbf|
		00000040  32 2d 34 32 61 30 2d 62  64 66 33 2d 34 65 30 61  |2-42a0-bdf3-4e0a|
		00000050  62 37 61 39 61 66 30 65  32 03 36 36 32 38 00 42  |b7a9af0e2.6628.B|
		00000060  08 08 d0 d5 a7 bd 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 15722 chars]
	 >
	I0210 12:03:00.105093   11096 pod_ready.go:93] pod "kube-proxy-xltxj" in "kube-system" namespace has status "Ready":"True"
	I0210 12:03:00.105176   11096 pod_ready.go:82] duration metric: took 399.1137ms for pod "kube-proxy-xltxj" in "kube-system" namespace to be "Ready" ...
	I0210 12:03:00.105193   11096 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-multinode-032400" in "kube-system" namespace to be "Ready" ...
	I0210 12:03:00.105262   11096 type.go:168] "Request Body" body=""
	I0210 12:03:00.300508   11096 request.go:661] Waited for 195.2445ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.136.201:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-032400
	I0210 12:03:00.300508   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-032400
	I0210 12:03:00.300508   11096 round_trippers.go:476] Request Headers:
	I0210 12:03:00.300508   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:03:00.300508   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:03:00.304448   11096 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:03:00.304448   11096 round_trippers.go:584] Response Headers:
	I0210 12:03:00.304448   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:03:00.304448   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:03:00 GMT
	I0210 12:03:00.304448   11096 round_trippers.go:587]     Audit-Id: 261dd670-4d44-4245-98d4-b6de4fc67c76
	I0210 12:03:00.304448   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:03:00.304448   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:03:00.304448   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:03:00.304448   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  fb 22 0a 82 18 0a 1f 6b  75 62 65 2d 73 63 68 65  |.".....kube-sche|
		00000020  64 75 6c 65 72 2d 6d 75  6c 74 69 6e 6f 64 65 2d  |duler-multinode-|
		00000030  30 33 32 34 30 30 12 00  1a 0b 6b 75 62 65 2d 73  |032400....kube-s|
		00000040  79 73 74 65 6d 22 00 2a  24 63 66 62 63 62 66 33  |ystem".*$cfbcbf3|
		00000050  37 2d 36 35 63 65 2d 34  62 32 31 2d 39 66 32 36  |7-65ce-4b21-9f26|
		00000060  2d 31 38 64 61 66 63 36  65 34 34 38 30 32 03 33  |-18dafc6e44802.3|
		00000070  33 34 38 00 42 08 08 88  d4 a7 bd 06 10 00 5a 1b  |348.B.........Z.|
		00000080  0a 09 63 6f 6d 70 6f 6e  65 6e 74 12 0e 6b 75 62  |..component..kub|
		00000090  65 2d 73 63 68 65 64 75  6c 65 72 5a 15 0a 04 74  |e-schedulerZ...t|
		000000a0  69 65 72 12 0d 63 6f 6e  74 72 6f 6c 2d 70 6c 61  |ier..control-pla|
		000000b0  6e 65 62 3d 0a 19 6b 75  62 65 72 6e 65 74 65 73  |neb=..kubernetes|
		000000c0  2e 69 6f 2f 63 6f 6e 66  69 67 2e 68 61 73 68 12  |.io/config.hash [truncated 21239 chars]
	 >
	I0210 12:03:00.305062   11096 type.go:168] "Request Body" body=""
	I0210 12:03:00.500600   11096 request.go:661] Waited for 195.536ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.136.201:8443/api/v1/nodes/multinode-032400
	I0210 12:03:00.500600   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes/multinode-032400
	I0210 12:03:00.500600   11096 round_trippers.go:476] Request Headers:
	I0210 12:03:00.500600   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:03:00.500600   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:03:00.504820   11096 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:03:00.504820   11096 round_trippers.go:584] Response Headers:
	I0210 12:03:00.504820   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:03:00.504820   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:03:00.504820   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:03:00 GMT
	I0210 12:03:00.504820   11096 round_trippers.go:587]     Audit-Id: 8529755e-72e4-483a-91d4-de78783f995c
	I0210 12:03:00.504820   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:03:00.504820   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:03:00.505350   11096 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d8 22 0a 8a 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 03 34 33  32 38 00 42 08 08 86 d4  |1b262.4328.B....|
		00000060  a7 bd 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 21016 chars]
	 >
	I0210 12:03:00.505586   11096 pod_ready.go:93] pod "kube-scheduler-multinode-032400" in "kube-system" namespace has status "Ready":"True"
	I0210 12:03:00.505586   11096 pod_ready.go:82] duration metric: took 400.3885ms for pod "kube-scheduler-multinode-032400" in "kube-system" namespace to be "Ready" ...
	I0210 12:03:00.505655   11096 pod_ready.go:39] duration metric: took 1.2006979s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0210 12:03:00.505655   11096 system_svc.go:44] waiting for kubelet service to be running ....
	I0210 12:03:00.513954   11096 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0210 12:03:00.539654   11096 system_svc.go:56] duration metric: took 33.9994ms WaitForService to wait for kubelet
	I0210 12:03:00.539654   11096 kubeadm.go:582] duration metric: took 35.5110268s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0210 12:03:00.539654   11096 node_conditions.go:102] verifying NodePressure condition ...
	I0210 12:03:00.539654   11096 type.go:204] "Request Body" body=""
	I0210 12:03:00.701204   11096 request.go:661] Waited for 161.5474ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.136.201:8443/api/v1/nodes
	I0210 12:03:00.701204   11096 round_trippers.go:470] GET https://172.29.136.201:8443/api/v1/nodes
	I0210 12:03:00.701204   11096 round_trippers.go:476] Request Headers:
	I0210 12:03:00.701204   11096 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:03:00.701204   11096 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:03:00.706941   11096 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 12:03:00.707036   11096 round_trippers.go:584] Response Headers:
	I0210 12:03:00.707036   11096 round_trippers.go:587]     Audit-Id: cfbd5f27-61a4-488a-a46e-7dace2cb5ada
	I0210 12:03:00.707036   11096 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:03:00.707036   11096 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:03:00.707036   11096 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:03:00.707036   11096 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:03:00.707036   11096 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:03:00 GMT
	I0210 12:03:00.707573   11096 type.go:204] "Response Body" body=<
		00000000  6b 38 73 00 0a 0e 0a 02  76 31 12 08 4e 6f 64 65  |k8s.....v1..Node|
		00000010  4c 69 73 74 12 90 3d 0a  09 0a 00 12 03 36 36 33  |List..=......663|
		00000020  1a 00 12 d8 22 0a 8a 11  0a 10 6d 75 6c 74 69 6e  |....".....multin|
		00000030  6f 64 65 2d 30 33 32 34  30 30 12 00 1a 00 22 00  |ode-032400....".|
		00000040  2a 24 61 30 38 30 31 35  65 66 2d 65 35 32 30 2d  |*$a08015ef-e520-|
		00000050  34 31 63 62 2d 61 65 61  30 2d 31 64 39 63 38 31  |41cb-aea0-1d9c81|
		00000060  65 30 31 62 32 36 32 03  34 33 32 38 00 42 08 08  |e01b262.4328.B..|
		00000070  86 d4 a7 bd 06 10 00 5a  20 0a 17 62 65 74 61 2e  |.......Z ..beta.|
		00000080  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 61 72  |kubernetes.io/ar|
		00000090  63 68 12 05 61 6d 64 36  34 5a 1e 0a 15 62 65 74  |ch..amd64Z...bet|
		000000a0  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		000000b0  6f 73 12 05 6c 69 6e 75  78 5a 1b 0a 12 6b 75 62  |os..linuxZ...kub|
		000000c0  65 72 6e 65 74 65 73 2e  69 6f 2f 61 72 63 68 12  |ernetes.io/arch [truncated 37760 chars]
	 >
	I0210 12:03:00.707889   11096 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0210 12:03:00.707889   11096 node_conditions.go:123] node cpu capacity is 2
	I0210 12:03:00.707958   11096 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0210 12:03:00.707958   11096 node_conditions.go:123] node cpu capacity is 2
	I0210 12:03:00.707958   11096 node_conditions.go:105] duration metric: took 168.3019ms to run NodePressure ...
	I0210 12:03:00.707958   11096 start.go:241] waiting for startup goroutines ...
	I0210 12:03:00.708035   11096 start.go:255] writing updated cluster config ...
	I0210 12:03:00.717130   11096 ssh_runner.go:195] Run: rm -f paused
	I0210 12:03:00.853839   11096 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
	I0210 12:03:00.863811   11096 out.go:177] * Done! kubectl is now configured to use "multinode-032400" cluster and "default" namespace by default
	
	
	==> Docker <==
	Feb 10 11:59:44 multinode-032400 dockerd[1444]: time="2025-02-10T11:59:44.381841897Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 10 11:59:44 multinode-032400 dockerd[1444]: time="2025-02-10T11:59:44.388661315Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 10 11:59:44 multinode-032400 dockerd[1444]: time="2025-02-10T11:59:44.388853615Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 10 11:59:44 multinode-032400 dockerd[1444]: time="2025-02-10T11:59:44.388877615Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 10 11:59:44 multinode-032400 dockerd[1444]: time="2025-02-10T11:59:44.388996816Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 10 11:59:44 multinode-032400 cri-dockerd[1341]: time="2025-02-10T11:59:44Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/4ccc0a4e7b5c734e34ed5ec3983417c1afdf297c58cd82400d7f9d24b8f82cac/resolv.conf as [nameserver 172.29.128.1]"
	Feb 10 11:59:44 multinode-032400 cri-dockerd[1341]: time="2025-02-10T11:59:44Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/794995bca6b5b46f3dd1b76ae2e6fa45046bd172772508f61b870824d72e297b/resolv.conf as [nameserver 172.29.128.1]"
	Feb 10 11:59:44 multinode-032400 dockerd[1444]: time="2025-02-10T11:59:44.717699564Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 10 11:59:44 multinode-032400 dockerd[1444]: time="2025-02-10T11:59:44.726077086Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 10 11:59:44 multinode-032400 dockerd[1444]: time="2025-02-10T11:59:44.726757987Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 10 11:59:44 multinode-032400 dockerd[1444]: time="2025-02-10T11:59:44.727153688Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 10 11:59:44 multinode-032400 dockerd[1444]: time="2025-02-10T11:59:44.886783609Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 10 11:59:44 multinode-032400 dockerd[1444]: time="2025-02-10T11:59:44.887061010Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 10 11:59:44 multinode-032400 dockerd[1444]: time="2025-02-10T11:59:44.887081010Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 10 11:59:44 multinode-032400 dockerd[1444]: time="2025-02-10T11:59:44.887447311Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 10 12:03:24 multinode-032400 dockerd[1444]: time="2025-02-10T12:03:24.511664183Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 10 12:03:24 multinode-032400 dockerd[1444]: time="2025-02-10T12:03:24.511746683Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 10 12:03:24 multinode-032400 dockerd[1444]: time="2025-02-10T12:03:24.511765483Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 10 12:03:24 multinode-032400 dockerd[1444]: time="2025-02-10T12:03:24.512063184Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 10 12:03:24 multinode-032400 cri-dockerd[1341]: time="2025-02-10T12:03:24Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ed034f1578c961d966543fd6901ac487893b2d9c55235293f852b6eba2ffef59/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Feb 10 12:03:26 multinode-032400 cri-dockerd[1341]: time="2025-02-10T12:03:26Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Feb 10 12:03:26 multinode-032400 dockerd[1444]: time="2025-02-10T12:03:26.516711018Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 10 12:03:26 multinode-032400 dockerd[1444]: time="2025-02-10T12:03:26.517100123Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 10 12:03:26 multinode-032400 dockerd[1444]: time="2025-02-10T12:03:26.517120823Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 10 12:03:26 multinode-032400 dockerd[1444]: time="2025-02-10T12:03:26.519400549Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	8d0c6584f2b12       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   47 seconds ago      Running             busybox                   0                   ed034f1578c96       busybox-58667487b6-8shfg
	c5b854dbb9121       c69fa2e9cbf5f                                                                                         4 minutes ago       Running             coredns                   0                   794995bca6b5b       coredns-668d6bf9bc-w8rr9
	182c8395f5e17       6e38f40d628db                                                                                         4 minutes ago       Running             storage-provisioner       0                   4ccc0a4e7b5c7       storage-provisioner
	4439940fa5f42       kindest/kindnetd@sha256:56ea59f77258052c4506076525318ffa66817500f68e94a50fdf7d600a280d26              4 minutes ago       Running             kindnet-cni               0                   26d9e119a02c5       kindnet-c2mb8
	148309413de8d       e29f9c7391fd9                                                                                         4 minutes ago       Running             kube-proxy                0                   a70f430921ec2       kube-proxy-rrh82
	9f1c4e9b3353b       95c0bda56fc4d                                                                                         5 minutes ago       Running             kube-apiserver            0                   8c55184f16ccb       kube-apiserver-multinode-032400
	adf520f9b9d78       2b0d6572d062c                                                                                         5 minutes ago       Running             kube-scheduler            0                   d33433fbce480       kube-scheduler-multinode-032400
	3ae31c3c37c9f       a9e7e6b294baf                                                                                         5 minutes ago       Running             etcd                      0                   b2de8e426f22f       etcd-multinode-032400
	9408ce83d7d38       019ee182b58e2                                                                                         5 minutes ago       Running             kube-controller-manager   0                   ee16b295f58db       kube-controller-manager-multinode-032400
	
	
	==> coredns [c5b854dbb912] <==
	[INFO] 10.244.1.2:37192 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000173702s
	[INFO] 10.244.0.3:49175 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000188802s
	[INFO] 10.244.0.3:55342 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000074701s
	[INFO] 10.244.0.3:52814 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000156002s
	[INFO] 10.244.0.3:36559 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000108801s
	[INFO] 10.244.0.3:35829 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0000551s
	[INFO] 10.244.0.3:38348 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000180702s
	[INFO] 10.244.0.3:39722 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000109501s
	[INFO] 10.244.0.3:40924 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000201202s
	[INFO] 10.244.1.2:34735 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000125801s
	[INFO] 10.244.1.2:36088 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000233303s
	[INFO] 10.244.1.2:55464 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000190403s
	[INFO] 10.244.1.2:57911 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000176202s
	[INFO] 10.244.0.3:34977 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000276903s
	[INFO] 10.244.0.3:60203 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000181802s
	[INFO] 10.244.0.3:40189 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000167902s
	[INFO] 10.244.0.3:59008 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000133001s
	[INFO] 10.244.1.2:56936 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000191602s
	[INFO] 10.244.1.2:36536 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000153201s
	[INFO] 10.244.1.2:38856 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000169502s
	[INFO] 10.244.1.2:53005 - 5 "PTR IN 1.128.29.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000186102s
	[INFO] 10.244.0.3:43643 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00092191s
	[INFO] 10.244.0.3:44109 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000341503s
	[INFO] 10.244.0.3:37196 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000095701s
	[INFO] 10.244.0.3:33917 - 5 "PTR IN 1.128.29.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000152102s
	
	
	==> describe nodes <==
	Name:               multinode-032400
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-032400
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a597502568cd649748018b4cfeb698a4b8b36160
	                    minikube.k8s.io/name=multinode-032400
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_02_10T11_59_08_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 10 Feb 2025 11:59:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-032400
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 10 Feb 2025 12:04:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 10 Feb 2025 12:03:43 +0000   Mon, 10 Feb 2025 11:59:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 10 Feb 2025 12:03:43 +0000   Mon, 10 Feb 2025 11:59:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 10 Feb 2025 12:03:43 +0000   Mon, 10 Feb 2025 11:59:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 10 Feb 2025 12:03:43 +0000   Mon, 10 Feb 2025 11:59:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.29.136.201
	  Hostname:    multinode-032400
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 70cf9ab1d7114c4f9fb1512b3bc54668
	  System UUID:                43aa2284-9342-094e-a67f-da5d9d45fabd
	  Boot ID:                    83bf8a96-8cb3-4ce7-bd87-53f6ee9ae57c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.4.0
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-58667487b6-8shfg                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kube-system                 coredns-668d6bf9bc-w8rr9                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m59s
	  kube-system                 etcd-multinode-032400                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m6s
	  kube-system                 kindnet-c2mb8                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m59s
	  kube-system                 kube-apiserver-multinode-032400             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m9s
	  kube-system                 kube-controller-manager-multinode-032400    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m6s
	  kube-system                 kube-proxy-rrh82                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m59s
	  kube-system                 kube-scheduler-multinode-032400             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m9s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m53s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m19s (x8 over 5m19s)  kubelet          Node multinode-032400 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m19s (x8 over 5m19s)  kubelet          Node multinode-032400 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m19s (x7 over 5m19s)  kubelet          Node multinode-032400 status is now: NodeHasSufficientPID
	  Normal  Starting                 5m6s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m6s (x2 over 5m6s)    kubelet          Node multinode-032400 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m6s (x2 over 5m6s)    kubelet          Node multinode-032400 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m6s (x2 over 5m6s)    kubelet          Node multinode-032400 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m6s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m1s                   node-controller  Node multinode-032400 event: Registered Node multinode-032400 in Controller
	  Normal  NodeReady                4m32s                  kubelet          Node multinode-032400 status is now: NodeReady
	
	
	Name:               multinode-032400-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-032400-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a597502568cd649748018b4cfeb698a4b8b36160
	                    minikube.k8s.io/name=multinode-032400
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_02_10T12_02_24_0700
	                    minikube.k8s.io/version=v1.35.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 10 Feb 2025 12:02:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-032400-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 10 Feb 2025 12:04:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 10 Feb 2025 12:03:56 +0000   Mon, 10 Feb 2025 12:02:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 10 Feb 2025 12:03:56 +0000   Mon, 10 Feb 2025 12:02:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 10 Feb 2025 12:03:56 +0000   Mon, 10 Feb 2025 12:02:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 10 Feb 2025 12:03:56 +0000   Mon, 10 Feb 2025 12:02:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.29.143.51
	  Hostname:    multinode-032400-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 21b7ab6b12a24fe4a174e762e13ffd68
	  System UUID:                9f935ab2-d180-914c-86d6-dc00ab51b7e9
	  Boot ID:                    a25897cf-a5ca-424f-a707-4b03f1b1442d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.4.0
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-58667487b6-4g8jw    0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kube-system                 kindnet-tv6gk               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      109s
	  kube-system                 kube-proxy-xltxj            0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 93s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  109s (x2 over 109s)  kubelet          Node multinode-032400-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    109s (x2 over 109s)  kubelet          Node multinode-032400-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     109s (x2 over 109s)  kubelet          Node multinode-032400-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  109s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           106s                 node-controller  Node multinode-032400-m02 event: Registered Node multinode-032400-m02 in Controller
	  Normal  NodeReady                75s                  kubelet          Node multinode-032400-m02 status is now: NodeReady
	
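The percentages in the "Allocated resources" tables above are the summed pod requests divided by the node's Allocatable values, using integer division (850m of 2 CPUs shows as 42%). A standalone Go sketch, with constants copied from the multinode-032400-m02 description above, that reproduces the 5% cpu / 2% memory figures:

package main

import "fmt"

// Reproduces the "Allocated resources" percentages for
// multinode-032400-m02: the only pod with requests is kindnet-tv6gk
// (100m cpu, 50Mi memory), divided by the node's Allocatable row.
func main() {
	const (
		allocatableMilliCPU = 2000    // Allocatable cpu: 2
		allocatableMemoryKi = 2164264 // Allocatable memory: 2164264Ki
	)
	requestsMilliCPU := 100       // kindnet 100m
	requestsMemoryKi := 50 * 1024 // kindnet 50Mi, expressed in Ki

	fmt.Printf("cpu    %d%%\n", requestsMilliCPU*100/allocatableMilliCPU) // cpu    5%
	fmt.Printf("memory %d%%\n", requestsMemoryKi*100/allocatableMemoryKi) // memory 2%
}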
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +41.805953] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.168080] systemd-fstab-generator[646]: Ignoring "noauto" option for root device
	[Feb10 11:58] systemd-fstab-generator[1005]: Ignoring "noauto" option for root device
	[  +0.097306] kauditd_printk_skb: 69 callbacks suppressed
	[  +0.611158] systemd-fstab-generator[1045]: Ignoring "noauto" option for root device
	[  +0.200182] systemd-fstab-generator[1057]: Ignoring "noauto" option for root device
	[  +0.230330] systemd-fstab-generator[1071]: Ignoring "noauto" option for root device
	[  +3.258020] systemd-fstab-generator[1294]: Ignoring "noauto" option for root device
	[  +0.192666] systemd-fstab-generator[1306]: Ignoring "noauto" option for root device
	[  +0.197230] systemd-fstab-generator[1318]: Ignoring "noauto" option for root device
	[  +0.263375] systemd-fstab-generator[1333]: Ignoring "noauto" option for root device
	[  +0.099622] kauditd_printk_skb: 184 callbacks suppressed
	[ +17.785233] systemd-fstab-generator[1430]: Ignoring "noauto" option for root device
	[  +0.110270] kauditd_printk_skb: 12 callbacks suppressed
	[  +4.140990] systemd-fstab-generator[1690]: Ignoring "noauto" option for root device
	[  +5.693613] systemd-fstab-generator[1836]: Ignoring "noauto" option for root device
	[  +0.100006] kauditd_printk_skb: 74 callbacks suppressed
	[Feb10 11:59] systemd-fstab-generator[2258]: Ignoring "noauto" option for root device
	[  +0.141834] kauditd_printk_skb: 62 callbacks suppressed
	[  +5.846939] systemd-fstab-generator[2356]: Ignoring "noauto" option for root device
	[  +0.154686] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.148199] kauditd_printk_skb: 22 callbacks suppressed
	[ +10.624479] kauditd_printk_skb: 19 callbacks suppressed
	[Feb10 12:03] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [3ae31c3c37c9] <==
	{"level":"warn","ts":"2025-02-10T12:02:46.023912Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"206.568662ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-032400-m02\" limit:1 ","response":"range_response_count:1 size:3148"}
	{"level":"info","ts":"2025-02-10T12:02:46.024379Z","caller":"traceutil/trace.go:171","msg":"trace[943305533] range","detail":"{range_begin:/registry/minions/multinode-032400-m02; range_end:; response_count:1; response_revision:643; }","duration":"207.041063ms","start":"2025-02-10T12:02:45.817323Z","end":"2025-02-10T12:02:46.024364Z","steps":["trace[943305533] 'range keys from in-memory index tree'  (duration: 206.310061ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-10T12:02:48.817325Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":18343043564643760778,"retry-timeout":"500ms"}
	{"level":"info","ts":"2025-02-10T12:02:49.107943Z","caller":"traceutil/trace.go:171","msg":"trace[1281302890] linearizableReadLoop","detail":"{readStateIndex:710; appliedIndex:709; }","duration":"790.982619ms","start":"2025-02-10T12:02:48.316940Z","end":"2025-02-10T12:02:49.107923Z","steps":["trace[1281302890] 'read index received'  (duration: 752.178095ms)","trace[1281302890] 'applied index is now lower than readState.Index'  (duration: 38.803624ms)"],"step_count":2}
	{"level":"info","ts":"2025-02-10T12:02:49.108354Z","caller":"traceutil/trace.go:171","msg":"trace[1297181071] transaction","detail":"{read_only:false; response_revision:649; number_of_response:1; }","duration":"923.781642ms","start":"2025-02-10T12:02:48.184560Z","end":"2025-02-10T12:02:49.108341Z","steps":["trace[1297181071] 'process raft request'  (duration: 884.614718ms)","trace[1297181071] 'compare'  (duration: 38.504922ms)"],"step_count":2}
	{"level":"warn","ts":"2025-02-10T12:02:49.108547Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-02-10T12:02:48.184538Z","time spent":"923.962543ms","remote":"127.0.0.1:33666","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4683,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/daemonsets/kube-system/kindnet\" mod_revision:604 > success:<request_put:<key:\"/registry/daemonsets/kube-system/kindnet\" value_size:4635 >> failure:<request_range:<key:\"/registry/daemonsets/kube-system/kindnet\" > >"}
	{"level":"warn","ts":"2025-02-10T12:02:49.108696Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"234.794647ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-02-10T12:02:49.109408Z","caller":"traceutil/trace.go:171","msg":"trace[2122858544] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:649; }","duration":"235.73765ms","start":"2025-02-10T12:02:48.873658Z","end":"2025-02-10T12:02:49.109396Z","steps":["trace[2122858544] 'agreement among raft nodes before linearized reading'  (duration: 234.778147ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-10T12:02:49.108965Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"792.096322ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-032400-m02\" limit:1 ","response":"range_response_count:1 size:3148"}
	{"level":"warn","ts":"2025-02-10T12:02:49.109018Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"209.140665ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 ","response":"range_response_count:1 size:1116"}
	{"level":"info","ts":"2025-02-10T12:02:49.111405Z","caller":"traceutil/trace.go:171","msg":"trace[735160839] range","detail":"{range_begin:/registry/minions/multinode-032400-m02; range_end:; response_count:1; response_revision:649; }","duration":"794.55953ms","start":"2025-02-10T12:02:48.316831Z","end":"2025-02-10T12:02:49.111391Z","steps":["trace[735160839] 'agreement among raft nodes before linearized reading'  (duration: 791.993322ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-10T12:02:49.111705Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-02-10T12:02:48.316780Z","time spent":"794.906031ms","remote":"127.0.0.1:33392","response type":"/etcdserverpb.KV/Range","request count":0,"request size":42,"response count":1,"response size":3172,"request content":"key:\"/registry/minions/multinode-032400-m02\" limit:1 "}
	{"level":"info","ts":"2025-02-10T12:02:49.111438Z","caller":"traceutil/trace.go:171","msg":"trace[600359287] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:649; }","duration":"211.578673ms","start":"2025-02-10T12:02:48.899847Z","end":"2025-02-10T12:02:49.111425Z","steps":["trace[600359287] 'agreement among raft nodes before linearized reading'  (duration: 209.133065ms)"],"step_count":1}
	{"level":"info","ts":"2025-02-10T12:02:53.509356Z","caller":"traceutil/trace.go:171","msg":"trace[1358603182] linearizableReadLoop","detail":"{readStateIndex:716; appliedIndex:715; }","duration":"192.623207ms","start":"2025-02-10T12:02:53.316714Z","end":"2025-02-10T12:02:53.509337Z","steps":["trace[1358603182] 'read index received'  (duration: 192.373906ms)","trace[1358603182] 'applied index is now lower than readState.Index'  (duration: 248.501µs)"],"step_count":2}
	{"level":"warn","ts":"2025-02-10T12:02:53.509731Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"192.991608ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-032400-m02\" limit:1 ","response":"range_response_count:1 size:3148"}
	{"level":"info","ts":"2025-02-10T12:02:53.509900Z","caller":"traceutil/trace.go:171","msg":"trace[209805201] range","detail":"{range_begin:/registry/minions/multinode-032400-m02; range_end:; response_count:1; response_revision:654; }","duration":"193.25501ms","start":"2025-02-10T12:02:53.316610Z","end":"2025-02-10T12:02:53.509866Z","steps":["trace[209805201] 'agreement among raft nodes before linearized reading'  (duration: 192.890808ms)"],"step_count":1}
	{"level":"info","ts":"2025-02-10T12:02:53.510299Z","caller":"traceutil/trace.go:171","msg":"trace[1067631052] transaction","detail":"{read_only:false; response_revision:654; number_of_response:1; }","duration":"362.973244ms","start":"2025-02-10T12:02:53.147311Z","end":"2025-02-10T12:02:53.510284Z","steps":["trace[1067631052] 'process raft request'  (duration: 361.79894ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-10T12:02:53.510893Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-02-10T12:02:53.147291Z","time spent":"363.541146ms","remote":"127.0.0.1:33380","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1101,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:651 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1028 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2025-02-10T12:02:55.576421Z","caller":"traceutil/trace.go:171","msg":"trace[1508344220] linearizableReadLoop","detail":"{readStateIndex:717; appliedIndex:716; }","duration":"259.784515ms","start":"2025-02-10T12:02:55.316616Z","end":"2025-02-10T12:02:55.576401Z","steps":["trace[1508344220] 'read index received'  (duration: 259.620315ms)","trace[1508344220] 'applied index is now lower than readState.Index'  (duration: 163.6µs)"],"step_count":2}
	{"level":"info","ts":"2025-02-10T12:02:55.577094Z","caller":"traceutil/trace.go:171","msg":"trace[434273667] transaction","detail":"{read_only:false; response_revision:655; number_of_response:1; }","duration":"293.689022ms","start":"2025-02-10T12:02:55.283389Z","end":"2025-02-10T12:02:55.577079Z","steps":["trace[434273667] 'process raft request'  (duration: 292.90762ms)"],"step_count":1}
	{"level":"warn","ts":"2025-02-10T12:02:55.577516Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"260.95292ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-032400-m02\" limit:1 ","response":"range_response_count:1 size:3570"}
	{"level":"info","ts":"2025-02-10T12:02:55.578006Z","caller":"traceutil/trace.go:171","msg":"trace[716825573] range","detail":"{range_begin:/registry/minions/multinode-032400-m02; range_end:; response_count:1; response_revision:655; }","duration":"261.481521ms","start":"2025-02-10T12:02:55.316509Z","end":"2025-02-10T12:02:55.577991Z","steps":["trace[716825573] 'agreement among raft nodes before linearized reading'  (duration: 260.850419ms)"],"step_count":1}
	{"level":"info","ts":"2025-02-10T12:02:56.958696Z","caller":"traceutil/trace.go:171","msg":"trace[414453542] linearizableReadLoop","detail":"{readStateIndex:720; appliedIndex:719; }","duration":"141.769145ms","start":"2025-02-10T12:02:56.816904Z","end":"2025-02-10T12:02:56.958674Z","steps":["trace[414453542] 'read index received'  (duration: 141.419244ms)","trace[414453542] 'applied index is now lower than readState.Index'  (duration: 348.601µs)"],"step_count":2}
	{"level":"warn","ts":"2025-02-10T12:02:56.959549Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"142.760847ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-032400-m02\" limit:1 ","response":"range_response_count:1 size:3570"}
	{"level":"info","ts":"2025-02-10T12:02:56.959606Z","caller":"traceutil/trace.go:171","msg":"trace[1019635918] range","detail":"{range_begin:/registry/minions/multinode-032400-m02; range_end:; response_count:1; response_revision:657; }","duration":"142.832448ms","start":"2025-02-10T12:02:56.816764Z","end":"2025-02-10T12:02:56.959596Z","steps":["trace[1019635918] 'agreement among raft nodes before linearized reading'  (duration: 142.722147ms)"],"step_count":1}
	
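The etcd log above is structured JSON, and each "apply request took too long" warning carries the measured duration in "took" against the fixed 100ms "expected-duration". A standalone Go sketch (assuming the log lines arrive one JSON object per line on stdin; not part of the test suite) that extracts just the slow applies:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"time"
)

// etcdLine maps the fields used by the warnings above.
type etcdLine struct {
	Level string `json:"level"`
	Msg   string `json:"msg"`
	Took  string `json:"took"` // e.g. "923.781642ms"
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // etcd lines can be long
	for sc.Scan() {
		var l etcdLine
		if json.Unmarshal(sc.Bytes(), &l) != nil {
			continue // not a JSON log line; skip
		}
		if l.Msg != "apply request took too long" {
			continue
		}
		if d, err := time.ParseDuration(l.Took); err == nil && d > 100*time.Millisecond {
			fmt.Printf("slow apply: %v\n", d)
		}
	}
}

On the section above this would flag the 206ms, 923ms, 792ms, 234ms and 209ms applies, which line up with the Hyper-V VM's slow disk during the m02 join.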
	
	==> kernel <==
	 12:04:13 up 7 min,  0 users,  load average: 1.12, 0.88, 0.43
	Linux multinode-032400 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [4439940fa5f4] <==
	I0210 12:03:10.447317       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:03:20.445882       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:03:20.446032       1 main.go:301] handling current node
	I0210 12:03:20.446053       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:03:20.446062       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:03:30.446418       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:03:30.446473       1 main.go:301] handling current node
	I0210 12:03:30.446493       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:03:30.446500       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:03:40.455337       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:03:40.455477       1 main.go:301] handling current node
	I0210 12:03:40.455502       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:03:40.455511       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:03:50.446521       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:03:50.446569       1 main.go:301] handling current node
	I0210 12:03:50.446589       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:03:50.446595       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:04:00.446288       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:04:00.446501       1 main.go:301] handling current node
	I0210 12:04:00.446551       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:04:00.446573       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:04:10.455081       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:04:10.455208       1 main.go:301] handling current node
	I0210 12:04:10.455232       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:04:10.455242       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	
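The kindnet log above is one reconcile loop: every ~10 seconds it re-lists the nodes and maps each node's IP to its PodCIDR (10.244.0.0/24 for the control plane, 10.244.1.0/24 for m02). A minimal client-go sketch (not kindnet's actual code; assumes a kubeconfig at the default path) that produces the same node/CIDR view once:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load ~/.kube/config (clientcmd.RecommendedHomeFile) and build a client.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// List all nodes, as kindnet does on every pass, and print IPs and CIDRs.
	nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		fmt.Printf("node %s: addresses=%v podCIDRs=%v\n",
			n.Name, n.Status.Addresses, n.Spec.PodCIDRs)
	}
}

The same view is available ad hoc with, e.g., kubectl get nodes -o custom-columns=NAME:.metadata.name,CIDRS:.spec.podCIDRs.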
	
	==> kube-apiserver [9f1c4e9b3353] <==
	I0210 11:59:03.760614       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0210 11:59:04.518928       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0210 11:59:04.519154       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0210 11:59:06.493020       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0210 11:59:06.575211       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0210 11:59:06.736901       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0210 11:59:06.759903       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.29.136.201]
	I0210 11:59:06.761168       1 controller.go:615] quota admission added evaluator for: endpoints
	I0210 11:59:06.772116       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0210 11:59:07.543833       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0210 11:59:07.587399       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0210 11:59:07.672977       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0210 11:59:07.717987       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0210 11:59:12.931344       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0210 11:59:13.985503       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0210 12:03:29.545301       1 conn.go:339] Error on socket receive: read tcp 172.29.136.201:8443->172.29.128.1:57503: use of closed network connection
	E0210 12:03:30.027356       1 conn.go:339] Error on socket receive: read tcp 172.29.136.201:8443->172.29.128.1:57505: use of closed network connection
	E0210 12:03:30.546767       1 conn.go:339] Error on socket receive: read tcp 172.29.136.201:8443->172.29.128.1:57507: use of closed network connection
	E0210 12:03:30.985138       1 conn.go:339] Error on socket receive: read tcp 172.29.136.201:8443->172.29.128.1:57509: use of closed network connection
	E0210 12:03:31.442493       1 conn.go:339] Error on socket receive: read tcp 172.29.136.201:8443->172.29.128.1:57511: use of closed network connection
	E0210 12:03:31.910668       1 conn.go:339] Error on socket receive: read tcp 172.29.136.201:8443->172.29.128.1:57513: use of closed network connection
	E0210 12:03:32.735888       1 conn.go:339] Error on socket receive: read tcp 172.29.136.201:8443->172.29.128.1:57516: use of closed network connection
	E0210 12:03:43.218099       1 conn.go:339] Error on socket receive: read tcp 172.29.136.201:8443->172.29.128.1:57518: use of closed network connection
	E0210 12:03:43.673611       1 conn.go:339] Error on socket receive: read tcp 172.29.136.201:8443->172.29.128.1:57521: use of closed network connection
	E0210 12:03:54.127220       1 conn.go:339] Error on socket receive: read tcp 172.29.136.201:8443->172.29.128.1:57523: use of closed network connection
	
	
	==> kube-controller-manager [9408ce83d7d3] <==
	I0210 12:02:24.351602       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-032400-m02\" does not exist"
	I0210 12:02:24.372872       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-032400-m02" podCIDRs=["10.244.1.0/24"]
	I0210 12:02:24.372982       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:02:24.373016       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:02:24.386686       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:02:24.500791       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:02:25.042334       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:02:27.223123       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-032400-m02"
	I0210 12:02:27.269202       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:02:34.686149       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:02:55.584256       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:02:58.900478       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-032400-m02"
	I0210 12:02:58.901463       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:02:58.923096       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:03:02.254436       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:03:23.965178       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="118.072754ms"
	I0210 12:03:23.989777       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="24.546974ms"
	I0210 12:03:23.990705       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="86.6µs"
	I0210 12:03:23.998308       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="47.1µs"
	I0210 12:03:26.707538       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="12.348343ms"
	I0210 12:03:26.708146       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="79.501µs"
	I0210 12:03:27.077137       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="9.878814ms"
	I0210 12:03:27.077509       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="53.8µs"
	I0210 12:03:43.387780       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400"
	I0210 12:03:56.589176       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	
	
	==> kube-proxy [148309413de8] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0210 11:59:18.694686       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0210 11:59:19.251769       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["172.29.136.201"]
	E0210 11:59:19.252111       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0210 11:59:19.312427       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0210 11:59:19.312556       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0210 11:59:19.312585       1 server_linux.go:170] "Using iptables Proxier"
	I0210 11:59:19.317423       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0210 11:59:19.318586       1 server.go:497] "Version info" version="v1.32.1"
	I0210 11:59:19.318681       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0210 11:59:19.321084       1 config.go:199] "Starting service config controller"
	I0210 11:59:19.321134       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0210 11:59:19.321326       1 config.go:105] "Starting endpoint slice config controller"
	I0210 11:59:19.321339       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0210 11:59:19.322061       1 config.go:329] "Starting node config controller"
	I0210 11:59:19.322091       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0210 11:59:19.421972       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0210 11:59:19.422030       1 shared_informer.go:320] Caches are synced for service config
	I0210 11:59:19.423298       1 shared_informer.go:320] Caches are synced for node config
	
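kube-proxy above first fails to clean up nftables rules ("Operation not supported"), then finds no IPv6 iptables support, and settles on the IPv4 iptables proxier in single-stack mode. A standalone Go triage sketch (assumes the nft, ip6tables and iptables binaries exist in the guest; this is not minikube code) that runs the same three probes from inside the VM:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	for _, probe := range [][]string{
		{"nft", "list", "tables"},             // fails on kernels without nftables
		{"ip6tables", "-t", "nat", "-L"},      // fails here: "Table does not exist"
		{"iptables", "-t", "nat", "-L", "-n"}, // expected to succeed (the IPv4 path taken)
	} {
		out, err := exec.Command(probe[0], probe[1:]...).CombinedOutput()
		if err != nil {
			fmt.Printf("%v: FAILED (%v): %s", probe, err, out)
			continue
		}
		fmt.Printf("%v: ok\n", probe)
	}
}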
	
	==> kube-scheduler [adf520f9b9d7] <==
	W0210 11:59:04.075142       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0210 11:59:04.075319       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0210 11:59:04.157608       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	E0210 11:59:04.157748       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0210 11:59:04.210028       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0210 11:59:04.210075       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0210 11:59:04.217612       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0210 11:59:04.217750       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0210 11:59:04.256700       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0210 11:59:04.256904       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0210 11:59:04.322264       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0210 11:59:04.322526       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0210 11:59:04.330202       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0210 11:59:04.330584       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0210 11:59:04.340076       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0210 11:59:04.340159       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0210 11:59:05.486363       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0210 11:59:05.486509       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0210 11:59:05.818248       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0210 11:59:05.818617       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0210 11:59:06.039223       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0210 11:59:06.039458       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0210 11:59:06.087664       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0210 11:59:06.087837       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0210 11:59:07.187201       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Feb 10 12:00:07 multinode-032400 kubelet[2265]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 10 12:00:07 multinode-032400 kubelet[2265]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 10 12:00:07 multinode-032400 kubelet[2265]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 10 12:01:07 multinode-032400 kubelet[2265]: E0210 12:01:07.750912    2265 iptables.go:577] "Could not set up iptables canary" err=<
	Feb 10 12:01:07 multinode-032400 kubelet[2265]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Feb 10 12:01:07 multinode-032400 kubelet[2265]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 10 12:01:07 multinode-032400 kubelet[2265]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 10 12:01:07 multinode-032400 kubelet[2265]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 10 12:02:07 multinode-032400 kubelet[2265]: E0210 12:02:07.754559    2265 iptables.go:577] "Could not set up iptables canary" err=<
	Feb 10 12:02:07 multinode-032400 kubelet[2265]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Feb 10 12:02:07 multinode-032400 kubelet[2265]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 10 12:02:07 multinode-032400 kubelet[2265]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 10 12:02:07 multinode-032400 kubelet[2265]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 10 12:03:07 multinode-032400 kubelet[2265]: E0210 12:03:07.751431    2265 iptables.go:577] "Could not set up iptables canary" err=<
	Feb 10 12:03:07 multinode-032400 kubelet[2265]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Feb 10 12:03:07 multinode-032400 kubelet[2265]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 10 12:03:07 multinode-032400 kubelet[2265]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 10 12:03:07 multinode-032400 kubelet[2265]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 10 12:03:24 multinode-032400 kubelet[2265]: I0210 12:03:24.127071    2265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76fn6\" (UniqueName: \"kubernetes.io/projected/a3e86dc5-0523-4852-af77-3145d44eaa15-kube-api-access-76fn6\") pod \"busybox-58667487b6-8shfg\" (UID: \"a3e86dc5-0523-4852-af77-3145d44eaa15\") " pod="default/busybox-58667487b6-8shfg"
	Feb 10 12:03:30 multinode-032400 kubelet[2265]: E0210 12:03:30.985520    2265 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:44544->127.0.0.1:44211: write tcp 127.0.0.1:44544->127.0.0.1:44211: write: broken pipe
	Feb 10 12:04:07 multinode-032400 kubelet[2265]: E0210 12:04:07.750988    2265 iptables.go:577] "Could not set up iptables canary" err=<
	Feb 10 12:04:07 multinode-032400 kubelet[2265]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Feb 10 12:04:07 multinode-032400 kubelet[2265]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 10 12:04:07 multinode-032400 kubelet[2265]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 10 12:04:07 multinode-032400 kubelet[2265]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
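The iptables-canary failure above recurs at exactly one-minute intervals (12:00:07, 12:01:07, ..., 12:04:07): it is kubelet's periodic canary probe failing the same way each time, not an escalating fault. A standalone Go sketch (assuming journald-style lines on stdin, timestamped like "Feb 10 12:00:07 host unit[pid]: message" as above) that surfaces such cadences by grouping identical message bodies:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
	"time"
)

func main() {
	last := map[string]time.Time{}
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		line := sc.Text()
		if len(line) < 16 {
			continue
		}
		// Parse the leading journald timestamp ("Feb 10 12:00:07").
		ts, err := time.Parse("Jan 2 15:04:05", line[:15])
		if err != nil {
			continue // not a timestamped line
		}
		// Strip the "host unit[pid]: " prefix to key on the message body.
		rest := line[16:]
		if i := strings.Index(rest, "]: "); i >= 0 {
			rest = rest[i+3:]
		}
		if prev, ok := last[rest]; ok {
			fmt.Printf("%q repeats after %v\n", rest, ts.Sub(prev))
		}
		last[rest] = ts
	}
}

Lines whose body embeds its own klog timestamp will not collapse, but the repeated continuation lines (e.g. the ip6tables message) report the 1m0s period directly.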

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-032400 -n multinode-032400
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-032400 -n multinode-032400: (11.1942954s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-032400 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (55.43s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (516.3s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-032400
multinode_test.go:321: (dbg) Run:  out/minikube-windows-amd64.exe stop -p multinode-032400
E0210 12:18:38.769261   11764 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-550800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0210 12:18:55.683741   11764 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-550800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0210 12:19:42.444022   11764 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-970000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
multinode_test.go:321: (dbg) Done: out/minikube-windows-amd64.exe stop -p multinode-032400: (1m36.3605198s)
multinode_test.go:326: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-032400 --wait=true -v=8 --alsologtostderr
E0210 12:22:45.534546   11764 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-970000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0210 12:23:55.688031   11764 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-550800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0210 12:24:42.447803   11764 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-970000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p multinode-032400 --wait=true -v=8 --alsologtostderr: exit status 1 (6m5.735575s)

                                                
                                                
-- stdout --
	* [multinode-032400] minikube v1.35.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5440 Build 19045.5440
	  - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=20385
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on existing profile
	* Starting "multinode-032400" primary control-plane node in "multinode-032400" cluster
	* Restarting existing hyperv VM for "multinode-032400" ...
	* Preparing Kubernetes v1.32.1 on Docker 27.4.0 ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	
	* Starting "multinode-032400-m02" worker node in "multinode-032400" cluster
	* Restarting existing hyperv VM for "multinode-032400-m02" ...
	* Found network options:
	  - NO_PROXY=172.29.129.181
	  - NO_PROXY=172.29.129.181
	* Preparing Kubernetes v1.32.1 on Docker 27.4.0 ...
	  - env NO_PROXY=172.29.129.181
	* Verifying Kubernetes components...
	
	* Starting "multinode-032400-m03" worker node in "multinode-032400" cluster
	* Restarting existing hyperv VM for "multinode-032400-m03" ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0210 12:19:57.578884    5644 out.go:345] Setting OutFile to fd 1764 ...
	I0210 12:19:57.631465    5644 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 12:19:57.631465    5644 out.go:358] Setting ErrFile to fd 780...
	I0210 12:19:57.631465    5644 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 12:19:57.650332    5644 out.go:352] Setting JSON to false
	I0210 12:19:57.653542    5644 start.go:129] hostinfo: {"hostname":"minikube5","uptime":191337,"bootTime":1738998660,"procs":187,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.5440 Build 19045.5440","kernelVersion":"10.0.19045.5440 Build 19045.5440","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0210 12:19:57.653542    5644 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0210 12:19:57.707113    5644 out.go:177] * [multinode-032400] minikube v1.35.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5440 Build 19045.5440
	I0210 12:19:57.720802    5644 notify.go:220] Checking for updates...
	I0210 12:19:57.763178    5644 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0210 12:19:57.777975    5644 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0210 12:19:57.807721    5644 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0210 12:19:57.821042    5644 out.go:177]   - MINIKUBE_LOCATION=20385
	I0210 12:19:57.844719    5644 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0210 12:19:57.863282    5644 config.go:182] Loaded profile config "multinode-032400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0210 12:19:57.863581    5644 driver.go:394] Setting default libvirt URI to qemu:///system
	I0210 12:20:03.010019    5644 out.go:177] * Using the hyperv driver based on existing profile
	I0210 12:20:03.063199    5644 start.go:297] selected driver: hyperv
	I0210 12:20:03.063199    5644 start.go:901] validating driver "hyperv" against &{Name:multinode-032400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:multinode-032400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.29.136.201 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.29.143.51 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.29.129.10 Port:0 KubernetesVersion:v1.32.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 12:20:03.063582    5644 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0210 12:20:03.121424    5644 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0210 12:20:03.121424    5644 cni.go:84] Creating CNI manager for ""
	I0210 12:20:03.121424    5644 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0210 12:20:03.122045    5644 start.go:340] cluster config:
	{Name:multinode-032400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:multinode-032400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.29.136.201 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.29.143.51 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.29.129.10 Port:0 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 12:20:03.122045    5644 iso.go:125] acquiring lock: {Name:mkae681ee414e9275e9685c6bbf5080b17ead976 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0210 12:20:03.280775    5644 out.go:177] * Starting "multinode-032400" primary control-plane node in "multinode-032400" cluster
	I0210 12:20:03.311514    5644 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime docker
	I0210 12:20:03.311960    5644 preload.go:146] Found local preload: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4
	I0210 12:20:03.311960    5644 cache.go:56] Caching tarball of preloaded images
	I0210 12:20:03.312450    5644 preload.go:172] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0210 12:20:03.312630    5644 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on docker
	I0210 12:20:03.312630    5644 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-032400\config.json ...
	I0210 12:20:03.314913    5644 start.go:360] acquireMachinesLock for multinode-032400: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0210 12:20:03.315112    5644 start.go:364] duration metric: took 123.6µs to acquireMachinesLock for "multinode-032400"
	I0210 12:20:03.315200    5644 start.go:96] Skipping create...Using existing machine configuration
	I0210 12:20:03.315200    5644 fix.go:54] fixHost starting: 
	I0210 12:20:03.315907    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400 ).state
	I0210 12:20:05.914777    5644 main.go:141] libmachine: [stdout =====>] : Off
	
	I0210 12:20:05.915831    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:20:05.915897    5644 fix.go:112] recreateIfNeeded on multinode-032400: state=Stopped err=<nil>
	W0210 12:20:05.915897    5644 fix.go:138] unexpected machine state, will restart: <nil>
	I0210 12:20:05.928203    5644 out.go:177] * Restarting existing hyperv VM for "multinode-032400" ...
	I0210 12:20:05.960927    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-032400
	I0210 12:20:08.807587    5644 main.go:141] libmachine: [stdout =====>] : 
	I0210 12:20:08.807587    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:20:08.807587    5644 main.go:141] libmachine: Waiting for host to start...
	I0210 12:20:08.807587    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400 ).state
	I0210 12:20:10.852616    5644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:20:10.853048    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:20:10.853232    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400 ).networkadapters[0]).ipaddresses[0]
	I0210 12:20:13.163004    5644 main.go:141] libmachine: [stdout =====>] : 
	I0210 12:20:13.163004    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:20:14.164072    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400 ).state
	I0210 12:20:16.139165    5644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:20:16.139565    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:20:16.139565    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400 ).networkadapters[0]).ipaddresses[0]
	I0210 12:20:18.443931    5644 main.go:141] libmachine: [stdout =====>] : 
	I0210 12:20:18.443931    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:20:19.446011    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400 ).state
	I0210 12:20:21.451344    5644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:20:21.451732    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:20:21.451732    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400 ).networkadapters[0]).ipaddresses[0]
	I0210 12:20:23.783277    5644 main.go:141] libmachine: [stdout =====>] : 
	I0210 12:20:23.783338    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:20:24.783517    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400 ).state
	I0210 12:20:26.785238    5644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:20:26.785238    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:20:26.785295    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400 ).networkadapters[0]).ipaddresses[0]
	I0210 12:20:29.062641    5644 main.go:141] libmachine: [stdout =====>] : 
	I0210 12:20:29.062719    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:20:30.063394    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400 ).state
	I0210 12:20:32.026713    5644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:20:32.026713    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:20:32.027019    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400 ).networkadapters[0]).ipaddresses[0]
	I0210 12:20:34.495276    5644 main.go:141] libmachine: [stdout =====>] : 172.29.129.181
	
	I0210 12:20:34.495276    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:20:34.497278    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400 ).state
	I0210 12:20:36.475136    5644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:20:36.475136    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:20:36.475136    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400 ).networkadapters[0]).ipaddresses[0]
	I0210 12:20:38.802589    5644 main.go:141] libmachine: [stdout =====>] : 172.29.129.181
	
	I0210 12:20:38.802589    5644 main.go:141] libmachine: [stderr =====>] : 
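
The repeated Get-VM state / ipaddresses[0] pairs above are a poll loop: right after Start-VM the adapter reports no address, so libmachine re-runs the same PowerShell query about once a second until the guest registers an IP (here 172.29.129.181). A minimal Go sketch of that loop, assuming powershell.exe is on PATH; the 5-minute cap is an assumption, not taken from the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// vmIP shells out to PowerShell exactly as the libmachine lines above do and
// retries until the guest's first adapter reports an address.
func vmIP(name string, timeout time.Duration) (string, error) {
	query := fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", name)
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", query).Output()
		if err != nil {
			return "", err // e.g. the VM does not exist
		}
		if ip := strings.TrimSpace(string(out)); ip != "" {
			return ip, nil // 172.29.129.181 in the run above
		}
		time.Sleep(time.Second) // matches the ~1s cadence in the log
	}
	return "", fmt.Errorf("no IP for %q within %s", name, timeout)
}

func main() {
	fmt.Println(vmIP("multinode-032400", 5*time.Minute))
}
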
	I0210 12:20:38.803140    5644 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-032400\config.json ...
	I0210 12:20:38.804948    5644 machine.go:93] provisionDockerMachine start ...
	I0210 12:20:38.805050    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400 ).state
	I0210 12:20:40.747623    5644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:20:40.747623    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:20:40.747623    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400 ).networkadapters[0]).ipaddresses[0]
	I0210 12:20:43.047020    5644 main.go:141] libmachine: [stdout =====>] : 172.29.129.181
	
	I0210 12:20:43.047020    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:20:43.051439    5644 main.go:141] libmachine: Using SSH client type: native
	I0210 12:20:43.051439    5644 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xde6ba0] 0xde96e0 <nil>  [] 0s} 172.29.129.181 22 <nil> <nil>}
	I0210 12:20:43.052013    5644 main.go:141] libmachine: About to run SSH command:
	hostname
	I0210 12:20:43.192843    5644 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0210 12:20:43.192843    5644 buildroot.go:166] provisioning hostname "multinode-032400"
	I0210 12:20:43.192843    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400 ).state
	I0210 12:20:45.142944    5644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:20:45.142944    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:20:45.143198    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400 ).networkadapters[0]).ipaddresses[0]
	I0210 12:20:47.456601    5644 main.go:141] libmachine: [stdout =====>] : 172.29.129.181
	
	I0210 12:20:47.456601    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:20:47.460733    5644 main.go:141] libmachine: Using SSH client type: native
	I0210 12:20:47.460733    5644 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xde6ba0] 0xde96e0 <nil>  [] 0s} 172.29.129.181 22 <nil> <nil>}
	I0210 12:20:47.460733    5644 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-032400 && echo "multinode-032400" | sudo tee /etc/hostname
	I0210 12:20:47.636991    5644 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-032400
	
	I0210 12:20:47.636991    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400 ).state
	I0210 12:20:49.588695    5644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:20:49.589077    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:20:49.589152    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400 ).networkadapters[0]).ipaddresses[0]
	I0210 12:20:51.921453    5644 main.go:141] libmachine: [stdout =====>] : 172.29.129.181
	
	I0210 12:20:51.921453    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:20:51.925341    5644 main.go:141] libmachine: Using SSH client type: native
	I0210 12:20:51.925823    5644 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xde6ba0] 0xde96e0 <nil>  [] 0s} 172.29.129.181 22 <nil> <nil>}
	I0210 12:20:51.925823    5644 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-032400' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-032400/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-032400' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0210 12:20:52.083308    5644 main.go:141] libmachine: SSH cmd err, output: <nil>: 
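
Hostname provisioning above is three SSH round-trips: hostname, the sudo hostname ... | sudo tee /etc/hostname pair, and the idempotent /etc/hosts guard that only rewrites the 127.0.1.1 entry when the name is missing. A sketch of running one such command with golang.org/x/crypto/ssh, assuming the id_rsa path that appears later in this log; host-key verification is skipped purely to keep the sketch short, minikube's real runner manages keys itself:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// runOverSSH runs one command on the VM, mirroring the "About to run SSH
// command" steps in the log.
func runOverSSH(ip, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	client, err := ssh.Dial("tcp", ip+":22", &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // sketch only
	})
	if err != nil {
		return "", err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer session.Close()
	out, err := session.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	out, err := runOverSSH("172.29.129.181",
		`C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-032400\id_rsa`,
		"hostname")
	fmt.Println(out, err)
}
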
	I0210 12:20:52.083417    5644 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0210 12:20:52.083417    5644 buildroot.go:174] setting up certificates
	I0210 12:20:52.083550    5644 provision.go:84] configureAuth start
	I0210 12:20:52.083550    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400 ).state
	I0210 12:20:54.063570    5644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:20:54.064485    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:20:54.064485    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400 ).networkadapters[0]).ipaddresses[0]
	I0210 12:20:56.374737    5644 main.go:141] libmachine: [stdout =====>] : 172.29.129.181
	
	I0210 12:20:56.375309    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:20:56.375404    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400 ).state
	I0210 12:20:58.325938    5644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:20:58.325938    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:20:58.326886    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400 ).networkadapters[0]).ipaddresses[0]
	I0210 12:21:00.674152    5644 main.go:141] libmachine: [stdout =====>] : 172.29.129.181
	
	I0210 12:21:00.674867    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:21:00.674867    5644 provision.go:143] copyHostCerts
	I0210 12:21:00.675016    5644 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0210 12:21:00.675090    5644 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0210 12:21:00.675090    5644 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0210 12:21:00.675090    5644 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0210 12:21:00.676388    5644 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0210 12:21:00.676560    5644 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0210 12:21:00.676560    5644 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0210 12:21:00.676796    5644 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0210 12:21:00.677631    5644 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0210 12:21:00.677785    5644 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0210 12:21:00.677864    5644 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0210 12:21:00.678113    5644 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1675 bytes)
	I0210 12:21:00.678940    5644 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-032400 san=[127.0.0.1 172.29.129.181 localhost minikube multinode-032400]
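
provision.go:117 above signs a server certificate against the profile CA with SANs [127.0.0.1 172.29.129.181 localhost minikube multinode-032400]. A self-contained crypto/x509 sketch of that step; the throwaway CA is an assumption made so the sketch runs standalone, whereas the real run reuses the existing ca.pem/ca-key.pem pair:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func check(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// Throwaway CA so the sketch is self-contained.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	ca := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, ca, ca, &caKey.PublicKey, caKey)
	check(err)
	caCert, err := x509.ParseCertificate(caDER)
	check(err)

	// Server certificate with the org and SANs from the provision.go line.
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	srv := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-032400"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "multinode-032400"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.29.129.181")},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srv, caCert, &srvKey.PublicKey, caKey)
	check(err)
	check(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER}))
}
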
	I0210 12:21:00.904994    5644 provision.go:177] copyRemoteCerts
	I0210 12:21:00.912869    5644 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0210 12:21:00.912869    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400 ).state
	I0210 12:21:02.845039    5644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:21:02.845039    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:21:02.845703    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400 ).networkadapters[0]).ipaddresses[0]
	I0210 12:21:05.162268    5644 main.go:141] libmachine: [stdout =====>] : 172.29.129.181
	
	I0210 12:21:05.163187    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:21:05.163781    5644 sshutil.go:53] new ssh client: &{IP:172.29.129.181 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-032400\id_rsa Username:docker}
	I0210 12:21:05.271361    5644 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.3584436s)
	I0210 12:21:05.271481    5644 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0210 12:21:05.271636    5644 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0210 12:21:05.318273    5644 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0210 12:21:05.318273    5644 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0210 12:21:05.364194    5644 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0210 12:21:05.364637    5644 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0210 12:21:05.408966    5644 provision.go:87] duration metric: took 13.3252675s to configureAuth
	I0210 12:21:05.409045    5644 buildroot.go:189] setting minikube options for container-runtime
	I0210 12:21:05.409759    5644 config.go:182] Loaded profile config "multinode-032400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0210 12:21:05.409818    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400 ).state
	I0210 12:21:07.365428    5644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:21:07.365428    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:21:07.366119    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400 ).networkadapters[0]).ipaddresses[0]
	I0210 12:21:09.714377    5644 main.go:141] libmachine: [stdout =====>] : 172.29.129.181
	
	I0210 12:21:09.714377    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:21:09.718506    5644 main.go:141] libmachine: Using SSH client type: native
	I0210 12:21:09.718893    5644 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xde6ba0] 0xde96e0 <nil>  [] 0s} 172.29.129.181 22 <nil> <nil>}
	I0210 12:21:09.718893    5644 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0210 12:21:09.854166    5644 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0210 12:21:09.854231    5644 buildroot.go:70] root file system type: tmpfs
	I0210 12:21:09.854404    5644 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0210 12:21:09.854467    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400 ).state
	I0210 12:21:11.808474    5644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:21:11.808474    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:21:11.809408    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400 ).networkadapters[0]).ipaddresses[0]
	I0210 12:21:14.161928    5644 main.go:141] libmachine: [stdout =====>] : 172.29.129.181
	
	I0210 12:21:14.162319    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:21:14.165955    5644 main.go:141] libmachine: Using SSH client type: native
	I0210 12:21:14.166640    5644 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xde6ba0] 0xde96e0 <nil>  [] 0s} 172.29.129.181 22 <nil> <nil>}
	I0210 12:21:14.166640    5644 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0210 12:21:14.333386    5644 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0210 12:21:14.334268    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400 ).state
	I0210 12:21:16.282642    5644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:21:16.282642    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:21:16.282741    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400 ).networkadapters[0]).ipaddresses[0]
	I0210 12:21:18.624134    5644 main.go:141] libmachine: [stdout =====>] : 172.29.129.181
	
	I0210 12:21:18.624134    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:21:18.629267    5644 main.go:141] libmachine: Using SSH client type: native
	I0210 12:21:18.629645    5644 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xde6ba0] 0xde96e0 <nil>  [] 0s} 172.29.129.181 22 <nil> <nil>}
	I0210 12:21:18.629645    5644 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0210 12:21:21.134811    5644 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0210 12:21:21.134811    5644 machine.go:96] duration metric: took 42.329393s to provisionDockerMachine
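
The diff -u ... || { mv ...; systemctl ... } guard above makes the unit update idempotent: docker is enabled and restarted only when the rendered docker.service actually changed. In this run diff failed because no unit existed yet, so the swap ran and produced the "Created symlink" line. A local Go sketch of the same write-compare-swap-restart idiom; minikube executes the equivalent remotely over SSH, and the function name is illustrative:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// updateUnitIfChanged mirrors the guard above: write the rendered unit next
// to the live one, and only swap it in and bounce docker when the content
// actually differs (or, as in this run, when no unit existed yet).
func updateUnitIfChanged(path, content string) error {
	if old, err := os.ReadFile(path); err == nil && string(old) == content {
		return nil // unchanged: skip daemon-reload/enable/restart entirely
	}
	if err := os.WriteFile(path+".new", []byte(content), 0o644); err != nil {
		return err
	}
	if err := os.Rename(path+".new", path); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"systemctl", "-f", "daemon-reload"},
		{"systemctl", "-f", "enable", "docker"},
		{"systemctl", "-f", "restart", "docker"},
	} {
		if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
			return fmt.Errorf("%v failed: %v: %s", args, err, out)
		}
	}
	return nil
}

func main() {
	fmt.Println(updateUnitIfChanged("/lib/systemd/system/docker.service", "[Unit]\n..."))
}
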
	I0210 12:21:21.134811    5644 start.go:293] postStartSetup for "multinode-032400" (driver="hyperv")
	I0210 12:21:21.134811    5644 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0210 12:21:21.143069    5644 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0210 12:21:21.143069    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400 ).state
	I0210 12:21:23.117764    5644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:21:23.117870    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:21:23.117870    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400 ).networkadapters[0]).ipaddresses[0]
	I0210 12:21:25.439954    5644 main.go:141] libmachine: [stdout =====>] : 172.29.129.181
	
	I0210 12:21:25.440879    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:21:25.440879    5644 sshutil.go:53] new ssh client: &{IP:172.29.129.181 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-032400\id_rsa Username:docker}
	I0210 12:21:25.561375    5644 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.4182566s)
	I0210 12:21:25.569498    5644 ssh_runner.go:195] Run: cat /etc/os-release
	I0210 12:21:25.576494    5644 command_runner.go:130] > NAME=Buildroot
	I0210 12:21:25.576494    5644 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0210 12:21:25.576494    5644 command_runner.go:130] > ID=buildroot
	I0210 12:21:25.576494    5644 command_runner.go:130] > VERSION_ID=2023.02.9
	I0210 12:21:25.576494    5644 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0210 12:21:25.576494    5644 info.go:137] Remote host: Buildroot 2023.02.9
	I0210 12:21:25.576494    5644 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0210 12:21:25.577114    5644 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0210 12:21:25.577230    5644 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\117642.pem -> 117642.pem in /etc/ssl/certs
	I0210 12:21:25.577668    5644 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\117642.pem -> /etc/ssl/certs/117642.pem
	I0210 12:21:25.586169    5644 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0210 12:21:25.604342    5644 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\117642.pem --> /etc/ssl/certs/117642.pem (1708 bytes)
	I0210 12:21:25.649490    5644 start.go:296] duration metric: took 4.514593s for postStartSetup
	I0210 12:21:25.649626    5644 fix.go:56] duration metric: took 1m22.3335121s for fixHost
	I0210 12:21:25.649667    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400 ).state
	I0210 12:21:27.613655    5644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:21:27.614667    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:21:27.614822    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400 ).networkadapters[0]).ipaddresses[0]
	I0210 12:21:29.966101    5644 main.go:141] libmachine: [stdout =====>] : 172.29.129.181
	
	I0210 12:21:29.966101    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:21:29.969670    5644 main.go:141] libmachine: Using SSH client type: native
	I0210 12:21:29.970260    5644 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xde6ba0] 0xde96e0 <nil>  [] 0s} 172.29.129.181 22 <nil> <nil>}
	I0210 12:21:29.970260    5644 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0210 12:21:30.105160    5644 main.go:141] libmachine: SSH cmd err, output: <nil>: 1739190090.106974362
	
	I0210 12:21:30.105160    5644 fix.go:216] guest clock: 1739190090.106974362
	I0210 12:21:30.105160    5644 fix.go:229] Guest: 2025-02-10 12:21:30.106974362 +0000 UTC Remote: 2025-02-10 12:21:25.6496267 +0000 UTC m=+88.153616101 (delta=4.457347662s)
	I0210 12:21:30.105160    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400 ).state
	I0210 12:21:32.052629    5644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:21:32.052629    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:21:32.053609    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400 ).networkadapters[0]).ipaddresses[0]
	I0210 12:21:34.387515    5644 main.go:141] libmachine: [stdout =====>] : 172.29.129.181
	
	I0210 12:21:34.388577    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:21:34.392418    5644 main.go:141] libmachine: Using SSH client type: native
	I0210 12:21:34.393026    5644 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xde6ba0] 0xde96e0 <nil>  [] 0s} 172.29.129.181 22 <nil> <nil>}
	I0210 12:21:34.393026    5644 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1739190090
	I0210 12:21:34.548507    5644 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Feb 10 12:21:30 UTC 2025
	
	I0210 12:21:34.548507    5644 fix.go:236] clock set: Mon Feb 10 12:21:30 UTC 2025
	 (err=<nil>)
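
fix.go above reads the guest clock with date +%s.%N, computes the drift against the host (delta=4.457347662s here), and resets the guest with sudo date -s @<epoch>. A Go sketch of that check, injecting a runner like the SSH helper sketched further up; the 2-second threshold is an assumption, not taken from the log:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// syncGuestClock reads the guest epoch via `date +%s.%N`, compares it with
// host time, and resets the guest with `sudo date -s @<epoch>` when the
// drift exceeds threshold.
func syncGuestClock(run func(cmd string) (string, error), threshold time.Duration) error {
	out, err := run("date +%s.%N")
	if err != nil {
		return err
	}
	sec, err := strconv.ParseFloat(strings.TrimSpace(out), 64)
	if err != nil {
		return err
	}
	drift := time.Since(time.Unix(0, int64(sec*1e9)))
	if drift < 0 {
		drift = -drift
	}
	if drift <= threshold {
		return nil // close enough: leave the guest clock alone
	}
	_, err = run(fmt.Sprintf("sudo date -s @%d", time.Now().Unix()))
	return err
}

func main() {
	// Fake runner reporting a guest clock ~4.5s ahead, like the delta above.
	fake := func(cmd string) (string, error) {
		if strings.HasPrefix(cmd, "date +") {
			return fmt.Sprintf("%.9f", float64(time.Now().Add(4500*time.Millisecond).UnixNano())/1e9), nil
		}
		return "", nil
	}
	fmt.Println(syncGuestClock(fake, 2*time.Second))
}
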
	I0210 12:21:34.548507    5644 start.go:83] releasing machines lock for "multinode-032400", held for 1m31.2322944s
	I0210 12:21:34.548507    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400 ).state
	I0210 12:21:36.486302    5644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:21:36.486565    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:21:36.486565    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400 ).networkadapters[0]).ipaddresses[0]
	I0210 12:21:38.812615    5644 main.go:141] libmachine: [stdout =====>] : 172.29.129.181
	
	I0210 12:21:38.812615    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:21:38.816072    5644 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0210 12:21:38.816215    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400 ).state
	I0210 12:21:38.824299    5644 ssh_runner.go:195] Run: cat /version.json
	I0210 12:21:38.824299    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400 ).state
	I0210 12:21:40.776276    5644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:21:40.776276    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:21:40.776276    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400 ).networkadapters[0]).ipaddresses[0]
	I0210 12:21:40.776276    5644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:21:40.776276    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:21:40.776276    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400 ).networkadapters[0]).ipaddresses[0]
	I0210 12:21:43.165463    5644 main.go:141] libmachine: [stdout =====>] : 172.29.129.181
	
	I0210 12:21:43.165463    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:21:43.166320    5644 sshutil.go:53] new ssh client: &{IP:172.29.129.181 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-032400\id_rsa Username:docker}
	I0210 12:21:43.185488    5644 main.go:141] libmachine: [stdout =====>] : 172.29.129.181
	
	I0210 12:21:43.185488    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:21:43.185488    5644 sshutil.go:53] new ssh client: &{IP:172.29.129.181 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-032400\id_rsa Username:docker}
	I0210 12:21:43.262831    5644 command_runner.go:130] > {"iso_version": "v1.35.0", "kicbase_version": "v0.0.45-1736763277-20236", "minikube_version": "v1.35.0", "commit": "3fb24bd87c8c8761e2515e1a9ee13835a389ed68"}
	I0210 12:21:43.262831    5644 ssh_runner.go:235] Completed: cat /version.json: (4.4384829s)
	I0210 12:21:43.270240    5644 ssh_runner.go:195] Run: systemctl --version
	I0210 12:21:43.275956    5644 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	I0210 12:21:43.275956    5644 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.4598348s)
	W0210 12:21:43.275956    5644 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0210 12:21:43.283755    5644 command_runner.go:130] > systemd 252 (252)
	I0210 12:21:43.283755    5644 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0210 12:21:43.293242    5644 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0210 12:21:43.301351    5644 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0210 12:21:43.301883    5644 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0210 12:21:43.310011    5644 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0210 12:21:43.337342    5644 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0210 12:21:43.337794    5644 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0210 12:21:43.337794    5644 start.go:495] detecting cgroup driver to use...
	I0210 12:21:43.338053    5644 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0210 12:21:43.371079    5644 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0210 12:21:43.379856    5644 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	W0210 12:21:43.387359    5644 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0210 12:21:43.387359    5644 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0210 12:21:43.408371    5644 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0210 12:21:43.429852    5644 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0210 12:21:43.441849    5644 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0210 12:21:43.478337    5644 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0210 12:21:43.507578    5644 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0210 12:21:43.536429    5644 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0210 12:21:43.566958    5644 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0210 12:21:43.595675    5644 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0210 12:21:43.623687    5644 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0210 12:21:43.651529    5644 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0210 12:21:43.677590    5644 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0210 12:21:43.695433    5644 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0210 12:21:43.695510    5644 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0210 12:21:43.703726    5644 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0210 12:21:43.732726    5644 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0210 12:21:43.762380    5644 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 12:21:43.946917    5644 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0210 12:21:43.976787    5644 start.go:495] detecting cgroup driver to use...
	I0210 12:21:43.986197    5644 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0210 12:21:44.012344    5644 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0210 12:21:44.012344    5644 command_runner.go:130] > [Unit]
	I0210 12:21:44.012344    5644 command_runner.go:130] > Description=Docker Application Container Engine
	I0210 12:21:44.012344    5644 command_runner.go:130] > Documentation=https://docs.docker.com
	I0210 12:21:44.012344    5644 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0210 12:21:44.012344    5644 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0210 12:21:44.012344    5644 command_runner.go:130] > StartLimitBurst=3
	I0210 12:21:44.012344    5644 command_runner.go:130] > StartLimitIntervalSec=60
	I0210 12:21:44.012344    5644 command_runner.go:130] > [Service]
	I0210 12:21:44.012344    5644 command_runner.go:130] > Type=notify
	I0210 12:21:44.012344    5644 command_runner.go:130] > Restart=on-failure
	I0210 12:21:44.012344    5644 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0210 12:21:44.012883    5644 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0210 12:21:44.012883    5644 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0210 12:21:44.012883    5644 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0210 12:21:44.012883    5644 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0210 12:21:44.012883    5644 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0210 12:21:44.012883    5644 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0210 12:21:44.012996    5644 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0210 12:21:44.012996    5644 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0210 12:21:44.012996    5644 command_runner.go:130] > ExecStart=
	I0210 12:21:44.012996    5644 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0210 12:21:44.013084    5644 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0210 12:21:44.013084    5644 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0210 12:21:44.013084    5644 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0210 12:21:44.013084    5644 command_runner.go:130] > LimitNOFILE=infinity
	I0210 12:21:44.013084    5644 command_runner.go:130] > LimitNPROC=infinity
	I0210 12:21:44.013084    5644 command_runner.go:130] > LimitCORE=infinity
	I0210 12:21:44.013084    5644 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0210 12:21:44.013084    5644 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0210 12:21:44.013084    5644 command_runner.go:130] > TasksMax=infinity
	I0210 12:21:44.013084    5644 command_runner.go:130] > TimeoutStartSec=0
	I0210 12:21:44.013084    5644 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0210 12:21:44.013084    5644 command_runner.go:130] > Delegate=yes
	I0210 12:21:44.013084    5644 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0210 12:21:44.013084    5644 command_runner.go:130] > KillMode=process
	I0210 12:21:44.013084    5644 command_runner.go:130] > [Install]
	I0210 12:21:44.013084    5644 command_runner.go:130] > WantedBy=multi-user.target
	I0210 12:21:44.022094    5644 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0210 12:21:44.053114    5644 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0210 12:21:44.090425    5644 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0210 12:21:44.121358    5644 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0210 12:21:44.152819    5644 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0210 12:21:44.210949    5644 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0210 12:21:44.234437    5644 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0210 12:21:44.266558    5644 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0210 12:21:44.277138    5644 ssh_runner.go:195] Run: which cri-dockerd
	I0210 12:21:44.282708    5644 command_runner.go:130] > /usr/bin/cri-dockerd
	I0210 12:21:44.292452    5644 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0210 12:21:44.311196    5644 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0210 12:21:44.350600    5644 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0210 12:21:44.544376    5644 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0210 12:21:44.749724    5644 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0210 12:21:44.749724    5644 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0210 12:21:44.790452    5644 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 12:21:44.984206    5644 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0210 12:21:47.653313    5644 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.6690775s)
	I0210 12:21:47.662643    5644 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0210 12:21:47.693320    5644 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0210 12:21:47.724728    5644 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0210 12:21:47.920192    5644 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0210 12:21:48.097241    5644 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 12:21:48.282606    5644 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0210 12:21:48.320811    5644 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0210 12:21:48.353054    5644 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 12:21:48.546204    5644 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0210 12:21:48.652185    5644 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0210 12:21:48.662453    5644 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0210 12:21:48.671127    5644 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0210 12:21:48.671173    5644 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0210 12:21:48.671205    5644 command_runner.go:130] > Device: 0,22	Inode: 850         Links: 1
	I0210 12:21:48.671205    5644 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0210 12:21:48.671205    5644 command_runner.go:130] > Access: 2025-02-10 12:21:48.585360210 +0000
	I0210 12:21:48.671205    5644 command_runner.go:130] > Modify: 2025-02-10 12:21:48.585360210 +0000
	I0210 12:21:48.671205    5644 command_runner.go:130] > Change: 2025-02-10 12:21:48.588360354 +0000
	I0210 12:21:48.671264    5644 command_runner.go:130] >  Birth: -
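
start.go:542 above budgets 60s for /var/run/cri-dockerd.sock to appear after the cri-docker.service restart, confirming it with stat (note the socket mode srw-rw---- in the output). A minimal Go sketch of such a wait; the 500ms poll interval is an assumption:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until path exists and is a unix socket, or the budget
// runs out, like the 60s wait for /var/run/cri-dockerd.sock above.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s did not appear within %s", path, timeout)
}

func main() {
	fmt.Println(waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second))
}
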
	I0210 12:21:48.671298    5644 start.go:563] Will wait 60s for crictl version
	I0210 12:21:48.678779    5644 ssh_runner.go:195] Run: which crictl
	I0210 12:21:48.685382    5644 command_runner.go:130] > /usr/bin/crictl
	I0210 12:21:48.695251    5644 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0210 12:21:48.751805    5644 command_runner.go:130] > Version:  0.1.0
	I0210 12:21:48.751805    5644 command_runner.go:130] > RuntimeName:  docker
	I0210 12:21:48.751805    5644 command_runner.go:130] > RuntimeVersion:  27.4.0
	I0210 12:21:48.751805    5644 command_runner.go:130] > RuntimeApiVersion:  v1
	I0210 12:21:48.751896    5644 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.4.0
	RuntimeApiVersion:  v1
	I0210 12:21:48.758474    5644 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0210 12:21:48.791714    5644 command_runner.go:130] > 27.4.0
	I0210 12:21:48.802060    5644 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0210 12:21:48.836905    5644 command_runner.go:130] > 27.4.0
	I0210 12:21:48.838600    5644 out.go:235] * Preparing Kubernetes v1.32.1 on Docker 27.4.0 ...
	I0210 12:21:48.839975    5644 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0210 12:21:48.843691    5644 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0210 12:21:48.843691    5644 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0210 12:21:48.843691    5644 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0210 12:21:48.843691    5644 ip.go:211] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:8b:81:3d Flags:up|broadcast|multicast|running}
	I0210 12:21:48.846104    5644 ip.go:214] interface addr: fe80::b840:883f:e0df:bdfd/64
	I0210 12:21:48.846104    5644 ip.go:214] interface addr: 172.29.128.1/20
	I0210 12:21:48.853658    5644 ssh_runner.go:195] Run: grep 172.29.128.1	host.minikube.internal$ /etc/hosts
	I0210 12:21:48.860206    5644 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.29.128.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0210 12:21:48.881782    5644 kubeadm.go:883] updating cluster {Name:multinode-032400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:multinode-032400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.29.129.181 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.29.143.51 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.29.129.10 Port:0 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0210 12:21:48.882095    5644 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime docker
	I0210 12:21:48.889611    5644 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0210 12:21:48.913218    5644 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.32.1
	I0210 12:21:48.914239    5644 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.32.1
	I0210 12:21:48.914239    5644 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.32.1
	I0210 12:21:48.914239    5644 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.32.1
	I0210 12:21:48.914239    5644 command_runner.go:130] > kindest/kindnetd:v20241212-9f82dd49
	I0210 12:21:48.914239    5644 command_runner.go:130] > registry.k8s.io/etcd:3.5.16-0
	I0210 12:21:48.914239    5644 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.3
	I0210 12:21:48.914239    5644 command_runner.go:130] > registry.k8s.io/pause:3.10
	I0210 12:21:48.914239    5644 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0210 12:21:48.914239    5644 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0210 12:21:48.914239    5644 docker.go:689] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.32.1
	registry.k8s.io/kube-scheduler:v1.32.1
	registry.k8s.io/kube-controller-manager:v1.32.1
	registry.k8s.io/kube-proxy:v1.32.1
	kindest/kindnetd:v20241212-9f82dd49
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0210 12:21:48.914239    5644 docker.go:619] Images already preloaded, skipping extraction
	I0210 12:21:48.921891    5644 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0210 12:21:48.947204    5644 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.32.1
	I0210 12:21:48.947204    5644 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.32.1
	I0210 12:21:48.947293    5644 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.32.1
	I0210 12:21:48.947293    5644 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.32.1
	I0210 12:21:48.947293    5644 command_runner.go:130] > kindest/kindnetd:v20241212-9f82dd49
	I0210 12:21:48.947293    5644 command_runner.go:130] > registry.k8s.io/etcd:3.5.16-0
	I0210 12:21:48.947293    5644 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.3
	I0210 12:21:48.947293    5644 command_runner.go:130] > registry.k8s.io/pause:3.10
	I0210 12:21:48.947293    5644 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0210 12:21:48.947293    5644 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0210 12:21:48.947293    5644 docker.go:689] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.32.1
	registry.k8s.io/kube-controller-manager:v1.32.1
	registry.k8s.io/kube-scheduler:v1.32.1
	registry.k8s.io/kube-proxy:v1.32.1
	kindest/kindnetd:v20241212-9f82dd49
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0210 12:21:48.947434    5644 cache_images.go:84] Images are preloaded, skipping loading
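Note on the step above: the two back-to-back `docker images --format {{.Repository}}:{{.Tag}}` listings are the preload check — if every image baked into the preload tarball is already present in the VM's Docker daemon, tarball extraction is skipped ("Images already preloaded, skipping extraction"). A minimal sketch of that comparison, under the assumption that it reduces to set membership on image:tag strings (not minikube's actual docker.go):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // preloadComplete reports whether every expected image:tag already shows
    // up in `docker images` output — the condition behind the
    // "Images already preloaded" line above.
    func preloadComplete(expected []string) (bool, error) {
    	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
    	if err != nil {
    		return false, err
    	}
    	have := make(map[string]bool)
    	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
    		have[line] = true
    	}
    	for _, img := range expected {
    		if !have[img] {
    			return false, nil
    		}
    	}
    	return true, nil
    }

    func main() {
    	// Expected list taken verbatim from the log above (abbreviated).
    	ok, err := preloadComplete([]string{
    		"registry.k8s.io/kube-apiserver:v1.32.1",
    		"registry.k8s.io/etcd:3.5.16-0",
    		"registry.k8s.io/pause:3.10",
    	})
    	fmt.Println(ok, err)
    }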
	I0210 12:21:48.947469    5644 kubeadm.go:934] updating node { 172.29.129.181 8443 v1.32.1 docker true true} ...
	I0210 12:21:48.947678    5644 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-032400 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.29.129.181
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:multinode-032400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
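Note on the kubelet drop-in above: the empty `ExecStart=` line is standard systemd override syntax — list-type directives accumulate across drop-ins, so the blank assignment clears the ExecStart inherited from the base kubelet.service before the next line installs the minikube-specific command (with --node-ip pinned to 172.29.129.181). `Wants=docker.socket` pulls the Docker socket in as a soft dependency without making kubelet fail if it is absent. This rendered unit is what gets copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below.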
	I0210 12:21:48.956603    5644 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0210 12:21:49.019097    5644 command_runner.go:130] > cgroupfs
	I0210 12:21:49.021088    5644 cni.go:84] Creating CNI manager for ""
	I0210 12:21:49.021189    5644 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0210 12:21:49.021189    5644 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0210 12:21:49.021189    5644 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.29.129.181 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-032400 NodeName:multinode-032400 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.29.129.181"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.29.129.181 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0210 12:21:49.021471    5644 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.29.129.181
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-032400"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "172.29.129.181"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.29.129.181"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
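	Note on the config above: the kubeadm YAML is rendered from the `kubeadm options:` struct dumped earlier in this step. A minimal sketch of that style of generation with text/template; the template fields and their wiring are hypothetical, only the values are taken from this log:

    package main

    import (
    	"os"
    	"text/template"
    )

    // Abbreviated stand-in for the InitConfiguration header seen above.
    const kubeadmTmpl = "apiVersion: kubeadm.k8s.io/v1beta4\n" +
    	"kind: InitConfiguration\n" +
    	"localAPIEndpoint:\n" +
    	"  advertiseAddress: {{.AdvertiseAddress}}\n" +
    	"  bindPort: {{.BindPort}}\n" +
    	"nodeRegistration:\n" +
    	"  criSocket: {{.CRISocket}}\n" +
    	"  name: \"{{.NodeName}}\"\n"

    func main() {
    	t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
    	// Values copied from the kubeadm options dump above.
    	_ = t.Execute(os.Stdout, map[string]any{
    		"AdvertiseAddress": "172.29.129.181",
    		"BindPort":         8443,
    		"CRISocket":        "unix:///var/run/cri-dockerd.sock",
    		"NodeName":         "multinode-032400",
    	})
    }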
	
	I0210 12:21:49.030818    5644 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0210 12:21:49.059023    5644 command_runner.go:130] > kubeadm
	I0210 12:21:49.059112    5644 command_runner.go:130] > kubectl
	I0210 12:21:49.059112    5644 command_runner.go:130] > kubelet
	I0210 12:21:49.059213    5644 binaries.go:44] Found k8s binaries, skipping transfer
	I0210 12:21:49.066897    5644 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0210 12:21:49.084845    5644 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0210 12:21:49.115566    5644 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0210 12:21:49.144925    5644 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2300 bytes)
	I0210 12:21:49.185214    5644 ssh_runner.go:195] Run: grep 172.29.129.181	control-plane.minikube.internal$ /etc/hosts
	I0210 12:21:49.191138    5644 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.29.129.181	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
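	Note on the one-liner above: it is an idempotent hosts-file update — grep -v drops any existing control-plane.minikube.internal entry, echo appends the current mapping (172.29.129.181), the result goes to a temp file keyed by the shell PID ($$), and sudo cp moves it back over /etc/hosts, so repeated starts never accumulate duplicate entries. The preceding grep is the fast path: if the exact entry is already present, no rewrite is needed.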
	I0210 12:21:49.220877    5644 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 12:21:49.414971    5644 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0210 12:21:49.442504    5644 certs.go:68] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-032400 for IP: 172.29.129.181
	I0210 12:21:49.442504    5644 certs.go:194] generating shared ca certs ...
	I0210 12:21:49.442504    5644 certs.go:226] acquiring lock for ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 12:21:49.444000    5644 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0210 12:21:49.444390    5644 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0210 12:21:49.444514    5644 certs.go:256] generating profile certs ...
	I0210 12:21:49.445114    5644 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-032400\client.key
	I0210 12:21:49.445222    5644 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-032400\apiserver.key.031eff2d
	I0210 12:21:49.445222    5644 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-032400\apiserver.crt.031eff2d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.29.129.181]
	I0210 12:21:49.625501    5644 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-032400\apiserver.crt.031eff2d ...
	I0210 12:21:49.625501    5644 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-032400\apiserver.crt.031eff2d: {Name:mkdf52c332ce3be44472e32ef1425e0bace63214 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 12:21:49.627403    5644 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-032400\apiserver.key.031eff2d ...
	I0210 12:21:49.627403    5644 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-032400\apiserver.key.031eff2d: {Name:mk37c561ceb16c113cacfa4d153c64399d5339b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 12:21:49.628394    5644 certs.go:381] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-032400\apiserver.crt.031eff2d -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-032400\apiserver.crt
	I0210 12:21:49.644387    5644 certs.go:385] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-032400\apiserver.key.031eff2d -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-032400\apiserver.key
	I0210 12:21:49.644841    5644 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-032400\proxy-client.key
	I0210 12:21:49.644841    5644 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0210 12:21:49.644841    5644 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0210 12:21:49.645725    5644 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0210 12:21:49.645725    5644 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0210 12:21:49.645860    5644 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-032400\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0210 12:21:49.646019    5644 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-032400\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0210 12:21:49.646243    5644 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-032400\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0210 12:21:49.646890    5644 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-032400\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0210 12:21:49.647069    5644 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\11764.pem (1338 bytes)
	W0210 12:21:49.647495    5644 certs.go:480] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\11764_empty.pem, impossibly tiny 0 bytes
	I0210 12:21:49.647495    5644 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0210 12:21:49.647904    5644 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0210 12:21:49.647904    5644 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0210 12:21:49.647904    5644 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0210 12:21:49.647904    5644 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\117642.pem (1708 bytes)
	I0210 12:21:49.647904    5644 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\117642.pem -> /usr/share/ca-certificates/117642.pem
	I0210 12:21:49.647904    5644 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0210 12:21:49.647904    5644 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\11764.pem -> /usr/share/ca-certificates/11764.pem
	I0210 12:21:49.649214    5644 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0210 12:21:49.700052    5644 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0210 12:21:49.745099    5644 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0210 12:21:49.789120    5644 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0210 12:21:49.837845    5644 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-032400\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0210 12:21:49.883491    5644 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-032400\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0210 12:21:49.930844    5644 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-032400\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0210 12:21:49.976563    5644 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-032400\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0210 12:21:50.022268    5644 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\117642.pem --> /usr/share/ca-certificates/117642.pem (1708 bytes)
	I0210 12:21:50.071783    5644 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0210 12:21:50.116006    5644 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\11764.pem --> /usr/share/ca-certificates/11764.pem (1338 bytes)
	I0210 12:21:50.158805    5644 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0210 12:21:50.203279    5644 ssh_runner.go:195] Run: openssl version
	I0210 12:21:50.212022    5644 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0210 12:21:50.220408    5644 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/117642.pem && ln -fs /usr/share/ca-certificates/117642.pem /etc/ssl/certs/117642.pem"
	I0210 12:21:50.248617    5644 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/117642.pem
	I0210 12:21:50.254174    5644 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Feb 10 10:40 /usr/share/ca-certificates/117642.pem
	I0210 12:21:50.254174    5644 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb 10 10:40 /usr/share/ca-certificates/117642.pem
	I0210 12:21:50.262376    5644 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/117642.pem
	I0210 12:21:50.270644    5644 command_runner.go:130] > 3ec20f2e
	I0210 12:21:50.279601    5644 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/117642.pem /etc/ssl/certs/3ec20f2e.0"
	I0210 12:21:50.305282    5644 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0210 12:21:50.332409    5644 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0210 12:21:50.339591    5644 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Feb 10 10:24 /usr/share/ca-certificates/minikubeCA.pem
	I0210 12:21:50.339633    5644 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb 10 10:24 /usr/share/ca-certificates/minikubeCA.pem
	I0210 12:21:50.347869    5644 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0210 12:21:50.356371    5644 command_runner.go:130] > b5213941
	I0210 12:21:50.364087    5644 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0210 12:21:50.391358    5644 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11764.pem && ln -fs /usr/share/ca-certificates/11764.pem /etc/ssl/certs/11764.pem"
	I0210 12:21:50.419761    5644 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11764.pem
	I0210 12:21:50.426923    5644 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Feb 10 10:40 /usr/share/ca-certificates/11764.pem
	I0210 12:21:50.426923    5644 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb 10 10:40 /usr/share/ca-certificates/11764.pem
	I0210 12:21:50.435416    5644 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11764.pem
	I0210 12:21:50.444020    5644 command_runner.go:130] > 51391683
	I0210 12:21:50.453867    5644 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11764.pem /etc/ssl/certs/51391683.0"
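	Note on the three cert blocks above: each CA file is copied to /usr/share/ca-certificates/ and then symlinked as /etc/ssl/certs/<subject-hash>.0, the name OpenSSL's CA lookup expects; the hash is exactly what `openssl x509 -hash -noout` printed (3ec20f2e, b5213941, 51391683). A sketch of that hash-and-symlink step, assuming it shells out the same way (not minikube's actual certs.go):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    // installCA symlinks a PEM file under the subject-hash name OpenSSL uses
    // for CA lookup, mirroring the `openssl x509 -hash` + `ln -fs` pair above.
    func installCA(pemPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return err
    	}
    	link := fmt.Sprintf("/etc/ssl/certs/%s.0", strings.TrimSpace(string(out)))
    	os.Remove(link) // emulate ln -f: replace any stale link
    	return os.Symlink(pemPath, link)
    }

    func main() {
    	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }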
	I0210 12:21:50.480683    5644 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0210 12:21:50.488096    5644 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0210 12:21:50.488096    5644 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0210 12:21:50.488096    5644 command_runner.go:130] > Device: 8,1	Inode: 531041      Links: 1
	I0210 12:21:50.488096    5644 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0210 12:21:50.488096    5644 command_runner.go:130] > Access: 2025-02-10 11:58:50.702339952 +0000
	I0210 12:21:50.488096    5644 command_runner.go:130] > Modify: 2025-02-10 11:58:50.702339952 +0000
	I0210 12:21:50.488096    5644 command_runner.go:130] > Change: 2025-02-10 11:58:50.702339952 +0000
	I0210 12:21:50.488096    5644 command_runner.go:130] >  Birth: 2025-02-10 11:58:50.702339952 +0000
	I0210 12:21:50.496570    5644 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0210 12:21:50.507079    5644 command_runner.go:130] > Certificate will not expire
	I0210 12:21:50.514991    5644 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0210 12:21:50.525926    5644 command_runner.go:130] > Certificate will not expire
	I0210 12:21:50.533676    5644 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0210 12:21:50.543630    5644 command_runner.go:130] > Certificate will not expire
	I0210 12:21:50.550804    5644 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0210 12:21:50.560642    5644 command_runner.go:130] > Certificate will not expire
	I0210 12:21:50.568582    5644 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0210 12:21:50.577753    5644 command_runner.go:130] > Certificate will not expire
	I0210 12:21:50.585647    5644 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0210 12:21:50.594838    5644 command_runner.go:130] > Certificate will not expire
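	Note on the expiry checks above: `-checkend 86400` makes openssl exit non-zero if the certificate expires within the next 86400 seconds (24 hours), so each "Certificate will not expire" line is a pass. An illustrative native-Go equivalent of the same check:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin mirrors `openssl x509 -checkend`: true if the cert's
    // NotAfter falls inside the next d.
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
    	fmt.Println("expires within 24h:", soon, err)
    }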
	I0210 12:21:50.594838    5644 kubeadm.go:392] StartCluster: {Name:multinode-032400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:multinode-032400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.29.129.181 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.29.143.51 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.29.129.10 Port:0 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 12:21:50.601478    5644 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0210 12:21:50.639879    5644 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0210 12:21:50.658306    5644 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0210 12:21:50.658306    5644 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0210 12:21:50.658306    5644 command_runner.go:130] > /var/lib/minikube/etcd:
	I0210 12:21:50.658306    5644 command_runner.go:130] > member
	I0210 12:21:50.658306    5644 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0210 12:21:50.658306    5644 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0210 12:21:50.666853    5644 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0210 12:21:50.684872    5644 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0210 12:21:50.684872    5644 kubeconfig.go:47] verify endpoint returned: get endpoint: "multinode-032400" does not appear in C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0210 12:21:50.686528    5644 kubeconfig.go:62] C:\Users\jenkins.minikube5\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "multinode-032400" cluster setting kubeconfig missing "multinode-032400" context setting]
	I0210 12:21:50.687399    5644 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\kubeconfig: {Name:mkb19224ea40e2aed3ce8c31a956f5aee129caa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 12:21:50.705569    5644 loader.go:402] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0210 12:21:50.706175    5644 kapi.go:59] client config for multinode-032400: &rest.Config{Host:"https://172.29.129.181:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-032400/client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-032400/client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2a2d300), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0210 12:21:50.707333    5644 cert_rotation.go:140] Starting client certificate rotation controller
	I0210 12:21:50.707447    5644 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0210 12:21:50.707550    5644 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0210 12:21:50.707550    5644 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0210 12:21:50.707550    5644 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0210 12:21:50.716230    5644 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0210 12:21:50.734963    5644 command_runner.go:130] > --- /var/tmp/minikube/kubeadm.yaml
	I0210 12:21:50.734963    5644 command_runner.go:130] > +++ /var/tmp/minikube/kubeadm.yaml.new
	I0210 12:21:50.734963    5644 command_runner.go:130] > @@ -1,7 +1,7 @@
	I0210 12:21:50.734963    5644 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta4
	I0210 12:21:50.734963    5644 command_runner.go:130] >  kind: InitConfiguration
	I0210 12:21:50.734963    5644 command_runner.go:130] >  localAPIEndpoint:
	I0210 12:21:50.734963    5644 command_runner.go:130] > -  advertiseAddress: 172.29.136.201
	I0210 12:21:50.734963    5644 command_runner.go:130] > +  advertiseAddress: 172.29.129.181
	I0210 12:21:50.734963    5644 command_runner.go:130] >    bindPort: 8443
	I0210 12:21:50.734963    5644 command_runner.go:130] >  bootstrapTokens:
	I0210 12:21:50.734963    5644 command_runner.go:130] >    - groups:
	I0210 12:21:50.734963    5644 command_runner.go:130] > @@ -15,13 +15,13 @@
	I0210 12:21:50.734963    5644 command_runner.go:130] >    name: "multinode-032400"
	I0210 12:21:50.734963    5644 command_runner.go:130] >    kubeletExtraArgs:
	I0210 12:21:50.734963    5644 command_runner.go:130] >      - name: "node-ip"
	I0210 12:21:50.734963    5644 command_runner.go:130] > -      value: "172.29.136.201"
	I0210 12:21:50.734963    5644 command_runner.go:130] > +      value: "172.29.129.181"
	I0210 12:21:50.734963    5644 command_runner.go:130] >    taints: []
	I0210 12:21:50.734963    5644 command_runner.go:130] >  ---
	I0210 12:21:50.734963    5644 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta4
	I0210 12:21:50.734963    5644 command_runner.go:130] >  kind: ClusterConfiguration
	I0210 12:21:50.734963    5644 command_runner.go:130] >  apiServer:
	I0210 12:21:50.734963    5644 command_runner.go:130] > -  certSANs: ["127.0.0.1", "localhost", "172.29.136.201"]
	I0210 12:21:50.734963    5644 command_runner.go:130] > +  certSANs: ["127.0.0.1", "localhost", "172.29.129.181"]
	I0210 12:21:50.734963    5644 command_runner.go:130] >    extraArgs:
	I0210 12:21:50.734963    5644 command_runner.go:130] >      - name: "enable-admission-plugins"
	I0210 12:21:50.734963    5644 command_runner.go:130] >        value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	I0210 12:21:50.734963    5644 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -1,7 +1,7 @@
	 apiVersion: kubeadm.k8s.io/v1beta4
	 kind: InitConfiguration
	 localAPIEndpoint:
	-  advertiseAddress: 172.29.136.201
	+  advertiseAddress: 172.29.129.181
	   bindPort: 8443
	 bootstrapTokens:
	   - groups:
	@@ -15,13 +15,13 @@
	   name: "multinode-032400"
	   kubeletExtraArgs:
	     - name: "node-ip"
	-      value: "172.29.136.201"
	+      value: "172.29.129.181"
	   taints: []
	 ---
	 apiVersion: kubeadm.k8s.io/v1beta4
	 kind: ClusterConfiguration
	 apiServer:
	-  certSANs: ["127.0.0.1", "localhost", "172.29.136.201"]
	+  certSANs: ["127.0.0.1", "localhost", "172.29.129.181"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	       value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	
	-- /stdout --
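	Note on the diff above: config drift is decided by `diff -u` between the kubeadm.yaml in use and the freshly rendered kubeadm.yaml.new — any output (in this run, the node IP moving from 172.29.136.201 to 172.29.129.181 across the restart) triggers a reconfigure from the new file, which is the `cp kubeadm.yaml.new kubeadm.yaml` further down. A sketch of that decision, assuming it reduces to a non-empty diff (hypothetical helper):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // configDrifted runs `diff -u oldPath newPath`; diff exits 1 and prints a
    // unified diff when the files differ, which is treated as drift.
    func configDrifted(oldPath, newPath string) (bool, string) {
    	out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
    	return err != nil || len(out) > 0, string(out)
    }

    func main() {
    	drifted, diff := configDrifted("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
    	if drifted {
    		fmt.Print("reconfiguring from new config:\n", diff)
    	}
    }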
	I0210 12:21:50.734963    5644 kubeadm.go:1160] stopping kube-system containers ...
	I0210 12:21:50.742932    5644 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0210 12:21:50.772146    5644 command_runner.go:130] > c5b854dbb912
	I0210 12:21:50.772146    5644 command_runner.go:130] > 182c8395f5e1
	I0210 12:21:50.772146    5644 command_runner.go:130] > 794995bca6b5
	I0210 12:21:50.772146    5644 command_runner.go:130] > 4ccc0a4e7b5c
	I0210 12:21:50.772146    5644 command_runner.go:130] > 4439940fa5f4
	I0210 12:21:50.772146    5644 command_runner.go:130] > 148309413de8
	I0210 12:21:50.772146    5644 command_runner.go:130] > 26d9e119a02c
	I0210 12:21:50.772146    5644 command_runner.go:130] > a70f430921ec
	I0210 12:21:50.772146    5644 command_runner.go:130] > 9f1c4e9b3353
	I0210 12:21:50.772146    5644 command_runner.go:130] > adf520f9b9d7
	I0210 12:21:50.772146    5644 command_runner.go:130] > 3ae31c3c37c9
	I0210 12:21:50.772146    5644 command_runner.go:130] > 9408ce83d7d3
	I0210 12:21:50.772146    5644 command_runner.go:130] > 8c55184f16cc
	I0210 12:21:50.772146    5644 command_runner.go:130] > d33433fbce48
	I0210 12:21:50.772146    5644 command_runner.go:130] > b2de8e426f22
	I0210 12:21:50.772146    5644 command_runner.go:130] > ee16b295f58d
	I0210 12:21:50.772146    5644 docker.go:483] Stopping containers: [c5b854dbb912 182c8395f5e1 794995bca6b5 4ccc0a4e7b5c 4439940fa5f4 148309413de8 26d9e119a02c a70f430921ec 9f1c4e9b3353 adf520f9b9d7 3ae31c3c37c9 9408ce83d7d3 8c55184f16cc d33433fbce48 b2de8e426f22 ee16b295f58d]
	I0210 12:21:50.778640    5644 ssh_runner.go:195] Run: docker stop c5b854dbb912 182c8395f5e1 794995bca6b5 4ccc0a4e7b5c 4439940fa5f4 148309413de8 26d9e119a02c a70f430921ec 9f1c4e9b3353 adf520f9b9d7 3ae31c3c37c9 9408ce83d7d3 8c55184f16cc d33433fbce48 b2de8e426f22 ee16b295f58d
	I0210 12:21:50.808074    5644 command_runner.go:130] > c5b854dbb912
	I0210 12:21:50.808074    5644 command_runner.go:130] > 182c8395f5e1
	I0210 12:21:50.808074    5644 command_runner.go:130] > 794995bca6b5
	I0210 12:21:50.808074    5644 command_runner.go:130] > 4ccc0a4e7b5c
	I0210 12:21:50.808074    5644 command_runner.go:130] > 4439940fa5f4
	I0210 12:21:50.808074    5644 command_runner.go:130] > 148309413de8
	I0210 12:21:50.808074    5644 command_runner.go:130] > 26d9e119a02c
	I0210 12:21:50.808074    5644 command_runner.go:130] > a70f430921ec
	I0210 12:21:50.808074    5644 command_runner.go:130] > 9f1c4e9b3353
	I0210 12:21:50.808074    5644 command_runner.go:130] > adf520f9b9d7
	I0210 12:21:50.808074    5644 command_runner.go:130] > 3ae31c3c37c9
	I0210 12:21:50.808074    5644 command_runner.go:130] > 9408ce83d7d3
	I0210 12:21:50.808074    5644 command_runner.go:130] > 8c55184f16cc
	I0210 12:21:50.808074    5644 command_runner.go:130] > d33433fbce48
	I0210 12:21:50.808074    5644 command_runner.go:130] > b2de8e426f22
	I0210 12:21:50.808074    5644 command_runner.go:130] > ee16b295f58d
	I0210 12:21:50.817248    5644 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0210 12:21:50.853993    5644 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0210 12:21:50.872453    5644 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0210 12:21:50.872453    5644 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0210 12:21:50.872453    5644 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0210 12:21:50.872453    5644 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0210 12:21:50.872453    5644 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0210 12:21:50.872453    5644 kubeadm.go:157] found existing configuration files:
	
	I0210 12:21:50.880551    5644 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0210 12:21:50.896516    5644 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0210 12:21:50.896516    5644 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0210 12:21:50.904651    5644 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0210 12:21:50.929191    5644 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0210 12:21:50.945835    5644 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0210 12:21:50.945835    5644 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0210 12:21:50.954114    5644 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0210 12:21:50.978574    5644 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0210 12:21:50.994849    5644 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0210 12:21:50.994946    5644 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0210 12:21:51.002633    5644 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0210 12:21:51.028334    5644 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0210 12:21:51.044747    5644 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0210 12:21:51.044747    5644 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0210 12:21:51.052847    5644 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0210 12:21:51.079578    5644 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0210 12:21:51.095962    5644 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0210 12:21:51.309867    5644 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0210 12:21:51.309867    5644 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0210 12:21:51.309867    5644 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0210 12:21:51.309867    5644 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0210 12:21:51.309867    5644 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0210 12:21:51.309867    5644 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0210 12:21:51.309867    5644 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0210 12:21:51.309867    5644 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0210 12:21:51.309867    5644 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0210 12:21:51.309867    5644 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0210 12:21:51.309867    5644 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0210 12:21:51.309867    5644 command_runner.go:130] > [certs] Using the existing "sa" key
	I0210 12:21:51.309867    5644 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0210 12:21:52.401550    5644 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0210 12:21:52.401676    5644 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0210 12:21:52.401676    5644 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0210 12:21:52.401676    5644 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0210 12:21:52.401676    5644 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0210 12:21:52.401676    5644 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0210 12:21:52.401737    5644 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.0918579s)
	I0210 12:21:52.401788    5644 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0210 12:21:52.702444    5644 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0210 12:21:52.702444    5644 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0210 12:21:52.702444    5644 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0210 12:21:52.702444    5644 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0210 12:21:52.802951    5644 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0210 12:21:52.802951    5644 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0210 12:21:52.803032    5644 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0210 12:21:52.803032    5644 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0210 12:21:52.803070    5644 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0210 12:21:52.911975    5644 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
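	Note on the sequence above: because existing configuration files were found (the cluster-restart path), individual `kubeadm init phase` subcommands are re-run — certs, kubeconfig, kubelet-start, control-plane, etcd — against the regenerated config instead of a full `kubeadm init`. A sketch of that sequencing, assuming a plain exec loop (not minikube's actual kubeadm.go):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	cfg := "/var/tmp/minikube/kubeadm.yaml"
    	// Phases in the order the log runs them.
    	phases := []string{
    		"certs all",
    		"kubeconfig all",
    		"kubelet-start",
    		"control-plane all",
    		"etcd local",
    	}
    	for _, p := range phases {
    		args := append([]string{"init", "phase"}, strings.Fields(p)...)
    		args = append(args, "--config", cfg)
    		// The real invocations run under sudo with the versioned kubeadm
    		// binary from /var/lib/minikube/binaries on PATH, as shown above.
    		out, err := exec.Command("kubeadm", args...).CombinedOutput()
    		fmt.Printf("kubeadm init phase %s:\n%s", p, out)
    		if err != nil {
    			fmt.Println("phase failed:", err)
    			return
    		}
    	}
    }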
	I0210 12:21:52.911975    5644 api_server.go:52] waiting for apiserver process to appear ...
	I0210 12:21:52.921405    5644 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 12:21:53.420837    5644 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 12:21:53.919851    5644 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 12:21:54.421858    5644 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 12:21:54.921858    5644 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 12:21:54.944506    5644 command_runner.go:130] > 2008
	I0210 12:21:54.944576    5644 api_server.go:72] duration metric: took 2.032508s to wait for apiserver process to appear ...
	I0210 12:21:54.944576    5644 api_server.go:88] waiting for apiserver healthz status ...
	I0210 12:21:54.944640    5644 api_server.go:253] Checking apiserver healthz at https://172.29.129.181:8443/healthz ...
	I0210 12:21:58.084515    5644 api_server.go:279] https://172.29.129.181:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0210 12:21:58.084515    5644 api_server.go:103] status: https://172.29.129.181:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0210 12:21:58.084680    5644 api_server.go:253] Checking apiserver healthz at https://172.29.129.181:8443/healthz ...
	I0210 12:21:58.114604    5644 api_server.go:279] https://172.29.129.181:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0210 12:21:58.114604    5644 api_server.go:103] status: https://172.29.129.181:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0210 12:21:58.444742    5644 api_server.go:253] Checking apiserver healthz at https://172.29.129.181:8443/healthz ...
	I0210 12:21:58.453115    5644 api_server.go:279] https://172.29.129.181:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0210 12:21:58.453115    5644 api_server.go:103] status: https://172.29.129.181:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0210 12:21:58.945835    5644 api_server.go:253] Checking apiserver healthz at https://172.29.129.181:8443/healthz ...
	I0210 12:21:58.956371    5644 api_server.go:279] https://172.29.129.181:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0210 12:21:58.956442    5644 api_server.go:103] status: https://172.29.129.181:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0210 12:21:59.444818    5644 api_server.go:253] Checking apiserver healthz at https://172.29.129.181:8443/healthz ...
	I0210 12:21:59.457330    5644 api_server.go:279] https://172.29.129.181:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0210 12:21:59.457330    5644 api_server.go:103] status: https://172.29.129.181:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0210 12:21:59.945197    5644 api_server.go:253] Checking apiserver healthz at https://172.29.129.181:8443/healthz ...
	I0210 12:21:59.954969    5644 api_server.go:279] https://172.29.129.181:8443/healthz returned 200:
	ok
	I0210 12:21:59.955161    5644 discovery_client.go:658] "Request Body" body=""
	I0210 12:21:59.955201    5644 round_trippers.go:470] GET https://172.29.129.181:8443/version
	I0210 12:21:59.955272    5644 round_trippers.go:476] Request Headers:
	I0210 12:21:59.955300    5644 round_trippers.go:480]     Accept: application/json, */*
	I0210 12:21:59.955300    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:21:59.966409    5644 round_trippers.go:581] Response Status: 200 OK in 11 milliseconds
	I0210 12:21:59.966409    5644 round_trippers.go:584] Response Headers:
	I0210 12:21:59.966409    5644 round_trippers.go:587]     Content-Length: 263
	I0210 12:21:59.966409    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:21:59 GMT
	I0210 12:21:59.966481    5644 round_trippers.go:587]     Audit-Id: 5c48a883-3089-4412-89ce-073752a34ebe
	I0210 12:21:59.966481    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:21:59.966481    5644 round_trippers.go:587]     Content-Type: application/json
	I0210 12:21:59.966481    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:21:59.966481    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:21:59.966537    5644 discovery_client.go:658] "Response Body" body=<
		{
		  "major": "1",
		  "minor": "32",
		  "gitVersion": "v1.32.1",
		  "gitCommit": "e9c9be4007d1664e68796af02b8978640d2c1b26",
		  "gitTreeState": "clean",
		  "buildDate": "2025-01-15T14:31:55Z",
		  "goVersion": "go1.23.4",
		  "compiler": "gc",
		  "platform": "linux/amd64"
		}
	 >
	I0210 12:21:59.966537    5644 api_server.go:141] control plane version: v1.32.1
	I0210 12:21:59.966537    5644 api_server.go:131] duration metric: took 5.0219059s to wait for apiserver health ...
	I0210 12:21:59.966537    5644 cni.go:84] Creating CNI manager for ""
	I0210 12:21:59.966537    5644 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0210 12:21:59.969769    5644 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0210 12:21:59.979853    5644 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0210 12:21:59.987643    5644 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0210 12:21:59.987643    5644 command_runner.go:130] >   Size: 3103192   	Blocks: 6064       IO Block: 4096   regular file
	I0210 12:21:59.987643    5644 command_runner.go:130] > Device: 0,17	Inode: 3500        Links: 1
	I0210 12:21:59.987643    5644 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0210 12:21:59.987643    5644 command_runner.go:130] > Access: 2025-02-10 12:20:34.686796900 +0000
	I0210 12:21:59.987643    5644 command_runner.go:130] > Modify: 2025-01-14 09:03:58.000000000 +0000
	I0210 12:21:59.987643    5644 command_runner.go:130] > Change: 2025-02-10 12:20:23.050000000 +0000
	I0210 12:21:59.987643    5644 command_runner.go:130] >  Birth: -
	I0210 12:21:59.987643    5644 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.1/kubectl ...
	I0210 12:21:59.987643    5644 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0210 12:22:00.034391    5644 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0210 12:22:01.017263    5644 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0210 12:22:01.017325    5644 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0210 12:22:01.017325    5644 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0210 12:22:01.017392    5644 command_runner.go:130] > daemonset.apps/kindnet configured
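
The apply step above is literal: the rendered kindnet manifest is copied to /var/tmp/minikube/cni.yaml inside the VM, then the pinned kubectl binary is invoked against the in-VM kubeconfig. A sketch of that invocation (minikube runs it through its ssh_runner inside the guest, not locally as shown here):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Exactly the command logged above; minikube executes it over SSH
		// inside the VM rather than on the host.
		cmd := exec.Command("sudo",
			"/var/lib/minikube/binaries/v1.32.1/kubectl", "apply",
			"--kubeconfig=/var/lib/minikube/kubeconfig",
			"-f", "/var/tmp/minikube/cni.yaml")
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out)) // e.g. "daemonset.apps/kindnet configured"
		if err != nil {
			fmt.Println("apply failed:", err)
		}
	}

Because the objects already exist from the previous start, kubectl reports "unchanged" for the RBAC objects and "configured" for the DaemonSet.
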
	I0210 12:22:01.017442    5644 system_pods.go:43] waiting for kube-system pods to appear ...
	I0210 12:22:01.017797    5644 type.go:204] "Request Body" body=""
	I0210 12:22:01.017903    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods
	I0210 12:22:01.017940    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:01.017940    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:01.017940    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:01.024267    5644 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0210 12:22:01.024622    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:01.024622    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:01 GMT
	I0210 12:22:01.024622    5644 round_trippers.go:587]     Audit-Id: 7133dc72-e0e9-491b-9795-b0fef7fb64f7
	I0210 12:22:01.024622    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:01.024622    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:01.024622    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:01.024622    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:01.027230    5644 type.go:204] "Response Body" body=<
		00000000  6b 38 73 00 0a 0d 0a 02  76 31 12 07 50 6f 64 4c  |k8s.....v1..PodL|
		00000010  69 73 74 12 b6 f4 03 0a  0a 0a 00 12 04 31 38 34  |ist..........184|
		00000020  30 1a 00 12 84 29 0a 99  19 0a 18 63 6f 72 65 64  |0....).....cored|
		00000030  6e 73 2d 36 36 38 64 36  62 66 39 62 63 2d 77 38  |ns-668d6bf9bc-w8|
		00000040  72 72 39 12 13 63 6f 72  65 64 6e 73 2d 36 36 38  |rr9..coredns-668|
		00000050  64 36 62 66 39 62 63 2d  1a 0b 6b 75 62 65 2d 73  |d6bf9bc-..kube-s|
		00000060  79 73 74 65 6d 22 00 2a  24 65 34 35 61 33 37 62  |ystem".*$e45a37b|
		00000070  66 2d 65 37 64 61 2d 34  31 32 39 2d 62 62 37 65  |f-e7da-4129-bb7e|
		00000080  2d 38 62 65 37 64 62 65  39 33 65 30 39 32 04 31  |-8be7dbe93e092.1|
		00000090  38 32 30 38 00 42 08 08  92 d4 a7 bd 06 10 00 5a  |8208.B.........Z|
		000000a0  13 0a 07 6b 38 73 2d 61  70 70 12 08 6b 75 62 65  |...k8s-app..kube|
		000000b0  2d 64 6e 73 5a 1f 0a 11  70 6f 64 2d 74 65 6d 70  |-dnsZ...pod-temp|
		000000c0  6c 61 74 65 2d 68 61 73  68 12 0a 36 36 38 64 36  |late-hash..668d [truncated 315435 chars]
	 >
	I0210 12:22:01.028108    5644 system_pods.go:59] 12 kube-system pods found
	I0210 12:22:01.028175    5644 system_pods.go:61] "coredns-668d6bf9bc-w8rr9" [e45a37bf-e7da-4129-bb7e-8be7dbe93e09] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0210 12:22:01.028175    5644 system_pods.go:61] "etcd-multinode-032400" [26d4110f-9a39-48de-a433-567a75789be0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0210 12:22:01.028236    5644 system_pods.go:61] "kindnet-c2mb8" [09de881b-fbc4-4a8f-b8d7-c46dd3f010ad] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0210 12:22:01.028236    5644 system_pods.go:61] "kindnet-jcmlf" [2b9d8f00-2dd6-42d2-a26d-7ddda6acb204] Running
	I0210 12:22:01.028236    5644 system_pods.go:61] "kindnet-tv6gk" [f85e1e17-24a8-4e55-bd17-95f9ce89e3ea] Running
	I0210 12:22:01.028236    5644 system_pods.go:61] "kube-apiserver-multinode-032400" [9e688aae-09da-4b5c-ba4d-de6aa64cb34e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0210 12:22:01.028236    5644 system_pods.go:61] "kube-controller-manager-multinode-032400" [c3e50540-dd66-47a0-a433-622df59fb441] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0210 12:22:01.028288    5644 system_pods.go:61] "kube-proxy-rrh82" [9ad7f281-f022-4f3b-b206-39ce42713cf9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0210 12:22:01.028288    5644 system_pods.go:61] "kube-proxy-tbtqd" [bdf8cb10-05be-460b-a9c6-bc51ea884268] Running
	I0210 12:22:01.028288    5644 system_pods.go:61] "kube-proxy-xltxj" [9a5e58bc-54b1-43b9-a889-0d50d435af83] Running
	I0210 12:22:01.028288    5644 system_pods.go:61] "kube-scheduler-multinode-032400" [cfbcbf37-65ce-4b21-9f26-18dafc6e4480] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0210 12:22:01.028288    5644 system_pods.go:61] "storage-provisioner" [c5a7f602-d41a-4d9c-8fb3-5cf1bb41aca0] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0210 12:22:01.028288    5644 system_pods.go:74] duration metric: took 10.8464ms to wait for pod list to return data ...
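
The pod summary above comes from one plain list of the kube-system namespace, decoding each pod's phase and Ready condition from the protobuf body. A minimal client-go equivalent (the kubeconfig path here is a hypothetical placeholder; minikube uses the profile's generated kubeconfig):

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// "kubeconfig" is an assumed path for this sketch.
		cfg, err := clientcmd.BuildConfigFromFlags("", "kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Printf("%d kube-system pods found\n", len(pods.Items))
		for _, p := range pods.Items {
			ready := false
			for _, c := range p.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					ready = true
				}
			}
			// Running but not Ready yields the "ContainersNotReady" style
			// annotations printed in the log above.
			fmt.Printf("%q %s Ready=%v\n", p.Name, p.Status.Phase, ready)
		}
	}
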
	I0210 12:22:01.028354    5644 node_conditions.go:102] verifying NodePressure condition ...
	I0210 12:22:01.028463    5644 type.go:204] "Request Body" body=""
	I0210 12:22:01.028486    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes
	I0210 12:22:01.028486    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:01.028486    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:01.028486    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:01.036769    5644 round_trippers.go:581] Response Status: 200 OK in 8 milliseconds
	I0210 12:22:01.036769    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:01.036769    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:01.036769    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:01.037729    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:01.037729    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:01 GMT
	I0210 12:22:01.037729    5644 round_trippers.go:587]     Audit-Id: ece831f4-f081-4eef-9546-3a08239bba6c
	I0210 12:22:01.037729    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:01.037729    5644 type.go:204] "Response Body" body=<
		00000000  6b 38 73 00 0a 0e 0a 02  76 31 12 08 4e 6f 64 65  |k8s.....v1..Node|
		00000010  4c 69 73 74 12 ef 5e 0a  0a 0a 00 12 04 31 38 34  |List..^......184|
		00000020  30 1a 00 12 d5 25 0a f8  11 0a 10 6d 75 6c 74 69  |0....%.....multi|
		00000030  6e 6f 64 65 2d 30 33 32  34 30 30 12 00 1a 00 22  |node-032400...."|
		00000040  00 2a 24 61 30 38 30 31  35 65 66 2d 65 35 32 30  |.*$a08015ef-e520|
		00000050  2d 34 31 63 62 2d 61 65  61 30 2d 31 64 39 63 38  |-41cb-aea0-1d9c8|
		00000060  31 65 30 31 62 32 36 32  04 31 37 36 32 38 00 42  |1e01b262.17628.B|
		00000070  08 08 86 d4 a7 bd 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000080  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000090  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		000000a0  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000b0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000c0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/ar [truncated 59089 chars]
	 >
	I0210 12:22:01.037729    5644 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0210 12:22:01.038747    5644 node_conditions.go:123] node cpu capacity is 2
	I0210 12:22:01.038808    5644 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0210 12:22:01.038808    5644 node_conditions.go:123] node cpu capacity is 2
	I0210 12:22:01.038865    5644 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0210 12:22:01.038865    5644 node_conditions.go:123] node cpu capacity is 2
	I0210 12:22:01.038865    5644 node_conditions.go:105] duration metric: took 10.5108ms to run NodePressure ...
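
The NodePressure pass reads each node's capacity out of the same /api/v1/nodes response and would fail fast if memory or disk pressure were already set. Sketched as a helper with client-go accessors (a library-style sketch meant to be called with an already-configured clientset, such as the one built in the previous snippet):

	package nodecheck

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// verifyNodePressure mirrors the node_conditions.go pass above: print
	// each node's ephemeral-storage and CPU capacity, and report any node
	// already under memory or disk pressure.
	func verifyNodePressure(ctx context.Context, cs *kubernetes.Clientset) error {
		nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
		if err != nil {
			return err
		}
		for _, n := range nodes.Items {
			fmt.Printf("node storage ephemeral capacity is %s\n", n.Status.Capacity.StorageEphemeral().String())
			fmt.Printf("node cpu capacity is %s\n", n.Status.Capacity.Cpu().String())
			for _, c := range n.Status.Conditions {
				if (c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure) && c.Status == corev1.ConditionTrue {
					return fmt.Errorf("node %s is under %s", n.Name, c.Type)
				}
			}
		}
		return nil
	}

The three capacity pairs logged above are this loop over the three nodes of the multinode-032400 cluster.
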
	I0210 12:22:01.038988    5644 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0210 12:22:01.389598    5644 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0210 12:22:01.617942    5644 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0210 12:22:01.620378    5644 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0210 12:22:01.620540    5644 type.go:204] "Request Body" body=""
	I0210 12:22:01.620626    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I0210 12:22:01.620626    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:01.620693    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:01.620693    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:01.624944    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:22:01.625845    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:01.625845    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:01 GMT
	I0210 12:22:01.625845    5644 round_trippers.go:587]     Audit-Id: 22603f78-b146-4e0b-a6d4-76c5eaf1493b
	I0210 12:22:01.625845    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:01.625845    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:01.625845    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:01.625845    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:01.627232    5644 type.go:204] "Response Body" body=<
		00000000  6b 38 73 00 0a 0d 0a 02  76 31 12 07 50 6f 64 4c  |k8s.....v1..PodL|
		00000010  69 73 74 12 8f bd 01 0a  0a 0a 00 12 04 31 38 35  |ist..........185|
		00000020  35 1a 00 12 ad 2d 0a d9  1a 0a 15 65 74 63 64 2d  |5....-.....etcd-|
		00000030  6d 75 6c 74 69 6e 6f 64  65 2d 30 33 32 34 30 30  |multinode-032400|
		00000040  12 00 1a 0b 6b 75 62 65  2d 73 79 73 74 65 6d 22  |....kube-system"|
		00000050  00 2a 24 32 36 64 34 31  31 30 66 2d 39 61 33 39  |.*$26d4110f-9a39|
		00000060  2d 34 38 64 65 2d 61 34  33 33 2d 35 36 37 61 37  |-48de-a433-567a7|
		00000070  35 37 38 39 62 65 30 32  04 31 38 31 32 38 00 42  |5789be02.18128.B|
		00000080  08 08 e6 de a7 bd 06 10  00 5a 11 0a 09 63 6f 6d  |.........Z...com|
		00000090  70 6f 6e 65 6e 74 12 04  65 74 63 64 5a 15 0a 04  |ponent..etcdZ...|
		000000a0  74 69 65 72 12 0d 63 6f  6e 74 72 6f 6c 2d 70 6c  |tier..control-pl|
		000000b0  61 6e 65 62 4f 0a 30 6b  75 62 65 61 64 6d 2e 6b  |anebO.0kubeadm.k|
		000000c0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 65 74 63  |ubernetes.io/et [truncated 118655 chars]
	 >
	I0210 12:22:01.627944    5644 kubeadm.go:739] kubelet initialised
	I0210 12:22:01.627989    5644 kubeadm.go:740] duration metric: took 7.6112ms waiting for restarted kubelet to initialise ...
	I0210 12:22:01.628044    5644 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0210 12:22:01.628201    5644 type.go:204] "Request Body" body=""
	I0210 12:22:01.628312    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods
	I0210 12:22:01.628368    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:01.628368    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:01.628434    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:01.633500    5644 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 12:22:01.633500    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:01.633500    5644 round_trippers.go:587]     Audit-Id: 6761eb08-f31d-401e-9c48-495dbcaa8f15
	I0210 12:22:01.633500    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:01.633571    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:01.633571    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:01.633571    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:01.633571    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:01 GMT
	I0210 12:22:01.636591    5644 type.go:204] "Response Body" body=<
		00000000  6b 38 73 00 0a 0d 0a 02  76 31 12 07 50 6f 64 4c  |k8s.....v1..PodL|
		00000010  69 73 74 12 9b f0 03 0a  0a 0a 00 12 04 31 38 35  |ist..........185|
		00000020  35 1a 00 12 84 29 0a 99  19 0a 18 63 6f 72 65 64  |5....).....cored|
		00000030  6e 73 2d 36 36 38 64 36  62 66 39 62 63 2d 77 38  |ns-668d6bf9bc-w8|
		00000040  72 72 39 12 13 63 6f 72  65 64 6e 73 2d 36 36 38  |rr9..coredns-668|
		00000050  64 36 62 66 39 62 63 2d  1a 0b 6b 75 62 65 2d 73  |d6bf9bc-..kube-s|
		00000060  79 73 74 65 6d 22 00 2a  24 65 34 35 61 33 37 62  |ystem".*$e45a37b|
		00000070  66 2d 65 37 64 61 2d 34  31 32 39 2d 62 62 37 65  |f-e7da-4129-bb7e|
		00000080  2d 38 62 65 37 64 62 65  39 33 65 30 39 32 04 31  |-8be7dbe93e092.1|
		00000090  38 32 30 38 00 42 08 08  92 d4 a7 bd 06 10 00 5a  |8208.B.........Z|
		000000a0  13 0a 07 6b 38 73 2d 61  70 70 12 08 6b 75 62 65  |...k8s-app..kube|
		000000b0  2d 64 6e 73 5a 1f 0a 11  70 6f 64 2d 74 65 6d 70  |-dnsZ...pod-temp|
		000000c0  6c 61 74 65 2d 68 61 73  68 12 0a 36 36 38 64 36  |late-hash..668d [truncated 312754 chars]
	 >
	I0210 12:22:01.637286    5644 pod_ready.go:79] waiting up to 4m0s for pod "coredns-668d6bf9bc-w8rr9" in "kube-system" namespace to be "Ready" ...
	I0210 12:22:01.637341    5644 type.go:168] "Request Body" body=""
	I0210 12:22:01.637425    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-w8rr9
	I0210 12:22:01.637492    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:01.637492    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:01.637492    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:01.652412    5644 round_trippers.go:581] Response Status: 200 OK in 14 milliseconds
	I0210 12:22:01.652668    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:01.652668    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:01.652668    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:01.652668    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:01.652668    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:01.652668    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:01 GMT
	I0210 12:22:01.652668    5644 round_trippers.go:587]     Audit-Id: e43f9258-de17-4748-938c-8ffd3f3efad2
	I0210 12:22:01.653109    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  84 29 0a 99 19 0a 18 63  6f 72 65 64 6e 73 2d 36  |.).....coredns-6|
		00000020  36 38 64 36 62 66 39 62  63 2d 77 38 72 72 39 12  |68d6bf9bc-w8rr9.|
		00000030  13 63 6f 72 65 64 6e 73  2d 36 36 38 64 36 62 66  |.coredns-668d6bf|
		00000040  39 62 63 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |9bc-..kube-syste|
		00000050  6d 22 00 2a 24 65 34 35  61 33 37 62 66 2d 65 37  |m".*$e45a37bf-e7|
		00000060  64 61 2d 34 31 32 39 2d  62 62 37 65 2d 38 62 65  |da-4129-bb7e-8be|
		00000070  37 64 62 65 39 33 65 30  39 32 04 31 38 32 30 38  |7dbe93e092.18208|
		00000080  00 42 08 08 92 d4 a7 bd  06 10 00 5a 13 0a 07 6b  |.B.........Z...k|
		00000090  38 73 2d 61 70 70 12 08  6b 75 62 65 2d 64 6e 73  |8s-app..kube-dns|
		000000a0  5a 1f 0a 11 70 6f 64 2d  74 65 6d 70 6c 61 74 65  |Z...pod-template|
		000000b0  2d 68 61 73 68 12 0a 36  36 38 64 36 62 66 39 62  |-hash..668d6bf9b|
		000000c0  63 6a 53 0a 0a 52 65 70  6c 69 63 61 53 65 74 1a  |cjS..ReplicaSet [truncated 25040 chars]
	 >
	I0210 12:22:01.653348    5644 type.go:168] "Request Body" body=""
	I0210 12:22:01.653455    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:01.653455    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:01.653455    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:01.653535    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:01.656691    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:22:01.656691    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:01.656903    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:01.656903    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:01.656903    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:01.656903    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:01.656903    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:01 GMT
	I0210 12:22:01.656903    5644 round_trippers.go:587]     Audit-Id: 9b6271b6-10db-4ece-9ab5-5f2cb391cf62
	I0210 12:22:01.657228    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 9b 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  34 38 38 00 42 08 08 86  |1b262.18488.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23294 chars]
	 >
	I0210 12:22:01.657380    5644 pod_ready.go:98] node "multinode-032400" hosting pod "coredns-668d6bf9bc-w8rr9" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-032400" has status "Ready":"False"
	I0210 12:22:01.657463    5644 pod_ready.go:82] duration metric: took 20.1221ms for pod "coredns-668d6bf9bc-w8rr9" in "kube-system" namespace to be "Ready" ...
	E0210 12:22:01.657463    5644 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-032400" hosting pod "coredns-668d6bf9bc-w8rr9" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-032400" has status "Ready":"False"
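
Each of these waits follows the same two-request pattern: fetch the pod, then fetch the node named in its spec, and skip the wait with an error when the node itself is not Ready, since no pod on a NotReady node can become Ready within the budget. A condensed sketch of that gate (the function name and structure are invented for illustration; the real logic lives in pod_ready.go):

	package podready

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// nodeGate returns an error when the node hosting the pod is not Ready,
	// reproducing the "(skipping!)" messages in the log above.
	func nodeGate(ctx context.Context, cs *kubernetes.Clientset, ns, pod string) error {
		p, err := cs.CoreV1().Pods(ns).Get(ctx, pod, metav1.GetOptions{})
		if err != nil {
			return err
		}
		n, err := cs.CoreV1().Nodes().Get(ctx, p.Spec.NodeName, metav1.GetOptions{})
		if err != nil {
			return err
		}
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
				return nil // node is Ready: keep waiting on the pod itself
			}
		}
		return fmt.Errorf("node %q hosting pod %q in %q namespace is currently not \"Ready\" (skipping!)", n.Name, p.Name, ns)
	}

The same gate fires for every control-plane pod below, because multinode-032400 is still reporting Ready:"False" after the restart.
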
	I0210 12:22:01.657463    5644 pod_ready.go:79] waiting up to 4m0s for pod "etcd-multinode-032400" in "kube-system" namespace to be "Ready" ...
	I0210 12:22:01.657612    5644 type.go:168] "Request Body" body=""
	I0210 12:22:01.657636    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-032400
	I0210 12:22:01.657692    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:01.657692    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:01.657736    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:01.659797    5644 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0210 12:22:01.659797    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:01.659797    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:01.659797    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:01.659797    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:01 GMT
	I0210 12:22:01.659797    5644 round_trippers.go:587]     Audit-Id: 5e3fa176-fac7-4476-9419-576207977b28
	I0210 12:22:01.659797    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:01.660107    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:01.660458    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  ad 2d 0a d9 1a 0a 15 65  74 63 64 2d 6d 75 6c 74  |.-.....etcd-mult|
		00000020  69 6e 6f 64 65 2d 30 33  32 34 30 30 12 00 1a 0b  |inode-032400....|
		00000030  6b 75 62 65 2d 73 79 73  74 65 6d 22 00 2a 24 32  |kube-system".*$2|
		00000040  36 64 34 31 31 30 66 2d  39 61 33 39 2d 34 38 64  |6d4110f-9a39-48d|
		00000050  65 2d 61 34 33 33 2d 35  36 37 61 37 35 37 38 39  |e-a433-567a75789|
		00000060  62 65 30 32 04 31 38 31  32 38 00 42 08 08 e6 de  |be02.18128.B....|
		00000070  a7 bd 06 10 00 5a 11 0a  09 63 6f 6d 70 6f 6e 65  |.....Z...compone|
		00000080  6e 74 12 04 65 74 63 64  5a 15 0a 04 74 69 65 72  |nt..etcdZ...tier|
		00000090  12 0d 63 6f 6e 74 72 6f  6c 2d 70 6c 61 6e 65 62  |..control-planeb|
		000000a0  4f 0a 30 6b 75 62 65 61  64 6d 2e 6b 75 62 65 72  |O.0kubeadm.kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 65 74 63 64 2e 61 64  |netes.io/etcd.ad|
		000000c0  76 65 72 74 69 73 65 2d  63 6c 69 65 6e 74 2d 75  |vertise-client- [truncated 27798 chars]
	 >
	I0210 12:22:01.660713    5644 type.go:168] "Request Body" body=""
	I0210 12:22:01.660796    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:01.660830    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:01.660853    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:01.660853    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:01.670795    5644 round_trippers.go:581] Response Status: 200 OK in 9 milliseconds
	I0210 12:22:01.670795    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:01.670795    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:01.670795    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:01 GMT
	I0210 12:22:01.670795    5644 round_trippers.go:587]     Audit-Id: 009b6a4e-d9c8-49dc-b5a1-a959b5ef507a
	I0210 12:22:01.670795    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:01.670795    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:01.670795    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:01.671805    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 9b 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  34 38 38 00 42 08 08 86  |1b262.18488.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23294 chars]
	 >
	I0210 12:22:01.671805    5644 pod_ready.go:98] node "multinode-032400" hosting pod "etcd-multinode-032400" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-032400" has status "Ready":"False"
	I0210 12:22:01.671805    5644 pod_ready.go:82] duration metric: took 14.3418ms for pod "etcd-multinode-032400" in "kube-system" namespace to be "Ready" ...
	E0210 12:22:01.671805    5644 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-032400" hosting pod "etcd-multinode-032400" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-032400" has status "Ready":"False"
	I0210 12:22:01.671805    5644 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-multinode-032400" in "kube-system" namespace to be "Ready" ...
	I0210 12:22:01.671805    5644 type.go:168] "Request Body" body=""
	I0210 12:22:01.671805    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-032400
	I0210 12:22:01.671805    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:01.671805    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:01.671805    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:01.675741    5644 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0210 12:22:01.675767    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:01.675834    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:01 GMT
	I0210 12:22:01.675834    5644 round_trippers.go:587]     Audit-Id: d69a5666-1be6-4da0-b8d4-2b6601449db8
	I0210 12:22:01.675834    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:01.675834    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:01.675834    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:01.675834    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:01.676377    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  ef 36 0a e9 1c 0a 1f 6b  75 62 65 2d 61 70 69 73  |.6.....kube-apis|
		00000020  65 72 76 65 72 2d 6d 75  6c 74 69 6e 6f 64 65 2d  |erver-multinode-|
		00000030  30 33 32 34 30 30 12 00  1a 0b 6b 75 62 65 2d 73  |032400....kube-s|
		00000040  79 73 74 65 6d 22 00 2a  24 39 65 36 38 38 61 61  |ystem".*$9e688aa|
		00000050  65 2d 30 39 64 61 2d 34  62 35 63 2d 62 61 34 64  |e-09da-4b5c-ba4d|
		00000060  2d 64 65 36 61 61 36 34  63 62 33 34 65 32 04 31  |-de6aa64cb34e2.1|
		00000070  38 31 31 38 00 42 08 08  e6 de a7 bd 06 10 00 5a  |8118.B.........Z|
		00000080  1b 0a 09 63 6f 6d 70 6f  6e 65 6e 74 12 0e 6b 75  |...component..ku|
		00000090  62 65 2d 61 70 69 73 65  72 76 65 72 5a 15 0a 04  |be-apiserverZ...|
		000000a0  74 69 65 72 12 0d 63 6f  6e 74 72 6f 6c 2d 70 6c  |tier..control-pl|
		000000b0  61 6e 65 62 56 0a 3f 6b  75 62 65 61 64 6d 2e 6b  |anebV.?kubeadm.k|
		000000c0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 6b 75 62  |ubernetes.io/ku [truncated 33804 chars]
	 >
	I0210 12:22:01.676594    5644 type.go:168] "Request Body" body=""
	I0210 12:22:01.676653    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:01.676653    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:01.676653    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:01.676727    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:01.682786    5644 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0210 12:22:01.682786    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:01.682786    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:01.683357    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:01.683357    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:01.683357    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:01.683357    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:01 GMT
	I0210 12:22:01.683357    5644 round_trippers.go:587]     Audit-Id: 85f72193-cbb6-4afe-86d7-b962187756a3
	I0210 12:22:01.683660    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 9b 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  34 38 38 00 42 08 08 86  |1b262.18488.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23294 chars]
	 >
	I0210 12:22:01.683821    5644 pod_ready.go:98] node "multinode-032400" hosting pod "kube-apiserver-multinode-032400" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-032400" has status "Ready":"False"
	I0210 12:22:01.683821    5644 pod_ready.go:82] duration metric: took 12.0158ms for pod "kube-apiserver-multinode-032400" in "kube-system" namespace to be "Ready" ...
	E0210 12:22:01.683891    5644 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-032400" hosting pod "kube-apiserver-multinode-032400" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-032400" has status "Ready":"False"
	I0210 12:22:01.683891    5644 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-multinode-032400" in "kube-system" namespace to be "Ready" ...
	I0210 12:22:01.683951    5644 type.go:168] "Request Body" body=""
	I0210 12:22:01.684014    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-032400
	I0210 12:22:01.684014    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:01.684014    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:01.684014    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:01.687813    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:22:01.687813    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:01.687813    5644 round_trippers.go:587]     Audit-Id: d05685b2-8ffe-48d1-9037-dae32ff2a9a1
	I0210 12:22:01.687900    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:01.687900    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:01.687900    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:01.687900    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:01.687900    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:01 GMT
	I0210 12:22:01.688679    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  b1 33 0a d5 1d 0a 28 6b  75 62 65 2d 63 6f 6e 74  |.3....(kube-cont|
		00000020  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 2d 6d  |roller-manager-m|
		00000030  75 6c 74 69 6e 6f 64 65  2d 30 33 32 34 30 30 12  |ultinode-032400.|
		00000040  00 1a 0b 6b 75 62 65 2d  73 79 73 74 65 6d 22 00  |...kube-system".|
		00000050  2a 24 63 33 65 35 30 35  34 30 2d 64 64 36 36 2d  |*$c3e50540-dd66-|
		00000060  34 37 61 30 2d 61 34 33  33 2d 36 32 32 64 66 35  |47a0-a433-622df5|
		00000070  39 66 62 34 34 31 32 04  31 38 31 30 38 00 42 08  |9fb4412.18108.B.|
		00000080  08 8b d4 a7 bd 06 10 00  5a 24 0a 09 63 6f 6d 70  |........Z$..comp|
		00000090  6f 6e 65 6e 74 12 17 6b  75 62 65 2d 63 6f 6e 74  |onent..kube-cont|
		000000a0  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 5a 15  |roller-managerZ.|
		000000b0  0a 04 74 69 65 72 12 0d  63 6f 6e 74 72 6f 6c 2d  |..tier..control-|
		000000c0  70 6c 61 6e 65 62 3d 0a  19 6b 75 62 65 72 6e 65  |planeb=..kubern [truncated 31594 chars]
	 >
	I0210 12:22:01.688891    5644 type.go:168] "Request Body" body=""
	I0210 12:22:01.688973    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:01.689091    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:01.689091    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:01.689091    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:01.691800    5644 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0210 12:22:01.691800    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:01.691800    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:01.691800    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:01.691800    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:01.691800    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:01.691800    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:01 GMT
	I0210 12:22:01.691800    5644 round_trippers.go:587]     Audit-Id: a139c2a2-e3b7-4bb1-95de-6c711636e46e
	I0210 12:22:01.691800    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 9b 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  34 38 38 00 42 08 08 86  |1b262.18488.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23294 chars]
	 >
	I0210 12:22:01.691800    5644 pod_ready.go:98] node "multinode-032400" hosting pod "kube-controller-manager-multinode-032400" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-032400" has status "Ready":"False"
	I0210 12:22:01.691800    5644 pod_ready.go:82] duration metric: took 7.9091ms for pod "kube-controller-manager-multinode-032400" in "kube-system" namespace to be "Ready" ...
	E0210 12:22:01.691800    5644 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-032400" hosting pod "kube-controller-manager-multinode-032400" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-032400" has status "Ready":"False"
	I0210 12:22:01.691800    5644 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-rrh82" in "kube-system" namespace to be "Ready" ...
	I0210 12:22:01.691800    5644 type.go:168] "Request Body" body=""
	I0210 12:22:01.822378    5644 request.go:661] Waited for 130.5768ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rrh82
	I0210 12:22:01.822378    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rrh82
	I0210 12:22:01.822378    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:01.822378    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:01.822378    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:01.826767    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:22:01.826854    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:01.826854    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:01.826854    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:01 GMT
	I0210 12:22:01.826854    5644 round_trippers.go:587]     Audit-Id: d972e39d-57f5-40ea-97e3-bd8d24011f3f
	I0210 12:22:01.826854    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:01.826854    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:01.826854    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:01.827328    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  91 26 0a c1 14 0a 10 6b  75 62 65 2d 70 72 6f 78  |.&.....kube-prox|
		00000020  79 2d 72 72 68 38 32 12  0b 6b 75 62 65 2d 70 72  |y-rrh82..kube-pr|
		00000030  6f 78 79 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |oxy-..kube-syste|
		00000040  6d 22 00 2a 24 39 61 64  37 66 32 38 31 2d 66 30  |m".*$9ad7f281-f0|
		00000050  32 32 2d 34 66 33 62 2d  62 32 30 36 2d 33 39 63  |22-4f3b-b206-39c|
		00000060  65 34 32 37 31 33 63 66  39 32 04 31 38 34 34 38  |e42713cf92.18448|
		00000070  00 42 08 08 92 d4 a7 bd  06 10 00 5a 26 0a 18 63  |.B.........Z&..c|
		00000080  6f 6e 74 72 6f 6c 6c 65  72 2d 72 65 76 69 73 69  |ontroller-revisi|
		00000090  6f 6e 2d 68 61 73 68 12  0a 35 36 36 64 37 62 39  |on-hash..566d7b9|
		000000a0  66 38 35 5a 15 0a 07 6b  38 73 2d 61 70 70 12 0a  |f85Z...k8s-app..|
		000000b0  6b 75 62 65 2d 70 72 6f  78 79 5a 1c 0a 17 70 6f  |kube-proxyZ...po|
		000000c0  64 2d 74 65 6d 70 6c 61  74 65 2d 67 65 6e 65 72  |d-template-gene [truncated 23220 chars]
	 >
	I0210 12:22:01.827506    5644 type.go:168] "Request Body" body=""
	I0210 12:22:02.021410    5644 request.go:661] Waited for 193.9028ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:02.021890    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:02.021935    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:02.022008    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:02.022008    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:02.024792    5644 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0210 12:22:02.025684    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:02.025684    5644 round_trippers.go:587]     Audit-Id: 5547391c-e589-4649-b88c-fe1cd1ade140
	I0210 12:22:02.025684    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:02.025684    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:02.025684    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:02.025684    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:02.025684    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:02 GMT
	I0210 12:22:02.025988    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 9b 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  34 38 38 00 42 08 08 86  |1b262.18488.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23294 chars]
	 >
	I0210 12:22:02.026123    5644 pod_ready.go:98] node "multinode-032400" hosting pod "kube-proxy-rrh82" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-032400" has status "Ready":"False"
	I0210 12:22:02.026189    5644 pod_ready.go:82] duration metric: took 334.3852ms for pod "kube-proxy-rrh82" in "kube-system" namespace to be "Ready" ...
	E0210 12:22:02.026189    5644 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-032400" hosting pod "kube-proxy-rrh82" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-032400" has status "Ready":"False"
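
The "Waited for ... due to client-side throttling, not priority and fairness" lines are not server-side flow control: they come from client-go's token-bucket rate limiter (the QPS/Burst fields on rest.Config), which starts delaying requests once the burst budget is spent by the rapid pod/node GETs above. A small self-contained demonstration of that limiter:

	package main

	import (
		"context"
		"fmt"
		"time"

		"k8s.io/client-go/util/flowcontrol"
	)

	func main() {
		// client-go's default REST budget is in this ballpark (QPS=5,
		// Burst=10); the exact values are configured on rest.Config.
		limiter := flowcontrol.NewTokenBucketRateLimiter(5, 10)
		for i := 0; i < 15; i++ {
			start := time.Now()
			_ = limiter.Wait(context.Background())
			if d := time.Since(start); d > time.Millisecond {
				fmt.Printf("request %d waited %s due to client-side throttling\n", i, d)
			}
		}
	}

After the burst is exhausted, each further call blocks for roughly 1/QPS seconds, which is why the waits in the log settle around the 130-200ms range.
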
	I0210 12:22:02.026189    5644 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-tbtqd" in "kube-system" namespace to be "Ready" ...
	I0210 12:22:02.026301    5644 type.go:168] "Request Body" body=""
	I0210 12:22:02.220942    5644 request.go:661] Waited for 194.639ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tbtqd
	I0210 12:22:02.221216    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tbtqd
	I0210 12:22:02.221216    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:02.221216    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:02.221216    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:02.224751    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:22:02.224751    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:02.224751    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:02 GMT
	I0210 12:22:02.224751    5644 round_trippers.go:587]     Audit-Id: a430131e-7290-46a3-8378-1874e4ed1dd4
	I0210 12:22:02.224751    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:02.224751    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:02.224751    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:02.224751    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:02.225476    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  aa 26 0a c3 15 0a 10 6b  75 62 65 2d 70 72 6f 78  |.&.....kube-prox|
		00000020  79 2d 74 62 74 71 64 12  0b 6b 75 62 65 2d 70 72  |y-tbtqd..kube-pr|
		00000030  6f 78 79 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |oxy-..kube-syste|
		00000040  6d 22 00 2a 24 62 64 66  38 63 62 31 30 2d 30 35  |m".*$bdf8cb10-05|
		00000050  62 65 2d 34 36 30 62 2d  61 39 63 36 2d 62 63 35  |be-460b-a9c6-bc5|
		00000060  31 65 61 38 38 34 32 36  38 32 04 31 37 34 32 38  |1ea8842682.17428|
		00000070  00 42 08 08 e9 d7 a7 bd  06 10 00 5a 26 0a 18 63  |.B.........Z&..c|
		00000080  6f 6e 74 72 6f 6c 6c 65  72 2d 72 65 76 69 73 69  |ontroller-revisi|
		00000090  6f 6e 2d 68 61 73 68 12  0a 35 36 36 64 37 62 39  |on-hash..566d7b9|
		000000a0  66 38 35 5a 15 0a 07 6b  38 73 2d 61 70 70 12 0a  |f85Z...k8s-app..|
		000000b0  6b 75 62 65 2d 70 72 6f  78 79 5a 1c 0a 17 70 6f  |kube-proxyZ...po|
		000000c0  64 2d 74 65 6d 70 6c 61  74 65 2d 67 65 6e 65 72  |d-template-gene [truncated 23308 chars]
	 >
	I0210 12:22:02.225681    5644 type.go:168] "Request Body" body=""
	I0210 12:22:02.421572    5644 request.go:661] Waited for 195.8889ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.129.181:8443/api/v1/nodes/multinode-032400-m03
	I0210 12:22:02.422063    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400-m03
	I0210 12:22:02.422063    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:02.422063    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:02.422063    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:02.426120    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:22:02.426120    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:02.426120    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:02.426120    5644 round_trippers.go:587]     Content-Length: 3883
	I0210 12:22:02.426120    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:02 GMT
	I0210 12:22:02.426120    5644 round_trippers.go:587]     Audit-Id: 63310cb4-e21f-4bf0-b4fa-a4436afd2f79
	I0210 12:22:02.426297    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:02.426297    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:02.426297    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:02.426571    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 94 1e 0a eb 12 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 33 12 00 1a 00  |e-032400-m03....|
		00000030  22 00 2a 24 65 33 35 38  36 61 30 65 2d 35 36 63  |".*$e3586a0e-56c|
		00000040  30 2d 34 65 34 39 2d 39  64 64 33 2d 38 33 65 35  |0-4e49-9dd3-83e5|
		00000050  32 39 63 66 65 35 63 34  32 04 31 38 35 34 38 00  |29cfe5c42.18548.|
		00000060  42 08 08 db dc a7 bd 06  10 00 5a 20 0a 17 62 65  |B.........Z ..be|
		00000070  74 61 2e 6b 75 62 65 72  6e 65 74 65 73 2e 69 6f  |ta.kubernetes.io|
		00000080  2f 61 72 63 68 12 05 61  6d 64 36 34 5a 1e 0a 15  |/arch..amd64Z...|
		00000090  62 65 74 61 2e 6b 75 62  65 72 6e 65 74 65 73 2e  |beta.kubernetes.|
		000000a0  69 6f 2f 6f 73 12 05 6c  69 6e 75 78 5a 1b 0a 12  |io/os..linuxZ...|
		000000b0  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 61 72  |kubernetes.io/ar|
		000000c0  63 68 12 05 61 6d 64 36  34 5a 2e 0a 16 6b 75 62  |ch..amd64Z...ku [truncated 18168 chars]
	 >
	I0210 12:22:02.426865    5644 pod_ready.go:98] node "multinode-032400-m03" hosting pod "kube-proxy-tbtqd" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-032400-m03" has status "Ready":"Unknown"
	I0210 12:22:02.426865    5644 pod_ready.go:82] duration metric: took 400.6718ms for pod "kube-proxy-tbtqd" in "kube-system" namespace to be "Ready" ...
	E0210 12:22:02.426865    5644 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-032400-m03" hosting pod "kube-proxy-tbtqd" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-032400-m03" has status "Ready":"Unknown"
	I0210 12:22:02.426865    5644 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-xltxj" in "kube-system" namespace to be "Ready" ...
	I0210 12:22:02.426965    5644 type.go:168] "Request Body" body=""
	I0210 12:22:02.621158    5644 request.go:661] Waited for 194.0835ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xltxj
	I0210 12:22:02.621158    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xltxj
	I0210 12:22:02.621158    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:02.621158    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:02.621158    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:02.625783    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:22:02.625839    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:02.625839    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:02 GMT
	I0210 12:22:02.625839    5644 round_trippers.go:587]     Audit-Id: ce403f58-8e8a-4d9b-ab86-b591bcbaefc2
	I0210 12:22:02.625839    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:02.625909    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:02.625909    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:02.625909    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:02.626481    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  a5 25 0a bf 14 0a 10 6b  75 62 65 2d 70 72 6f 78  |.%.....kube-prox|
		00000020  79 2d 78 6c 74 78 6a 12  0b 6b 75 62 65 2d 70 72  |y-xltxj..kube-pr|
		00000030  6f 78 79 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |oxy-..kube-syste|
		00000040  6d 22 00 2a 24 39 61 35  65 35 38 62 63 2d 35 34  |m".*$9a5e58bc-54|
		00000050  62 31 2d 34 33 62 39 2d  61 38 38 39 2d 30 64 35  |b1-43b9-a889-0d5|
		00000060  30 64 34 33 35 61 66 38  33 32 03 36 33 35 38 00  |0d435af832.6358.|
		00000070  42 08 08 d0 d5 a7 bd 06  10 00 5a 26 0a 18 63 6f  |B.........Z&..co|
		00000080  6e 74 72 6f 6c 6c 65 72  2d 72 65 76 69 73 69 6f  |ntroller-revisio|
		00000090  6e 2d 68 61 73 68 12 0a  35 36 36 64 37 62 39 66  |n-hash..566d7b9f|
		000000a0  38 35 5a 15 0a 07 6b 38  73 2d 61 70 70 12 0a 6b  |85Z...k8s-app..k|
		000000b0  75 62 65 2d 70 72 6f 78  79 5a 1c 0a 17 70 6f 64  |ube-proxyZ...pod|
		000000c0  2d 74 65 6d 70 6c 61 74  65 2d 67 65 6e 65 72 61  |-template-gener [truncated 22671 chars]
	 >
	I0210 12:22:02.626591    5644 type.go:168] "Request Body" body=""
	I0210 12:22:02.820926    5644 request.go:661] Waited for 194.3329ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.129.181:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:22:02.821244    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:22:02.821244    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:02.821244    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:02.821244    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:02.827886    5644 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0210 12:22:02.827886    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:02.827886    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:02.827886    5644 round_trippers.go:587]     Content-Length: 3464
	I0210 12:22:02.827886    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:02 GMT
	I0210 12:22:02.827886    5644 round_trippers.go:587]     Audit-Id: 926d335f-6a41-4edc-a75f-e935e0330864
	I0210 12:22:02.827886    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:02.827886    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:02.827886    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:02.827886    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 f1 1a 0a b0 0f 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 62 30 35 36  31 63 32 32 2d 64 62 66  |".*$b0561c22-dbf|
		00000040  32 2d 34 32 61 30 2d 62  64 66 33 2d 34 65 30 61  |2-42a0-bdf3-4e0a|
		00000050  62 37 61 39 61 66 30 65  32 04 31 36 38 32 38 00  |b7a9af0e2.16828.|
		00000060  42 08 08 d0 d5 a7 bd 06  10 00 5a 20 0a 17 62 65  |B.........Z ..be|
		00000070  74 61 2e 6b 75 62 65 72  6e 65 74 65 73 2e 69 6f  |ta.kubernetes.io|
		00000080  2f 61 72 63 68 12 05 61  6d 64 36 34 5a 1e 0a 15  |/arch..amd64Z...|
		00000090  62 65 74 61 2e 6b 75 62  65 72 6e 65 74 65 73 2e  |beta.kubernetes.|
		000000a0  69 6f 2f 6f 73 12 05 6c  69 6e 75 78 5a 1b 0a 12  |io/os..linuxZ...|
		000000b0  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 61 72  |kubernetes.io/ar|
		000000c0  63 68 12 05 61 6d 64 36  34 5a 2e 0a 16 6b 75 62  |ch..amd64Z...ku [truncated 16111 chars]
	 >
	I0210 12:22:02.827886    5644 pod_ready.go:93] pod "kube-proxy-xltxj" in "kube-system" namespace has status "Ready":"True"
	I0210 12:22:02.827886    5644 pod_ready.go:82] duration metric: took 400.9163ms for pod "kube-proxy-xltxj" in "kube-system" namespace to be "Ready" ...
	I0210 12:22:02.827886    5644 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-multinode-032400" in "kube-system" namespace to be "Ready" ...
	I0210 12:22:02.827886    5644 type.go:168] "Request Body" body=""
	I0210 12:22:03.022004    5644 request.go:661] Waited for 194.1156ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-032400
	I0210 12:22:03.022004    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-032400
	I0210 12:22:03.022004    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:03.022004    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:03.022004    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:03.027621    5644 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 12:22:03.027692    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:03.027692    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:03.027692    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:03.027692    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:03 GMT
	I0210 12:22:03.027692    5644 round_trippers.go:587]     Audit-Id: c65ecd29-6e7c-4893-8023-1a64bae0b0dc
	I0210 12:22:03.027692    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:03.027692    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:03.029437    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  aa 25 0a bd 18 0a 1f 6b  75 62 65 2d 73 63 68 65  |.%.....kube-sche|
		00000020  64 75 6c 65 72 2d 6d 75  6c 74 69 6e 6f 64 65 2d  |duler-multinode-|
		00000030  30 33 32 34 30 30 12 00  1a 0b 6b 75 62 65 2d 73  |032400....kube-s|
		00000040  79 73 74 65 6d 22 00 2a  24 63 66 62 63 62 66 33  |ystem".*$cfbcbf3|
		00000050  37 2d 36 35 63 65 2d 34  62 32 31 2d 39 66 32 36  |7-65ce-4b21-9f26|
		00000060  2d 31 38 64 61 66 63 36  65 34 34 38 30 32 04 31  |-18dafc6e44802.1|
		00000070  38 30 37 38 00 42 08 08  88 d4 a7 bd 06 10 00 5a  |8078.B.........Z|
		00000080  1b 0a 09 63 6f 6d 70 6f  6e 65 6e 74 12 0e 6b 75  |...component..ku|
		00000090  62 65 2d 73 63 68 65 64  75 6c 65 72 5a 15 0a 04  |be-schedulerZ...|
		000000a0  74 69 65 72 12 0d 63 6f  6e 74 72 6f 6c 2d 70 6c  |tier..control-pl|
		000000b0  61 6e 65 62 3d 0a 19 6b  75 62 65 72 6e 65 74 65  |aneb=..kubernete|
		000000c0  73 2e 69 6f 2f 63 6f 6e  66 69 67 2e 68 61 73 68  |s.io/config.has [truncated 22676 chars]
	 >
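
Every request above sends Accept: application/vnd.kubernetes.protobuf,application/json, and each hex-dumped response body begins with the bytes 6b 38 73 00 ("k8s\0"), the magic prefix of the Kubernetes protobuf envelope. A sketch of opting into that wire encoding with client-go, under the same illustrative-kubeconfig assumption:

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	// Ask for protobuf first, falling back to JSON -- the same
	// Accept header the log shows on every request. Responses then
	// arrive in the "k8s\x00" envelope the hex dumps begin with.
	config.AcceptContentTypes = "application/vnd.kubernetes.protobuf,application/json"
	config.ContentType = "application/vnd.kubernetes.protobuf"
	fmt.Println(config.AcceptContentTypes)
}
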
	I0210 12:22:03.029690    5644 type.go:168] "Request Body" body=""
	I0210 12:22:03.220667    5644 request.go:661] Waited for 190.9086ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:03.220667    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:03.220667    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:03.221275    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:03.221275    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:03.225458    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:22:03.225458    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:03.225458    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:03 GMT
	I0210 12:22:03.225458    5644 round_trippers.go:587]     Audit-Id: 81ebe6dc-aded-491a-af93-9c0264613f58
	I0210 12:22:03.225458    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:03.225458    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:03.225458    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:03.225458    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:03.225832    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 9b 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  34 38 38 00 42 08 08 86  |1b262.18488.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23294 chars]
	 >
	I0210 12:22:03.226059    5644 pod_ready.go:98] node "multinode-032400" hosting pod "kube-scheduler-multinode-032400" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-032400" has status "Ready":"False"
	I0210 12:22:03.226089    5644 pod_ready.go:82] duration metric: took 398.1983ms for pod "kube-scheduler-multinode-032400" in "kube-system" namespace to be "Ready" ...
	E0210 12:22:03.226089    5644 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-032400" hosting pod "kube-scheduler-multinode-032400" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-032400" has status "Ready":"False"
	I0210 12:22:03.226089    5644 pod_ready.go:39] duration metric: took 1.5979721s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
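
The per-pod waits summarized here all follow one pattern: poll the pod object until its Ready condition reports True (minikube additionally skips pods whose hosting node is not Ready, as the WaitExtra lines show). A minimal sketch of the polling half with client-go, assuming a reachable kubeconfig; isPodReady is an illustrative helper, not minikube's own:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Poll every 400ms for up to 4 minutes, mirroring the cadence
	// and the "waiting up to 4m0s" budget seen in the log.
	err = wait.PollUntilContextTimeout(context.Background(),
		400*time.Millisecond, 4*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := clientset.CoreV1().Pods("kube-system").Get(
				ctx, "kube-proxy-xltxj", metav1.GetOptions{})
			if err != nil {
				return false, err
			}
			return isPodReady(pod), nil
		})
	fmt.Println("ready:", err == nil)
}
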
	I0210 12:22:03.226089    5644 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0210 12:22:03.242649    5644 command_runner.go:130] > -16
	I0210 12:22:03.243171    5644 ops.go:34] apiserver oom_adj: -16
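
The oom_adj probe above confirms the apiserver is shielded from the kernel OOM killer: at -16, other processes are sacrificed first under memory pressure. A Go equivalent of the log's `cat /proc/$(pgrep kube-apiserver)/oom_adj`, assuming it runs on the node itself with procfs and pgrep available:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Find the apiserver PID, then read its legacy OOM adjustment.
	out, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		panic(err)
	}
	pids := strings.Fields(string(out))
	if len(pids) == 0 {
		panic("kube-apiserver not running")
	}
	data, err := os.ReadFile("/proc/" + pids[0] + "/oom_adj")
	if err != nil {
		panic(err)
	}
	fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(data)))
}
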
	I0210 12:22:03.243171    5644 kubeadm.go:597] duration metric: took 12.5847251s to restartPrimaryControlPlane
	I0210 12:22:03.243171    5644 kubeadm.go:394] duration metric: took 12.6481931s to StartCluster
	I0210 12:22:03.243171    5644 settings.go:142] acquiring lock: {Name:mk66ab2e0bae08b477c4ed9caa26e688e6ce3248 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 12:22:03.243426    5644 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0210 12:22:03.245169    5644 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\kubeconfig: {Name:mkb19224ea40e2aed3ce8c31a956f5aee129caa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 12:22:03.246385    5644 start.go:235] Will wait 6m0s for node &{Name: IP:172.29.129.181 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0210 12:22:03.246385    5644 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0210 12:22:03.251667    5644 out.go:177] * Verifying Kubernetes components...
	I0210 12:22:03.246916    5644 config.go:182] Loaded profile config "multinode-032400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0210 12:22:03.253672    5644 out.go:177] * Enabled addons: 
	I0210 12:22:03.259671    5644 addons.go:514] duration metric: took 13.3865ms for enable addons: enabled=[]
	I0210 12:22:03.264322    5644 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 12:22:03.510937    5644 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0210 12:22:03.536618    5644 node_ready.go:35] waiting up to 6m0s for node "multinode-032400" to be "Ready" ...
	I0210 12:22:03.536767    5644 type.go:168] "Request Body" body=""
	I0210 12:22:03.536903    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:03.536903    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:03.536903    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:03.536903    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:03.540238    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:22:03.540238    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:03.540238    5644 round_trippers.go:587]     Audit-Id: fa6ded42-37b6-42de-ae27-4373706be825
	I0210 12:22:03.540238    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:03.540238    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:03.540238    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:03.540932    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:03.540932    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:03 GMT
	I0210 12:22:03.541202    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 9b 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  34 38 38 00 42 08 08 86  |1b262.18488.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23294 chars]
	 >
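
The half-second polls that follow all fetch this same Node object and inspect its Ready condition, which stays "False" until the freshly restarted kubelet posts a healthy node status. The check itself reduces to a few lines of client-go (kubeconfig path again illustrative):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	node, err := clientset.CoreV1().Nodes().Get(
		context.TODO(), "multinode-032400", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// The log's `has status "Ready":"False"` comes from this condition;
	// it flips to True once kubelet reports the node healthy.
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			fmt.Printf("node %s Ready=%s\n", node.Name, c.Status)
		}
	}
}
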
	I0210 12:22:04.037321    5644 type.go:168] "Request Body" body=""
	I0210 12:22:04.037321    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:04.037321    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:04.037321    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:04.037321    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:04.041742    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:22:04.041816    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:04.041816    5644 round_trippers.go:587]     Audit-Id: abd2b64a-44fa-409b-a3c8-f28c2104a97d
	I0210 12:22:04.041816    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:04.041816    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:04.041816    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:04.041816    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:04.041894    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:04 GMT
	I0210 12:22:04.042592    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 9b 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  34 38 38 00 42 08 08 86  |1b262.18488.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23294 chars]
	 >
	I0210 12:22:04.537270    5644 type.go:168] "Request Body" body=""
	I0210 12:22:04.537270    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:04.537722    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:04.537722    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:04.537722    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:04.542227    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:22:04.542296    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:04.542296    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:04.542296    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:04.542296    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:04.542296    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:04.542296    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:04 GMT
	I0210 12:22:04.542296    5644 round_trippers.go:587]     Audit-Id: 51fbcb0c-18b6-4737-bb65-534b8a59ee1b
	I0210 12:22:04.542763    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 9b 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  34 38 38 00 42 08 08 86  |1b262.18488.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23294 chars]
	 >
	I0210 12:22:05.037720    5644 type.go:168] "Request Body" body=""
	I0210 12:22:05.037883    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:05.037883    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:05.037883    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:05.037883    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:05.045193    5644 round_trippers.go:581] Response Status: 200 OK in 7 milliseconds
	I0210 12:22:05.045193    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:05.045193    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:05.045193    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:05.045193    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:05.045193    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:05.045193    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:05 GMT
	I0210 12:22:05.045193    5644 round_trippers.go:587]     Audit-Id: 4fc3f950-2374-4f26-8b21-b2272364078f
	I0210 12:22:05.045464    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 9b 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  34 38 38 00 42 08 08 86  |1b262.18488.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23294 chars]
	 >
	I0210 12:22:05.537462    5644 type.go:168] "Request Body" body=""
	I0210 12:22:05.537462    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:05.537462    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:05.537462    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:05.537462    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:05.541169    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:22:05.541169    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:05.541169    5644 round_trippers.go:587]     Audit-Id: 6b39847c-fb75-45ee-a86e-0fa0b9716c77
	I0210 12:22:05.541169    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:05.541169    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:05.541169    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:05.541169    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:05.541169    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:05 GMT
	I0210 12:22:05.541606    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 9b 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  34 38 38 00 42 08 08 86  |1b262.18488.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23294 chars]
	 >
	I0210 12:22:05.541815    5644 node_ready.go:53] node "multinode-032400" has status "Ready":"False"
	I0210 12:22:06.037529    5644 type.go:168] "Request Body" body=""
	I0210 12:22:06.037529    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:06.037529    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:06.037529    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:06.037529    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:06.041672    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:22:06.041672    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:06.041672    5644 round_trippers.go:587]     Audit-Id: bf5b2110-06ee-4550-9bc9-134874981b51
	I0210 12:22:06.041672    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:06.041672    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:06.041672    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:06.041672    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:06.041904    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:06 GMT
	I0210 12:22:06.042173    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 9b 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  34 38 38 00 42 08 08 86  |1b262.18488.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23294 chars]
	 >
	I0210 12:22:06.537134    5644 type.go:168] "Request Body" body=""
	I0210 12:22:06.537643    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:06.537643    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:06.537643    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:06.537643    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:06.545049    5644 round_trippers.go:581] Response Status: 200 OK in 7 milliseconds
	I0210 12:22:06.545254    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:06.545254    5644 round_trippers.go:587]     Audit-Id: be88c43b-2bfb-4a92-b0e4-774c4b8ed8c2
	I0210 12:22:06.545310    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:06.545310    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:06.545310    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:06.545310    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:06.545310    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:06 GMT
	I0210 12:22:06.545716    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 9b 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  34 38 38 00 42 08 08 86  |1b262.18488.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23294 chars]
	 >
	I0210 12:22:07.037385    5644 type.go:168] "Request Body" body=""
	I0210 12:22:07.037594    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:07.037594    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:07.037594    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:07.037594    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:07.040885    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:22:07.041302    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:07.041302    5644 round_trippers.go:587]     Audit-Id: 64e4567d-030a-4ff4-b8bc-0886ef4c407a
	I0210 12:22:07.041302    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:07.041302    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:07.041302    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:07.041302    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:07.041302    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:07 GMT
	I0210 12:22:07.041629    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 9b 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  34 38 38 00 42 08 08 86  |1b262.18488.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23294 chars]
	 >
	I0210 12:22:07.537694    5644 type.go:168] "Request Body" body=""
	I0210 12:22:07.538179    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:07.538259    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:07.538259    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:07.538259    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:07.546279    5644 round_trippers.go:581] Response Status: 200 OK in 7 milliseconds
	I0210 12:22:07.546279    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:07.546279    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:07.546279    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:07.546279    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:07.546279    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:07 GMT
	I0210 12:22:07.546279    5644 round_trippers.go:587]     Audit-Id: a28bc9dd-395b-4895-b3c6-dbd8a334a7c7
	I0210 12:22:07.546279    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:07.546279    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 9b 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  34 38 38 00 42 08 08 86  |1b262.18488.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23294 chars]
	 >
	I0210 12:22:07.546279    5644 node_ready.go:53] node "multinode-032400" has status "Ready":"False"
	I0210 12:22:08.037341    5644 type.go:168] "Request Body" body=""
	I0210 12:22:08.037909    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:08.037909    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:08.037909    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:08.037909    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:08.041690    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:22:08.041690    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:08.041690    5644 round_trippers.go:587]     Audit-Id: 77899642-a8d7-4018-b102-91907afd4444
	I0210 12:22:08.041763    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:08.041763    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:08.041763    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:08.041763    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:08.041763    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:08 GMT
	I0210 12:22:08.042601    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 9b 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  34 38 38 00 42 08 08 86  |1b262.18488.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23294 chars]
	 >
	I0210 12:22:08.537088    5644 type.go:168] "Request Body" body=""
	I0210 12:22:08.537088    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:08.537088    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:08.537427    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:08.537427    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:08.541504    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:22:08.541504    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:08.541504    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:08.541504    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:08.541504    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:08.541504    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:08.541504    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:08 GMT
	I0210 12:22:08.541504    5644 round_trippers.go:587]     Audit-Id: d299b160-cef4-4c83-9753-1eeb230ab6de
	I0210 12:22:08.541504    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 9b 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  34 38 38 00 42 08 08 86  |1b262.18488.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23294 chars]
	 >
	I0210 12:22:09.037125    5644 type.go:168] "Request Body" body=""
	I0210 12:22:09.037125    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:09.037125    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:09.037125    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:09.037125    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:09.043980    5644 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0210 12:22:09.043980    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:09.043980    5644 round_trippers.go:587]     Audit-Id: 9341f127-e561-4b1d-99d8-e651eee068e5
	I0210 12:22:09.043980    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:09.043980    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:09.043980    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:09.043980    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:09.043980    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:09 GMT
	I0210 12:22:09.044537    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 9b 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  34 38 38 00 42 08 08 86  |1b262.18488.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23294 chars]
	 >
	I0210 12:22:09.537949    5644 type.go:168] "Request Body" body=""
	I0210 12:22:09.538067    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:09.538067    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:09.538067    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:09.538067    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:09.541508    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:22:09.541508    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:09.541508    5644 round_trippers.go:587]     Audit-Id: 5d30e869-44a7-4d4b-8720-2aa4e6554a09
	I0210 12:22:09.541508    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:09.542088    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:09.542088    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:09.542088    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:09.542088    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:09 GMT
	I0210 12:22:09.542353    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 9b 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  34 38 38 00 42 08 08 86  |1b262.18488.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23294 chars]
	 >
	I0210 12:22:10.037165    5644 type.go:168] "Request Body" body=""
	I0210 12:22:10.037165    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:10.037165    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:10.037165    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:10.037165    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:10.041588    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:22:10.041588    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:10.041588    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:10.041588    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:10.041711    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:10 GMT
	I0210 12:22:10.041711    5644 round_trippers.go:587]     Audit-Id: e3cf349f-f9e8-4a72-846d-095f4465c548
	I0210 12:22:10.041711    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:10.041711    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:10.042059    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 9b 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  34 38 38 00 42 08 08 86  |1b262.18488.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23294 chars]
	 >
	I0210 12:22:10.042378    5644 node_ready.go:53] node "multinode-032400" has status "Ready":"False"
	I0210 12:22:10.538442    5644 type.go:168] "Request Body" body=""
	I0210 12:22:10.539157    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:10.539157    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:10.539157    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:10.539157    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:10.543238    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:22:10.543238    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:10.543238    5644 round_trippers.go:587]     Audit-Id: 2ac4f8f2-9074-439b-8c13-d954dbd918f2
	I0210 12:22:10.543238    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:10.543238    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:10.543238    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:10.543238    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:10.543238    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:10 GMT
	I0210 12:22:10.543566    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 9b 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  34 38 38 00 42 08 08 86  |1b262.18488.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23294 chars]
	 >
	I0210 12:22:11.037528    5644 type.go:168] "Request Body" body=""
	I0210 12:22:11.037528    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:11.037528    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:11.037528    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:11.037528    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:11.041768    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:22:11.042059    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:11.042059    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:11.042059    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:11.042059    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:11.042059    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:11 GMT
	I0210 12:22:11.042059    5644 round_trippers.go:587]     Audit-Id: e192e15b-0849-427f-abd9-4a2c39c4cc42
	I0210 12:22:11.042059    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:11.042606    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 9b 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  34 38 38 00 42 08 08 86  |1b262.18488.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23294 chars]
	 >
	I0210 12:22:11.536828    5644 type.go:168] "Request Body" body=""
	I0210 12:22:11.537436    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:11.537436    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:11.537531    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:11.537580    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:11.541245    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:22:11.541245    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:11.541245    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:11.541245    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:11.541245    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:11.541245    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:11.541245    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:11 GMT
	I0210 12:22:11.541245    5644 round_trippers.go:587]     Audit-Id: e54b2da1-4c31-4151-80be-1cf0b5d0a915
	I0210 12:22:11.541960    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 9b 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  34 38 38 00 42 08 08 86  |1b262.18488.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23294 chars]
	 >
	I0210 12:22:12.037307    5644 type.go:168] "Request Body" body=""
	I0210 12:22:12.037307    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:12.037307    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:12.037307    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:12.037307    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:12.041927    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:22:12.041927    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:12.041927    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:12.041927    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:12.041927    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:12.042035    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:12 GMT
	I0210 12:22:12.042035    5644 round_trippers.go:587]     Audit-Id: 84e5b282-5da0-4f1b-a4f7-e6df27b7e40c
	I0210 12:22:12.042035    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:12.042387    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
	I0210 12:22:12.042600    5644 node_ready.go:53] node "multinode-032400" has status "Ready":"False"
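
Re-GETting the node every 500ms, as this loop does, is simple and robust; an alternative (not what minikube does here) is a watch, which delivers each status change as an event instead of requiring repeated polls. A sketch under the same kubeconfig assumption:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Watch only this node; each status update arrives as an event.
	w, err := clientset.CoreV1().Nodes().Watch(context.Background(),
		metav1.ListOptions{FieldSelector: "metadata.name=multinode-032400"})
	if err != nil {
		panic(err)
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		node, ok := ev.Object.(*corev1.Node)
		if !ok {
			continue
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
				fmt.Println("node is Ready")
				return
			}
		}
	}
}
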
	I0210 12:22:12.537691    5644 type.go:168] "Request Body" body=""
	I0210 12:22:12.537691    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:12.537691    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:12.537691    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:12.537691    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:12.541522    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:22:12.541522    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:12.541522    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:12.541522    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:12.541522    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:12.541522    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:12 GMT
	I0210 12:22:12.541522    5644 round_trippers.go:587]     Audit-Id: b857c99c-04e9-4801-bce6-27bfc535ac84
	I0210 12:22:12.541623    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:12.541970    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
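
	[Each "Response Body" hexdump in this log is the Kubernetes protobuf wire format: a 4-byte "k8s\0" magic prefix (the 6b 38 73 00 at offset 0) followed by a runtime.Unknown envelope wrapping the v1 Node message, which is why the ASCII column reads "k8s.....v1..Node". Assuming you had the raw, untruncated bytes saved to a file, a short Go sketch to decode such a payload with apimachinery's protobuf serializer might look like this (the file name is hypothetical):

	package main

	import (
		"fmt"
		"os"

		corev1 "k8s.io/api/core/v1"
		"k8s.io/apimachinery/pkg/runtime/serializer/protobuf"
		clientsetscheme "k8s.io/client-go/kubernetes/scheme"
	)

	func main() {
		// raw must hold the complete body, starting with the k8s\x00 prefix.
		raw, err := os.ReadFile("node.pb")
		if err != nil {
			panic(err)
		}

		// The serializer understands the k8s\x00-prefixed runtime.Unknown
		// framing; the envelope's TypeMeta (v1, Node) selects the target type.
		s := protobuf.NewSerializer(clientsetscheme.Scheme, clientsetscheme.Scheme)
		obj, gvk, err := s.Decode(raw, nil, nil)
		if err != nil {
			panic(err)
		}
		node := obj.(*corev1.Node)
		fmt.Println(gvk, node.Name, node.Status.Conditions)
	}

	The log resumes below.]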
	I0210 12:22:13.037028    5644 type.go:168] "Request Body" body=""
	I0210 12:22:13.037028    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:13.037028    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:13.037028    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:13.037028    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:13.040595    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:22:13.040595    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:13.040595    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:13.040595    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:13.040595    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:13.040595    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:13.040595    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:13 GMT
	I0210 12:22:13.040595    5644 round_trippers.go:587]     Audit-Id: d2adab42-9e63-4d5a-ae30-64f73b3d8ae9
	I0210 12:22:13.040815    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
	I0210 12:22:13.537795    5644 type.go:168] "Request Body" body=""
	I0210 12:22:13.537795    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:13.537795    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:13.537795    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:13.537795    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:13.542094    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:22:13.542185    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:13.542221    5644 round_trippers.go:587]     Audit-Id: c3b97da1-db73-4f77-bb86-4bdad48f1504
	I0210 12:22:13.542221    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:13.542246    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:13.542246    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:13.542268    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:13.542268    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:13 GMT
	I0210 12:22:13.542415    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
	I0210 12:22:14.037395    5644 type.go:168] "Request Body" body=""
	I0210 12:22:14.037395    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:14.037395    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:14.037395    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:14.037395    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:14.042533    5644 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 12:22:14.042533    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:14.042533    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:14.042533    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:14.042533    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:14.042533    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:14 GMT
	I0210 12:22:14.042643    5644 round_trippers.go:587]     Audit-Id: 6d6d4848-78ca-4808-a6e0-933668f77058
	I0210 12:22:14.042643    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:14.043257    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
	I0210 12:22:14.043413    5644 node_ready.go:53] node "multinode-032400" has status "Ready":"False"
	I0210 12:22:14.538012    5644 type.go:168] "Request Body" body=""
	I0210 12:22:14.538163    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:14.538163    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:14.538239    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:14.538239    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:14.542340    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:22:14.542340    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:14.542340    5644 round_trippers.go:587]     Audit-Id: 7c200f36-8738-4dd9-8c5d-25f5c1a95819
	I0210 12:22:14.542433    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:14.542433    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:14.542433    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:14.542433    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:14.542433    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:14 GMT
	I0210 12:22:14.542758    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
	I0210 12:22:15.038288    5644 type.go:168] "Request Body" body=""
	I0210 12:22:15.038438    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:15.038438    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:15.038438    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:15.038438    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:15.042063    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:22:15.042164    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:15.042164    5644 round_trippers.go:587]     Audit-Id: 2750ee7f-c29d-4daf-b6eb-31ed6d57d32b
	I0210 12:22:15.042164    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:15.042231    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:15.042231    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:15.042231    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:15.042231    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:15 GMT
	I0210 12:22:15.042505    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
	I0210 12:22:15.537427    5644 type.go:168] "Request Body" body=""
	I0210 12:22:15.537427    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:15.537427    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:15.537427    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:15.537427    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:15.541771    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:22:15.541771    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:15.541771    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:15.541771    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:15.541771    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:15.541771    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:15 GMT
	I0210 12:22:15.541771    5644 round_trippers.go:587]     Audit-Id: 92ca9d5c-3a1a-49c4-8fd7-a540874a2a53
	I0210 12:22:15.541771    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:15.542188    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
	I0210 12:22:16.037861    5644 type.go:168] "Request Body" body=""
	I0210 12:22:16.037861    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:16.037861    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:16.037861    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:16.037861    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:16.042323    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:22:16.042323    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:16.042323    5644 round_trippers.go:587]     Audit-Id: 818e3d1b-fbcd-4f8f-ba31-c78b37fa4bde
	I0210 12:22:16.042323    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:16.042323    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:16.042323    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:16.042323    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:16.042323    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:16 GMT
	I0210 12:22:16.042852    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
	I0210 12:22:16.537060    5644 type.go:168] "Request Body" body=""
	I0210 12:22:16.537060    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:16.537060    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:16.537060    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:16.537060    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:16.541283    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:22:16.541391    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:16.541391    5644 round_trippers.go:587]     Audit-Id: f8d5376f-326e-41af-addf-93a857cc2b02
	I0210 12:22:16.541391    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:16.541391    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:16.541391    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:16.541391    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:16.541391    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:16 GMT
	I0210 12:22:16.541711    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
	I0210 12:22:16.541880    5644 node_ready.go:53] node "multinode-032400" has status "Ready":"False"
	I0210 12:22:17.037349    5644 type.go:168] "Request Body" body=""
	I0210 12:22:17.037349    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:17.037349    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:17.037349    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:17.037349    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:17.041240    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:22:17.041240    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:17.041298    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:17.041298    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:17 GMT
	I0210 12:22:17.041298    5644 round_trippers.go:587]     Audit-Id: aacc6765-a652-46c9-b844-a154ec168641
	I0210 12:22:17.041298    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:17.041298    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:17.041298    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:17.041581    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
	I0210 12:22:17.537350    5644 type.go:168] "Request Body" body=""
	I0210 12:22:17.537350    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:17.537350    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:17.537350    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:17.537350    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:17.545459    5644 round_trippers.go:581] Response Status: 200 OK in 8 milliseconds
	I0210 12:22:17.545514    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:17.545646    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:17.545710    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:17.545710    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:17 GMT
	I0210 12:22:17.545710    5644 round_trippers.go:587]     Audit-Id: 8529d8ad-c44c-4dfc-ae39-926238206648
	I0210 12:22:17.545710    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:17.545710    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:17.545971    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
	I0210 12:22:18.037400    5644 type.go:168] "Request Body" body=""
	I0210 12:22:18.037400    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:18.037400    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:18.037400    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:18.037400    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:18.041455    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:22:18.041455    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:18.041528    5644 round_trippers.go:587]     Audit-Id: 964fa4d0-8da8-4139-8dec-f0e683b27fa6
	I0210 12:22:18.041528    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:18.041528    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:18.041528    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:18.041528    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:18.041528    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:18 GMT
	I0210 12:22:18.041931    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
	I0210 12:22:18.537832    5644 type.go:168] "Request Body" body=""
	I0210 12:22:18.537956    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:18.538024    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:18.538024    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:18.538024    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:18.541347    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:22:18.541821    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:18.541821    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:18.541821    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:18.541821    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:18.541881    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:18.541881    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:18 GMT
	I0210 12:22:18.541881    5644 round_trippers.go:587]     Audit-Id: c1db960f-b93c-4927-8c65-367d732effde
	I0210 12:22:18.542088    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
	I0210 12:22:18.542088    5644 node_ready.go:53] node "multinode-032400" has status "Ready":"False"
	I0210 12:22:19.037607    5644 type.go:168] "Request Body" body=""
	I0210 12:22:19.037607    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:19.037607    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:19.037607    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:19.037607    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:19.042265    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:22:19.042343    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:19.042343    5644 round_trippers.go:587]     Audit-Id: 22d1da5b-b131-486a-a44b-1503026eeeea
	I0210 12:22:19.042343    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:19.042343    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:19.042343    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:19.042343    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:19.042531    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:19 GMT
	I0210 12:22:19.043634    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
	I0210 12:22:19.536964    5644 type.go:168] "Request Body" body=""
	I0210 12:22:19.536964    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:19.536964    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:19.536964    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:19.536964    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:19.542503    5644 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 12:22:19.542503    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:19.542503    5644 round_trippers.go:587]     Audit-Id: 6d7fc6de-9144-4d89-9585-18698077d2be
	I0210 12:22:19.542503    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:19.542503    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:19.542503    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:19.542503    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:19.542503    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:19 GMT
	I0210 12:22:19.542503    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
	I0210 12:22:20.037821    5644 type.go:168] "Request Body" body=""
	I0210 12:22:20.037821    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:20.037821    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:20.037821    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:20.037821    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:20.042114    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:22:20.042114    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:20.042466    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:20 GMT
	I0210 12:22:20.042466    5644 round_trippers.go:587]     Audit-Id: 756ac361-dd50-436f-8ef8-da2f281dfaff
	I0210 12:22:20.042466    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:20.042466    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:20.042466    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:20.042466    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:20.043161    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
	I0210 12:22:20.537618    5644 type.go:168] "Request Body" body=""
	I0210 12:22:20.537618    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:20.537618    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:20.537618    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:20.537618    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:20.541721    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:22:20.541781    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:20.541781    5644 round_trippers.go:587]     Audit-Id: 90aa441a-726e-43c6-b49f-d4c2b93778b5
	I0210 12:22:20.541836    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:20.541836    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:20.541836    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:20.541836    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:20.541836    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:20 GMT
	I0210 12:22:20.542164    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
	I0210 12:22:20.542378    5644 node_ready.go:53] node "multinode-032400" has status "Ready":"False"
	I0210 12:22:21.037240    5644 type.go:168] "Request Body" body=""
	I0210 12:22:21.037240    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:21.037240    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:21.037240    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:21.037240    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:21.047865    5644 round_trippers.go:581] Response Status: 200 OK in 10 milliseconds
	I0210 12:22:21.047865    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:21.047970    5644 round_trippers.go:587]     Audit-Id: abb55c04-108e-4c4c-b34b-4d93f07def94
	I0210 12:22:21.047970    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:21.047970    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:21.047970    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:21.047970    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:21.047970    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:21 GMT
	I0210 12:22:21.048278    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
	I0210 12:22:21.537900    5644 type.go:168] "Request Body" body=""
	I0210 12:22:21.537900    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:21.537900    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:21.537900    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:21.538345    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:21.542100    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:22:21.542100    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:21.542100    5644 round_trippers.go:587]     Audit-Id: 86da1d43-daf3-4847-91ba-950425c84756
	I0210 12:22:21.542100    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:21.542100    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:21.542100    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:21.542100    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:21.542100    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:21 GMT
	I0210 12:22:21.542100    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
	I0210 12:22:22.037061    5644 type.go:168] "Request Body" body=""
	I0210 12:22:22.037061    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:22.037061    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:22.037061    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:22.037061    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:22.041707    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:22:22.041707    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:22.041707    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:22 GMT
	I0210 12:22:22.041707    5644 round_trippers.go:587]     Audit-Id: 360010ee-a35e-4818-8297-785b723e51ca
	I0210 12:22:22.041707    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:22.041813    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:22.041813    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:22.041813    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:22.042064    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
	I0210 12:22:22.537000    5644 type.go:168] "Request Body" body=""
	I0210 12:22:22.537662    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:22.537662    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:22.537662    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:22.537662    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:22.540879    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:22:22.540879    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:22.540879    5644 round_trippers.go:587]     Audit-Id: 8e412ace-a303-4296-a06f-36240bc53dfe
	I0210 12:22:22.541004    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:22.541004    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:22.541004    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:22.541004    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:22.541004    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:22 GMT
	I0210 12:22:22.541307    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
	I0210 12:22:23.037750    5644 type.go:168] "Request Body" body=""
	I0210 12:22:23.038305    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:23.038305    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:23.038305    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:23.038305    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:23.043899    5644 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 12:22:23.043899    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:23.043899    5644 round_trippers.go:587]     Audit-Id: e84ea73a-a74a-44a0-bd4e-1e8137d1e313
	I0210 12:22:23.043899    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:23.043899    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:23.043899    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:23.043899    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:23.043899    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:23 GMT
	I0210 12:22:23.044518    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
	I0210 12:22:23.044712    5644 node_ready.go:53] node "multinode-032400" has status "Ready":"False"
	I0210 12:22:23.537694    5644 type.go:168] "Request Body" body=""
	I0210 12:22:23.537983    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:23.537983    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:23.538038    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:23.538038    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:23.544944    5644 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0210 12:22:23.544944    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:23.544944    5644 round_trippers.go:587]     Audit-Id: 5cbdbc7d-9d94-47e0-a66f-819cb19047f2
	I0210 12:22:23.544944    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:23.544944    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:23.545486    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:23.545486    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:23.545486    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:23 GMT
	I0210 12:22:23.545679    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
	I0210 12:22:24.037541    5644 type.go:168] "Request Body" body=""
	I0210 12:22:24.037541    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:24.037541    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:24.037541    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:24.037541    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:24.042357    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:22:24.042357    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:24.042473    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:24.042473    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:24 GMT
	I0210 12:22:24.042473    5644 round_trippers.go:587]     Audit-Id: ba0a73dc-0f6f-4a23-8433-fc0f4ae3075d
	I0210 12:22:24.042473    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:24.042473    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:24.042473    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:24.042968    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
	I0210 12:22:24.537630    5644 type.go:168] "Request Body" body=""
	I0210 12:22:24.537662    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:24.537662    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:24.537662    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:24.537662    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:24.540787    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:22:24.540787    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:24.540787    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:24.540787    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:24 GMT
	I0210 12:22:24.540787    5644 round_trippers.go:587]     Audit-Id: 6b7e975c-d22c-40d3-b5ef-ff7b55e695ec
	I0210 12:22:24.540787    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:24.540787    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:24.540787    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:24.540787    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
	I0210 12:22:25.037769    5644 type.go:168] "Request Body" body=""
	I0210 12:22:25.037769    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:25.037769    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:25.038260    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:25.038260    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:25.041435    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:22:25.041516    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:25.041516    5644 round_trippers.go:587]     Audit-Id: b07d609b-b54c-4c08-9d53-63932d8aef92
	I0210 12:22:25.041516    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:25.041516    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:25.041516    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:25.041516    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:25.041636    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:25 GMT
	I0210 12:22:25.041927    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
	I0210 12:22:25.538035    5644 type.go:168] "Request Body" body=""
	I0210 12:22:25.538227    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:25.538317    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:25.538317    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:25.538317    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:25.545947    5644 round_trippers.go:581] Response Status: 200 OK in 7 milliseconds
	I0210 12:22:25.545947    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:25.545947    5644 round_trippers.go:587]     Audit-Id: 9eac72a2-04d8-48e2-b969-62d1dfd99cc2
	I0210 12:22:25.545947    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:25.545947    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:25.545947    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:25.545947    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:25.545947    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:25 GMT
	I0210 12:22:25.546517    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
	I0210 12:22:25.546517    5644 node_ready.go:53] node "multinode-032400" has status "Ready":"False"
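	(Editor's aside: the response bodies in these dumps are binary because the client sends Accept: application/vnd.kubernetes.protobuf first, so the apiserver replies in Kubernetes' protobuf wire format rather than JSON. The leading bytes 6b 38 73 00 at offset 00000000 are the "k8s\x00" envelope magic, followed by an encoded wrapper naming apiVersion "v1" and kind "Node". A minimal, self-contained sketch of detecting that envelope — the helper name is ours, for illustration:

	package main

	import (
		"bytes"
		"fmt"
	)

	// isK8sProtobuf reports whether a response body carries the Kubernetes
	// protobuf envelope, whose 4-byte magic "k8s\x00" is exactly the
	// 6b 38 73 00 prefix visible at offset 00000000 of every dump above.
	func isK8sProtobuf(body []byte) bool {
		return bytes.HasPrefix(body, []byte("k8s\x00"))
	}

	func main() {
		// First 16 bytes of the logged body: "k8s\x00", then a wrapper
		// message identifying apiVersion "v1" and kind "Node".
		sample := []byte{
			0x6b, 0x38, 0x73, 0x00, 0x0a, 0x0a, 0x0a, 0x02,
			0x76, 0x31, 0x12, 0x04, 0x4e, 0x6f, 0x64, 0x65,
		}
		fmt.Println(isK8sProtobuf(sample)) // prints: true
	}

	End of aside.)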
	I0210 12:22:26.037380    5644 type.go:168] "Request Body" body=""
	I0210 12:22:26.037380    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:26.037380    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:26.037380    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:26.037380    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:26.041815    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:22:26.041815    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:26.041815    5644 round_trippers.go:587]     Audit-Id: 75af28c9-307b-4bbd-bd14-a495eb34b1c8
	I0210 12:22:26.041815    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:26.041815    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:26.041815    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:26.041815    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:26.041815    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:26 GMT
	I0210 12:22:26.041815    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
	I0210 12:22:26.537304    5644 type.go:168] "Request Body" body=""
	I0210 12:22:26.537304    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:26.537304    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:26.537304    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:26.537304    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:26.542133    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:22:26.542133    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:26.542244    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:26.542244    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:26 GMT
	I0210 12:22:26.542244    5644 round_trippers.go:587]     Audit-Id: 3cad3847-4738-44d0-ba54-390dcbf6b9f4
	I0210 12:22:26.542244    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:26.542244    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:26.542244    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:26.542591    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
	I0210 12:22:27.037083    5644 type.go:168] "Request Body" body=""
	I0210 12:22:27.037083    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:27.037083    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:27.037083    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:27.037083    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:27.040620    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:22:27.040733    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:27.040733    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:27.040733    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:27 GMT
	I0210 12:22:27.040733    5644 round_trippers.go:587]     Audit-Id: dcd8b849-3281-448a-a1ec-28b19418355e
	I0210 12:22:27.040733    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:27.040733    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:27.040815    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:27.041127    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
	I0210 12:22:27.537802    5644 type.go:168] "Request Body" body=""
	I0210 12:22:27.537802    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:27.537802    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:27.537802    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:27.537802    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:27.542165    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:22:27.542238    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:27.542238    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:27.542238    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:27 GMT
	I0210 12:22:27.542302    5644 round_trippers.go:587]     Audit-Id: 58239b8e-8664-4779-9935-d3a6afa98b92
	I0210 12:22:27.542302    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:27.542302    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:27.542302    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:27.542603    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
	I0210 12:22:28.037187    5644 type.go:168] "Request Body" body=""
	I0210 12:22:28.037187    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:28.037187    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:28.037187    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:28.037187    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:28.040452    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:22:28.040452    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:28.040452    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:28 GMT
	I0210 12:22:28.040452    5644 round_trippers.go:587]     Audit-Id: c1db4641-6a62-4ac0-9357-d739c314e423
	I0210 12:22:28.040452    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:28.040452    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:28.040452    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:28.040452    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:28.041572    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
	I0210 12:22:28.041750    5644 node_ready.go:53] node "multinode-032400" has status "Ready":"False"
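	(Editor's aside: the X-Kubernetes-Pf-Flowschema-Uid and X-Kubernetes-Pf-Prioritylevel-Uid headers repeated on every response identify the API Priority and Fairness FlowSchema and PriorityLevelConfiguration that classified these requests. If you wanted to map the UID back to a named FlowSchema, something like the hedged client-go sketch below would do it; the kubeconfig path is again an assumption, and it presumes a cluster recent enough to serve flowcontrol v1.

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}

		// UID copied from the X-Kubernetes-Pf-Flowschema-Uid header above.
		const wantUID = "69758a35-7a9b-40c5-be26-bfb3f8bce2df"

		fss, err := cs.FlowcontrolV1().FlowSchemas().List(context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, fs := range fss.Items {
			if string(fs.UID) == wantUID {
				fmt.Printf("requests were classified by FlowSchema %q (priority level %q)\n",
					fs.Name, fs.Spec.PriorityLevelConfiguration.Name)
			}
		}
	}

	End of aside.)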
	I0210 12:22:28.537828    5644 type.go:168] "Request Body" body=""
	I0210 12:22:28.537828    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:28.537828    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:28.537828    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:28.537828    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:28.542245    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:22:28.542245    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:28.542313    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:28.542328    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:28.542328    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:28 GMT
	I0210 12:22:28.542328    5644 round_trippers.go:587]     Audit-Id: 0b135244-23b0-4b9d-91b0-a745a9a40f1a
	I0210 12:22:28.542328    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:28.542328    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:28.542660    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
	I0210 12:22:29.037264    5644 type.go:168] "Request Body" body=""
	I0210 12:22:29.037264    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:29.037264    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:29.037264    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:29.037264    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:29.041314    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:22:29.041391    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:29.041391    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:29 GMT
	I0210 12:22:29.041391    5644 round_trippers.go:587]     Audit-Id: 63985d8b-91b2-436d-ab86-733129b37320
	I0210 12:22:29.041391    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:29.041391    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:29.041391    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:29.041391    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:29.041567    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
	I0210 12:22:29.537573    5644 type.go:168] "Request Body" body=""
	I0210 12:22:29.537573    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:29.537573    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:29.537573    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:29.537573    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:29.542181    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:22:29.542255    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:29.542255    5644 round_trippers.go:587]     Audit-Id: d549fcfe-aaea-46bb-8d97-fda04801b3ee
	I0210 12:22:29.542255    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:29.542255    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:29.542255    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:29.542255    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:29.542322    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:29 GMT
	I0210 12:22:29.542549    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
	I0210 12:22:30.038037    5644 type.go:168] "Request Body" body=""
	I0210 12:22:30.038037    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:30.038037    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:30.038037    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:30.038037    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:30.042566    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:22:30.042566    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:30.042566    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:30.042566    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:30.042679    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:30 GMT
	I0210 12:22:30.042679    5644 round_trippers.go:587]     Audit-Id: df4403b3-bdc9-4a4a-938d-11696385d0ee
	I0210 12:22:30.042679    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:30.042679    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:30.042757    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
	I0210 12:22:30.042757    5644 node_ready.go:53] node "multinode-032400" has status "Ready":"False"
	I0210 12:22:30.537957    5644 type.go:168] "Request Body" body=""
	I0210 12:22:30.537957    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:30.537957    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:30.537957    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:30.537957    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:30.542931    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:22:30.542931    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:30.542931    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:30.542931    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:30.542931    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:30 GMT
	I0210 12:22:30.542931    5644 round_trippers.go:587]     Audit-Id: 4d896a63-5499-4e08-abb7-abbc3acd6d3a
	I0210 12:22:30.542931    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:30.542931    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:30.543473    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
	I0210 12:22:31.037644    5644 type.go:168] "Request Body" body=""
	I0210 12:22:31.037644    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:31.037644    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:31.037644    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:31.037644    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:31.042318    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:22:31.042398    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:31.042398    5644 round_trippers.go:587]     Audit-Id: 1ea817b9-8a94-4955-96d0-1a40f8cf3613
	I0210 12:22:31.042398    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:31.042398    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:31.042479    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:31.042479    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:31.042479    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:31 GMT
	I0210 12:22:31.042639    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
	I0210 12:22:31.537798    5644 type.go:168] "Request Body" body=""
	I0210 12:22:31.537798    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:31.537798    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:31.537798    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:31.537798    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:31.541166    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:22:31.542091    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:31.542091    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:31.542091    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:31 GMT
	I0210 12:22:31.542091    5644 round_trippers.go:587]     Audit-Id: 185afa78-aaeb-4f78-8bba-c5388c0e3d2d
	I0210 12:22:31.542091    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:31.542091    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:31.542091    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:31.543006    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
	I0210 12:22:32.038140    5644 type.go:168] "Request Body" body=""
	I0210 12:22:32.038245    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:32.038353    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:32.038353    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:32.038353    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:32.041808    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:22:32.041808    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:32.042808    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:32.042808    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:32 GMT
	I0210 12:22:32.042854    5644 round_trippers.go:587]     Audit-Id: 30de632f-ee6c-4b0a-adb9-e93b632052bd
	I0210 12:22:32.042854    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:32.042901    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:32.042901    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:32.043464    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
	I0210 12:22:32.043680    5644 node_ready.go:53] node "multinode-032400" has status "Ready":"False"
	I0210 12:22:32.537667    5644 type.go:168] "Request Body" body=""
	I0210 12:22:32.537774    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:32.537774    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:32.537774    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:32.537774    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:32.543980    5644 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0210 12:22:32.543980    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:32.543980    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:32.543980    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:32.543980    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:32 GMT
	I0210 12:22:32.543980    5644 round_trippers.go:587]     Audit-Id: ef39631a-b175-4a86-869e-898b7d179789
	I0210 12:22:32.543980    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:32.543980    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:32.544692    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
	I0210 12:22:33.037489    5644 type.go:168] "Request Body" body=""
	I0210 12:22:33.037489    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:33.037489    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:33.037489    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:33.037489    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:33.045317    5644 round_trippers.go:581] Response Status: 200 OK in 7 milliseconds
	I0210 12:22:33.045866    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:33.045866    5644 round_trippers.go:587]     Audit-Id: 6d4fb724-c763-48a0-a607-d4de82ea5b42
	I0210 12:22:33.045866    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:33.045866    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:33.045866    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:33.045866    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:33.045866    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:33 GMT
	I0210 12:22:33.046284    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
	I0210 12:22:33.537949    5644 type.go:168] "Request Body" body=""
	I0210 12:22:33.538184    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:33.538276    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:33.538276    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:33.538276    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:33.541564    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:22:33.542389    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:33.542389    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:33.542389    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:33.542389    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:33.542389    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:33 GMT
	I0210 12:22:33.542389    5644 round_trippers.go:587]     Audit-Id: b259efe3-76c6-422a-8075-cfb674e6bb96
	I0210 12:22:33.542389    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:33.542624    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
	I0210 12:22:34.037548    5644 type.go:168] "Request Body" body=""
	I0210 12:22:34.037966    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:34.037966    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:34.038055    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:34.038156    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:34.042708    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:22:34.042708    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:34.042708    5644 round_trippers.go:587]     Audit-Id: 1b200805-74ed-4ac0-a833-d0608c52db40
	I0210 12:22:34.042708    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:34.042708    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:34.042708    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:34.042826    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:34.042826    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:34 GMT
	I0210 12:22:34.043047    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
	I0210 12:22:34.537536    5644 type.go:168] "Request Body" body=""
	I0210 12:22:34.537536    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:34.537536    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:34.537536    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:34.537536    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:34.541398    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:22:34.541607    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:34.541607    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:34.541607    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:34.541607    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:34 GMT
	I0210 12:22:34.541607    5644 round_trippers.go:587]     Audit-Id: a98ad919-61fc-4001-b125-da23b15c7c46
	I0210 12:22:34.541607    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:34.541607    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:34.541965    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
	I0210 12:22:34.542145    5644 node_ready.go:53] node "multinode-032400" has status "Ready":"False"
	I0210 12:22:35.038020    5644 type.go:168] "Request Body" body=""
	I0210 12:22:35.038020    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:35.038020    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:35.038020    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:35.038020    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:35.042248    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:22:35.042641    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:35.042641    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:35.042641    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:35.042641    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:35.042641    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:35.042641    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:35 GMT
	I0210 12:22:35.042641    5644 round_trippers.go:587]     Audit-Id: 7e5b8f17-86f0-49d7-a8e8-72a65436117a
	I0210 12:22:35.042961    5644 type.go:168] "Response Body" body=<
		(identical protobuf Node response body; hex dump omitted, see the first dump at 12:22:34 above)
	 >
	I0210 12:22:35.537446    5644 type.go:168] "Request Body" body=""
	I0210 12:22:35.538027    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:35.538098    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:35.538098    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:35.538156    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:35.541834    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:22:35.541940    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:35.541940    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:35 GMT
	I0210 12:22:35.541940    5644 round_trippers.go:587]     Audit-Id: b348d649-13f8-4b62-add7-942da6439f22
	I0210 12:22:35.541940    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:35.541940    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:35.542006    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:35.542006    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:35.542235    5644 type.go:168] "Response Body" body=<
		(identical protobuf Node response body; hex dump omitted, see the first dump at 12:22:34 above)
	 >
	I0210 12:22:36.037730    5644 type.go:168] "Request Body" body=""
	I0210 12:22:36.039533    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:36.039533    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:36.039533    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:36.039533    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:36.043130    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:22:36.043727    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:36.043727    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:36 GMT
	I0210 12:22:36.043727    5644 round_trippers.go:587]     Audit-Id: 2f94fc9e-81e1-413f-920d-e5f53402577d
	I0210 12:22:36.043727    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:36.043727    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:36.043727    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:36.043804    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:36.044095    5644 type.go:168] "Response Body" body=<
		(identical protobuf Node response body; hex dump omitted, see the first dump at 12:22:34 above)
	 >
	I0210 12:22:36.538286    5644 type.go:168] "Request Body" body=""
	I0210 12:22:36.538414    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:36.538414    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:36.538476    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:36.538476    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:36.542513    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:22:36.542513    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:36.542513    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:36.542513    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:36.542513    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:36.542513    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:36.542513    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:36 GMT
	I0210 12:22:36.542513    5644 round_trippers.go:587]     Audit-Id: 00b408ee-10d7-4a08-9728-c430f3099082
	I0210 12:22:36.543547    5644 type.go:168] "Response Body" body=<
		(identical protobuf Node response body; hex dump omitted, see the first dump at 12:22:34 above)
	 >
	I0210 12:22:36.543748    5644 node_ready.go:53] node "multinode-032400" has status "Ready":"False"
	I0210 12:22:37.038150    5644 type.go:168] "Request Body" body=""
	I0210 12:22:37.038264    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:37.038356    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:37.038356    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:37.038394    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:37.042142    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:22:37.042142    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:37.042142    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:37.042142    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:37.042142    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:37.042142    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:37 GMT
	I0210 12:22:37.042142    5644 round_trippers.go:587]     Audit-Id: 324c1936-6098-4f03-9848-abaa30412438
	I0210 12:22:37.042233    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:37.042501    5644 type.go:168] "Response Body" body=<
		(identical protobuf Node response body; hex dump omitted, see the first dump at 12:22:34 above)
	 >
	I0210 12:22:37.537176    5644 type.go:168] "Request Body" body=""
	I0210 12:22:37.537176    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:37.537176    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:37.537176    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:37.537176    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:37.541297    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:22:37.541370    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:37.541370    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:37.541370    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:37 GMT
	I0210 12:22:37.541370    5644 round_trippers.go:587]     Audit-Id: a07c47e4-dd19-468a-8e26-731a12389cad
	I0210 12:22:37.541370    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:37.541370    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:37.541447    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:37.541824    5644 type.go:168] "Response Body" body=<
		(identical protobuf Node response body; hex dump omitted, see the first dump at 12:22:34 above)
	 >
	I0210 12:22:38.037869    5644 type.go:168] "Request Body" body=""
	I0210 12:22:38.038024    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:38.038024    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:38.038024    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:38.038085    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:38.042275    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:22:38.042275    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:38.042275    5644 round_trippers.go:587]     Audit-Id: 08637112-ed03-49a0-97e3-9d82b06e1933
	I0210 12:22:38.042275    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:38.042275    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:38.042275    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:38.042275    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:38.042275    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:38 GMT
	I0210 12:22:38.042275    5644 type.go:168] "Response Body" body=<
		(identical protobuf Node response body; hex dump omitted, see the first dump at 12:22:34 above)
	 >
	I0210 12:22:38.538293    5644 type.go:168] "Request Body" body=""
	I0210 12:22:38.538493    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:38.538493    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:38.538493    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:38.538493    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:38.544826    5644 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0210 12:22:38.544826    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:38.544826    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:38.544826    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:38 GMT
	I0210 12:22:38.544826    5644 round_trippers.go:587]     Audit-Id: c76b9185-9297-4dbc-94fe-165ee98ed0ce
	I0210 12:22:38.544826    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:38.544826    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:38.544826    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:38.545791    5644 type.go:168] "Response Body" body=<
		(identical protobuf Node response body; hex dump omitted, see the first dump at 12:22:34 above)
	 >
	I0210 12:22:38.545791    5644 node_ready.go:53] node "multinode-032400" has status "Ready":"False"
	I0210 12:22:39.038315    5644 type.go:168] "Request Body" body=""
	I0210 12:22:39.038408    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:39.038408    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:39.038408    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:39.038408    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:39.041816    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:22:39.042485    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:39.042485    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:39.042485    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:39.042485    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:39.042485    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:39 GMT
	I0210 12:22:39.042485    5644 round_trippers.go:587]     Audit-Id: f8e4d214-bf79-4dff-8b3a-d5b03952e390
	I0210 12:22:39.042485    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:39.042823    5644 type.go:168] "Response Body" body=<
		(identical protobuf Node response body; hex dump omitted, see the first dump at 12:22:34 above)
	 >
	I0210 12:22:39.537230    5644 type.go:168] "Request Body" body=""
	I0210 12:22:39.537230    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:39.537230    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:39.537230    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:39.537230    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:39.541650    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:22:39.541745    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:39.541745    5644 round_trippers.go:587]     Audit-Id: 8df6bf4f-e049-4efb-b620-5818af6075ab
	I0210 12:22:39.541745    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:39.541745    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:39.541745    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:39.541809    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:39.541809    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:39 GMT
	I0210 12:22:39.541809    5644 type.go:168] "Response Body" body=<
		(identical protobuf Node response body; hex dump omitted, see the first dump at 12:22:34 above)
	 >
	I0210 12:22:40.038037    5644 type.go:168] "Request Body" body=""
	I0210 12:22:40.038037    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:40.038037    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:40.038037    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:40.038037    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:40.042458    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:22:40.042458    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:40.042458    5644 round_trippers.go:587]     Audit-Id: e5b2aa00-1559-4026-bdc6-4d8f4793a8de
	I0210 12:22:40.042458    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:40.042539    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:40.042539    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:40.042539    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:40.042539    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:40 GMT
	I0210 12:22:40.042762    5644 type.go:168] "Response Body" body=<
		(identical protobuf Node response body; hex dump omitted, see the first dump at 12:22:34 above)
	 >
	I0210 12:22:40.537553    5644 type.go:168] "Request Body" body=""
	I0210 12:22:40.537553    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:40.537553    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:40.537553    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:40.537553    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:40.541762    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:22:40.541762    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:40.541762    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:40 GMT
	I0210 12:22:40.541762    5644 round_trippers.go:587]     Audit-Id: 9256173d-8092-49f4-8be6-389fce44fb1f
	I0210 12:22:40.541762    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:40.541762    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:40.541762    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:40.541762    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:40.542152    5644 type.go:168] "Response Body" body=<
		(identical protobuf Node response body; hex dump omitted, see the first dump at 12:22:34 above)
	 >
	I0210 12:22:41.038122    5644 type.go:168] "Request Body" body=""
	I0210 12:22:41.038122    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:41.038122    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:41.038122    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:41.038122    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:41.045361    5644 round_trippers.go:581] Response Status: 200 OK in 7 milliseconds
	I0210 12:22:41.045361    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:41.045361    5644 round_trippers.go:587]     Audit-Id: 50b6f874-b318-4012-86ba-1fa578d6b6c2
	I0210 12:22:41.045361    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:41.045361    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:41.045361    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:41.045361    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:41.045361    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:41 GMT
	I0210 12:22:41.045909    5644 type.go:168] "Response Body" body=<
		(identical protobuf Node response body; hex dump omitted, see the first dump at 12:22:34 above)
	 >
	I0210 12:22:41.046106    5644 node_ready.go:53] node "multinode-032400" has status "Ready":"False"
	I0210 12:22:41.537546    5644 type.go:168] "Request Body" body=""
	I0210 12:22:41.538013    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:41.538090    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:41.538090    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:41.538090    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:41.543564    5644 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 12:22:41.543564    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:41.543564    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:41 GMT
	I0210 12:22:41.543564    5644 round_trippers.go:587]     Audit-Id: 74f8823b-1aed-42e8-aa17-8052e866a7e0
	I0210 12:22:41.543564    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:41.543564    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:41.543564    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:41.543564    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:41.544232    5644 type.go:168] "Response Body" body=<
		(identical protobuf Node response body; hex dump omitted, see the first dump at 12:22:34 above)
	 >
	I0210 12:22:42.038352    5644 type.go:168] "Request Body" body=""
	I0210 12:22:42.038464    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:42.038464    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:42.038464    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:42.038464    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:42.041806    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:22:42.042623    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:42.042623    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:42.042623    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:42.042623    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:42.042623    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:42.042623    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:42 GMT
	I0210 12:22:42.042623    5644 round_trippers.go:587]     Audit-Id: ae0c1ec9-1df4-4a4b-84fc-c2823acc3696
	I0210 12:22:42.043018    5644 type.go:168] "Response Body" body=<
		(identical protobuf Node response body; hex dump omitted, see the first dump at 12:22:34 above)
	 >
	I0210 12:22:42.537193    5644 type.go:168] "Request Body" body=""
	I0210 12:22:42.537933    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:42.537971    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:42.538006    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:42.538024    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:42.541318    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:22:42.541318    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:42.541318    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:42.541318    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:42.541715    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:42.541715    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:42.541715    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:42 GMT
	I0210 12:22:42.541715    5644 round_trippers.go:587]     Audit-Id: 08f4b765-fbfd-4d80-9d64-baad6a133a42
	I0210 12:22:42.542158    5644 type.go:168] "Response Body" body=<
		(identical protobuf Node response body; hex dump omitted, see the first dump at 12:22:34 above)
	 >
	I0210 12:22:43.037603    5644 type.go:168] "Request Body" body=""
	I0210 12:22:43.037603    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:43.037603    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:43.037603    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:43.037603    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:43.040513    5644 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0210 12:22:43.040513    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:43.040513    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:43.040513    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:43.040513    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:43.040513    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:43 GMT
	I0210 12:22:43.040513    5644 round_trippers.go:587]     Audit-Id: 736c74f5-eed9-45f6-80d5-67e0b90e682b
	I0210 12:22:43.040513    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:43.041630    5644 type.go:168] "Response Body" body=<
		(identical protobuf Node response body; hex dump omitted, see the first dump at 12:22:34 above)
	 >
	I0210 12:22:43.537209    5644 type.go:168] "Request Body" body=""
	I0210 12:22:43.537209    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:43.537209    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:43.537209    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:43.537209    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:43.541820    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:22:43.541820    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:43.541930    5644 round_trippers.go:587]     Audit-Id: c80acae4-89e0-41fa-b8c8-fc585ea7400c
	I0210 12:22:43.541930    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:43.541930    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:43.541930    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:43.541930    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:43.541930    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:43 GMT
	I0210 12:22:43.542327    5644 type.go:168] "Response Body" body=<
		(identical protobuf Node response body; hex dump omitted, see the first dump at 12:22:34 above)
	 >
	I0210 12:22:43.542666    5644 node_ready.go:53] node "multinode-032400" has status "Ready":"False"
	I0210 12:22:44.037731    5644 type.go:168] "Request Body" body=""
	I0210 12:22:44.037918    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:44.037918    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:44.037918    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:44.037918    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:44.041613    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:22:44.041613    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:44.041705    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:44 GMT
	I0210 12:22:44.041705    5644 round_trippers.go:587]     Audit-Id: f8d589ff-a4ab-4dec-bebe-017fed68bed8
	I0210 12:22:44.041705    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:44.041705    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:44.041705    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:44.041705    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:44.042101    5644 type.go:168] "Response Body" body=<
		(identical protobuf Node response body; hex dump omitted, see the first dump at 12:22:34 above)
	 >
	I0210 12:22:44.538421    5644 type.go:168] "Request Body" body=""
	I0210 12:22:44.538599    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:44.538599    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:44.538599    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:44.538599    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:44.546276    5644 round_trippers.go:581] Response Status: 200 OK in 7 milliseconds
	I0210 12:22:44.546276    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:44.546276    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:44.546276    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:44.546360    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:44.546360    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:44.546360    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:44 GMT
	I0210 12:22:44.546360    5644 round_trippers.go:587]     Audit-Id: 0a84a50a-bfb8-456e-9a42-5565194a13d2
	I0210 12:22:44.546735    5644 type.go:168] "Response Body" body=<
		(identical protobuf Node response body; hex dump omitted, see the first dump at 12:22:34 above)
	 >
	I0210 12:22:45.037979    5644 type.go:168] "Request Body" body=""
	I0210 12:22:45.037979    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:45.037979    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:45.037979    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:45.037979    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:45.042441    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:22:45.042441    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:45.042441    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:45.042441    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:45.042523    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:45.042523    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:45.042523    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:45 GMT
	I0210 12:22:45.042523    5644 round_trippers.go:587]     Audit-Id: 2876f365-fe91-4c06-a097-143595036448
	I0210 12:22:45.042797    5644 type.go:168] "Response Body" body=<
		(identical protobuf Node response body; hex dump omitted, see the first dump at 12:22:34 above)
	 >
	I0210 12:22:45.537552    5644 type.go:168] "Request Body" body=""
	I0210 12:22:45.537724    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:45.537724    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:45.537724    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:45.537724    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:45.544834    5644 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0210 12:22:45.544900    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:45.544900    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:45 GMT
	I0210 12:22:45.544900    5644 round_trippers.go:587]     Audit-Id: de15bf11-9f33-4b61-8f8e-dae7f65222e1
	I0210 12:22:45.544936    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:45.544936    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:45.544936    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:45.544936    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:45.544936    5644 type.go:168] "Response Body" body=<
		(identical protobuf Node response body; hex dump omitted, see the first dump at 12:22:34 above)
	 >
	I0210 12:22:45.544936    5644 node_ready.go:53] node "multinode-032400" has status "Ready":"False"
	I0210 12:22:46.038592    5644 type.go:168] "Request Body" body=""
	I0210 12:22:46.038686    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:46.038686    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:46.038686    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:46.038758    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:46.045833    5644 round_trippers.go:581] Response Status: 200 OK in 7 milliseconds
	I0210 12:22:46.045891    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:46.045891    5644 round_trippers.go:587]     Audit-Id: 22b83ad1-4655-4357-bfc0-67723344666e
	I0210 12:22:46.045955    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:46.045955    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:46.045955    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:46.045955    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:46.045955    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:46 GMT
	I0210 12:22:46.046075    5644 type.go:168] "Response Body" body=<
		(identical protobuf Node response body; hex dump omitted, see the first dump at 12:22:34 above)
	 >
	I0210 12:22:46.538346    5644 type.go:168] "Request Body" body=""
	I0210 12:22:46.538346    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:46.538346    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:46.538346    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:46.538346    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:46.542590    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:22:46.542590    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:46.542655    5644 round_trippers.go:587]     Audit-Id: 71487f7e-6aee-4ea5-bbfa-03582cfdc264
	I0210 12:22:46.542655    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:46.542655    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:46.542655    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:46.542655    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:46.542655    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:46 GMT
	I0210 12:22:46.543316    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
	I0210 12:22:47.037290    5644 type.go:168] "Request Body" body=""
	I0210 12:22:47.037290    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:47.037290    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:47.037290    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:47.037290    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:47.042399    5644 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 12:22:47.042510    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:47.042510    5644 round_trippers.go:587]     Audit-Id: 34469633-9573-4d58-b5cf-f33bc9097855
	I0210 12:22:47.042510    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:47.042510    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:47.042510    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:47.042510    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:47.042510    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:47 GMT
	I0210 12:22:47.043245    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
	I0210 12:22:47.538311    5644 type.go:168] "Request Body" body=""
	I0210 12:22:47.538311    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:47.538311    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:47.538311    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:47.538311    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:47.544441    5644 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0210 12:22:47.544441    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:47.544441    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:47.544441    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:47 GMT
	I0210 12:22:47.544441    5644 round_trippers.go:587]     Audit-Id: bf7cb7ee-db27-455f-a890-2a4269325633
	I0210 12:22:47.544441    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:47.544441    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:47.544441    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:47.545393    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
	I0210 12:22:47.545393    5644 node_ready.go:53] node "multinode-032400" has status "Ready":"False"
	I0210 12:22:48.038059    5644 type.go:168] "Request Body" body=""
	I0210 12:22:48.038059    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:48.038059    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:48.038059    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:48.038059    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:48.042401    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:22:48.042533    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:48.042533    5644 round_trippers.go:587]     Audit-Id: 3052dc70-0769-40dd-92e4-cf3b0730091c
	I0210 12:22:48.042533    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:48.042533    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:48.042533    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:48.042533    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:48.042533    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:48 GMT
	I0210 12:22:48.042888    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
	I0210 12:22:48.538057    5644 type.go:168] "Request Body" body=""
	I0210 12:22:48.538057    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:48.538057    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:48.538057    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:48.538057    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:48.541126    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:22:48.541126    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:48.542138    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:48 GMT
	I0210 12:22:48.542138    5644 round_trippers.go:587]     Audit-Id: baab82d5-1f0c-47eb-aa1e-60ff3f340388
	I0210 12:22:48.542138    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:48.542138    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:48.542138    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:48.542190    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:48.542527    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
	I0210 12:22:49.037731    5644 type.go:168] "Request Body" body=""
	I0210 12:22:49.037731    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:49.037731    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:49.037731    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:49.037731    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:49.041772    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:22:49.041772    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:49.041772    5644 round_trippers.go:587]     Audit-Id: e583d5ed-8eac-4e4b-8597-f08ef5bea8fb
	I0210 12:22:49.041772    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:49.041772    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:49.041772    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:49.041772    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:49.041772    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:49 GMT
	I0210 12:22:49.041772    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 99 25 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..%.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 39  33 31 38 00 42 08 08 86  |1b262.19318.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22597 chars]
	 >
	I0210 12:22:49.041772    5644 node_ready.go:49] node "multinode-032400" has status "Ready":"True"
	I0210 12:22:49.041772    5644 node_ready.go:38] duration metric: took 45.5046482s for node "multinode-032400" to be "Ready" ...
	I0210 12:22:49.041772    5644 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0210 12:22:49.041772    5644 type.go:204] "Request Body" body=""
	I0210 12:22:49.041772    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods
	I0210 12:22:49.041772    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:49.041772    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:49.041772    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:49.045746    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:22:49.045746    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:49.045746    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:49.045746    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:49.045746    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:49 GMT
	I0210 12:22:49.045746    5644 round_trippers.go:587]     Audit-Id: 51a428a7-6d80-4d58-a291-a6bb3efdf1bd
	I0210 12:22:49.045746    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:49.045746    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:49.047716    5644 type.go:204] "Response Body" body=<
		00000000  6b 38 73 00 0a 0d 0a 02  76 31 12 07 50 6f 64 4c  |k8s.....v1..PodL|
		00000010  69 73 74 12 9d ea 03 0a  0a 0a 00 12 04 31 39 33  |ist..........193|
		00000020  31 1a 00 12 84 29 0a 99  19 0a 18 63 6f 72 65 64  |1....).....cored|
		00000030  6e 73 2d 36 36 38 64 36  62 66 39 62 63 2d 77 38  |ns-668d6bf9bc-w8|
		00000040  72 72 39 12 13 63 6f 72  65 64 6e 73 2d 36 36 38  |rr9..coredns-668|
		00000050  64 36 62 66 39 62 63 2d  1a 0b 6b 75 62 65 2d 73  |d6bf9bc-..kube-s|
		00000060  79 73 74 65 6d 22 00 2a  24 65 34 35 61 33 37 62  |ystem".*$e45a37b|
		00000070  66 2d 65 37 64 61 2d 34  31 32 39 2d 62 62 37 65  |f-e7da-4129-bb7e|
		00000080  2d 38 62 65 37 64 62 65  39 33 65 30 39 32 04 31  |-8be7dbe93e092.1|
		00000090  38 32 30 38 00 42 08 08  92 d4 a7 bd 06 10 00 5a  |8208.B.........Z|
		000000a0  13 0a 07 6b 38 73 2d 61  70 70 12 08 6b 75 62 65  |...k8s-app..kube|
		000000b0  2d 64 6e 73 5a 1f 0a 11  70 6f 64 2d 74 65 6d 70  |-dnsZ...pod-temp|
		000000c0  6c 61 74 65 2d 68 61 73  68 12 0a 36 36 38 64 36  |late-hash..668d [truncated 308964 chars]
	 >
	I0210 12:22:49.047716    5644 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-w8rr9" in "kube-system" namespace to be "Ready" ...
	I0210 12:22:49.048716    5644 type.go:168] "Request Body" body=""
	I0210 12:22:49.048716    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-w8rr9
	I0210 12:22:49.048716    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:49.048716    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:49.048716    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:49.051878    5644 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0210 12:22:49.051878    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:49.051953    5644 round_trippers.go:587]     Audit-Id: f7c36739-a3f4-427e-95d0-313e3de1124f
	I0210 12:22:49.051953    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:49.051953    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:49.051953    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:49.051953    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:49.051953    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:49 GMT
	I0210 12:22:49.051953    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  84 29 0a 99 19 0a 18 63  6f 72 65 64 6e 73 2d 36  |.).....coredns-6|
		00000020  36 38 64 36 62 66 39 62  63 2d 77 38 72 72 39 12  |68d6bf9bc-w8rr9.|
		00000030  13 63 6f 72 65 64 6e 73  2d 36 36 38 64 36 62 66  |.coredns-668d6bf|
		00000040  39 62 63 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |9bc-..kube-syste|
		00000050  6d 22 00 2a 24 65 34 35  61 33 37 62 66 2d 65 37  |m".*$e45a37bf-e7|
		00000060  64 61 2d 34 31 32 39 2d  62 62 37 65 2d 38 62 65  |da-4129-bb7e-8be|
		00000070  37 64 62 65 39 33 65 30  39 32 04 31 38 32 30 38  |7dbe93e092.18208|
		00000080  00 42 08 08 92 d4 a7 bd  06 10 00 5a 13 0a 07 6b  |.B.........Z...k|
		00000090  38 73 2d 61 70 70 12 08  6b 75 62 65 2d 64 6e 73  |8s-app..kube-dns|
		000000a0  5a 1f 0a 11 70 6f 64 2d  74 65 6d 70 6c 61 74 65  |Z...pod-template|
		000000b0  2d 68 61 73 68 12 0a 36  36 38 64 36 62 66 39 62  |-hash..668d6bf9b|
		000000c0  63 6a 53 0a 0a 52 65 70  6c 69 63 61 53 65 74 1a  |cjS..ReplicaSet [truncated 25040 chars]
	 >
	I0210 12:22:49.052662    5644 type.go:168] "Request Body" body=""
	I0210 12:22:49.052882    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:49.052882    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:49.052882    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:49.052882    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:49.055662    5644 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0210 12:22:49.055662    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:49.055662    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:49.055662    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:49 GMT
	I0210 12:22:49.055662    5644 round_trippers.go:587]     Audit-Id: 0eaab311-c4e2-4518-ad27-0efa5f1e56e2
	I0210 12:22:49.055662    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:49.055662    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:49.055662    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:49.055662    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 99 25 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..%.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 39  33 31 38 00 42 08 08 86  |1b262.19318.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22597 chars]
	 >
	I0210 12:22:49.547818    5644 type.go:168] "Request Body" body=""
	I0210 12:22:49.547818    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-w8rr9
	I0210 12:22:49.547818    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:49.547818    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:49.547818    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:49.551088    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:22:49.551949    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:49.551949    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:49.551949    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:49.551949    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:49.551949    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:49 GMT
	I0210 12:22:49.551949    5644 round_trippers.go:587]     Audit-Id: cd180aa0-48b2-4d74-9fdc-ac8e27a3130b
	I0210 12:22:49.551949    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:49.552441    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  84 29 0a 99 19 0a 18 63  6f 72 65 64 6e 73 2d 36  |.).....coredns-6|
		00000020  36 38 64 36 62 66 39 62  63 2d 77 38 72 72 39 12  |68d6bf9bc-w8rr9.|
		00000030  13 63 6f 72 65 64 6e 73  2d 36 36 38 64 36 62 66  |.coredns-668d6bf|
		00000040  39 62 63 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |9bc-..kube-syste|
		00000050  6d 22 00 2a 24 65 34 35  61 33 37 62 66 2d 65 37  |m".*$e45a37bf-e7|
		00000060  64 61 2d 34 31 32 39 2d  62 62 37 65 2d 38 62 65  |da-4129-bb7e-8be|
		00000070  37 64 62 65 39 33 65 30  39 32 04 31 38 32 30 38  |7dbe93e092.18208|
		00000080  00 42 08 08 92 d4 a7 bd  06 10 00 5a 13 0a 07 6b  |.B.........Z...k|
		00000090  38 73 2d 61 70 70 12 08  6b 75 62 65 2d 64 6e 73  |8s-app..kube-dns|
		000000a0  5a 1f 0a 11 70 6f 64 2d  74 65 6d 70 6c 61 74 65  |Z...pod-template|
		000000b0  2d 68 61 73 68 12 0a 36  36 38 64 36 62 66 39 62  |-hash..668d6bf9b|
		000000c0  63 6a 53 0a 0a 52 65 70  6c 69 63 61 53 65 74 1a  |cjS..ReplicaSet [truncated 25040 chars]
	 >
	I0210 12:22:49.552810    5644 type.go:168] "Request Body" body=""
	I0210 12:22:49.552810    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:49.552901    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:49.552950    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:49.553029    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:49.555778    5644 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0210 12:22:49.555868    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:49.555868    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:49.555868    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:49.555868    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:49 GMT
	I0210 12:22:49.555868    5644 round_trippers.go:587]     Audit-Id: e434e79c-7cfd-4bb2-9a40-f56334afd1a2
	I0210 12:22:49.555868    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:49.555868    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:49.556326    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 99 25 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..%.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 39  33 31 38 00 42 08 08 86  |1b262.19318.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22597 chars]
	 >
	I0210 12:22:50.048728    5644 type.go:168] "Request Body" body=""
	I0210 12:22:50.048728    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-w8rr9
	I0210 12:22:50.048728    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:50.048728    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:50.048728    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:50.053262    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:22:50.053327    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:50.053327    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:50.053327    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:50 GMT
	I0210 12:22:50.053327    5644 round_trippers.go:587]     Audit-Id: 50236ee3-2b65-4549-bb8b-bf221251506d
	I0210 12:22:50.053327    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:50.053327    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:50.053327    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:50.054104    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  84 29 0a 99 19 0a 18 63  6f 72 65 64 6e 73 2d 36  |.).....coredns-6|
		00000020  36 38 64 36 62 66 39 62  63 2d 77 38 72 72 39 12  |68d6bf9bc-w8rr9.|
		00000030  13 63 6f 72 65 64 6e 73  2d 36 36 38 64 36 62 66  |.coredns-668d6bf|
		00000040  39 62 63 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |9bc-..kube-syste|
		00000050  6d 22 00 2a 24 65 34 35  61 33 37 62 66 2d 65 37  |m".*$e45a37bf-e7|
		00000060  64 61 2d 34 31 32 39 2d  62 62 37 65 2d 38 62 65  |da-4129-bb7e-8be|
		00000070  37 64 62 65 39 33 65 30  39 32 04 31 38 32 30 38  |7dbe93e092.18208|
		00000080  00 42 08 08 92 d4 a7 bd  06 10 00 5a 13 0a 07 6b  |.B.........Z...k|
		00000090  38 73 2d 61 70 70 12 08  6b 75 62 65 2d 64 6e 73  |8s-app..kube-dns|
		000000a0  5a 1f 0a 11 70 6f 64 2d  74 65 6d 70 6c 61 74 65  |Z...pod-template|
		000000b0  2d 68 61 73 68 12 0a 36  36 38 64 36 62 66 39 62  |-hash..668d6bf9b|
		000000c0  63 6a 53 0a 0a 52 65 70  6c 69 63 61 53 65 74 1a  |cjS..ReplicaSet [truncated 25040 chars]
	 >
	I0210 12:22:50.054401    5644 type.go:168] "Request Body" body=""
	I0210 12:22:50.054401    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:50.054401    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:50.054401    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:50.054401    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:50.056950    5644 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0210 12:22:50.057868    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:50.057868    5644 round_trippers.go:587]     Audit-Id: 325b7450-d66e-40ef-8966-98729cb385ef
	I0210 12:22:50.057868    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:50.057868    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:50.057868    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:50.057868    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:50.057868    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:50 GMT
	I0210 12:22:50.058132    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 99 25 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..%.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 39  33 31 38 00 42 08 08 86  |1b262.19318.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22597 chars]
	 >
	I0210 12:22:50.548469    5644 type.go:168] "Request Body" body=""
	I0210 12:22:50.548469    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-w8rr9
	I0210 12:22:50.548469    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:50.548469    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:50.548469    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:50.551804    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:22:50.552484    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:50.552484    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:50 GMT
	I0210 12:22:50.552484    5644 round_trippers.go:587]     Audit-Id: a6f47eff-209d-4cde-9d78-2ec52d3b93f7
	I0210 12:22:50.552484    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:50.552484    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:50.552484    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:50.552484    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:50.552824    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  84 29 0a 99 19 0a 18 63  6f 72 65 64 6e 73 2d 36  |.).....coredns-6|
		00000020  36 38 64 36 62 66 39 62  63 2d 77 38 72 72 39 12  |68d6bf9bc-w8rr9.|
		00000030  13 63 6f 72 65 64 6e 73  2d 36 36 38 64 36 62 66  |.coredns-668d6bf|
		00000040  39 62 63 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |9bc-..kube-syste|
		00000050  6d 22 00 2a 24 65 34 35  61 33 37 62 66 2d 65 37  |m".*$e45a37bf-e7|
		00000060  64 61 2d 34 31 32 39 2d  62 62 37 65 2d 38 62 65  |da-4129-bb7e-8be|
		00000070  37 64 62 65 39 33 65 30  39 32 04 31 38 32 30 38  |7dbe93e092.18208|
		00000080  00 42 08 08 92 d4 a7 bd  06 10 00 5a 13 0a 07 6b  |.B.........Z...k|
		00000090  38 73 2d 61 70 70 12 08  6b 75 62 65 2d 64 6e 73  |8s-app..kube-dns|
		000000a0  5a 1f 0a 11 70 6f 64 2d  74 65 6d 70 6c 61 74 65  |Z...pod-template|
		000000b0  2d 68 61 73 68 12 0a 36  36 38 64 36 62 66 39 62  |-hash..668d6bf9b|
		000000c0  63 6a 53 0a 0a 52 65 70  6c 69 63 61 53 65 74 1a  |cjS..ReplicaSet [truncated 25040 chars]
	 >
	I0210 12:22:50.553074    5644 type.go:168] "Request Body" body=""
	I0210 12:22:50.553189    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:50.553189    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:50.553237    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:50.553237    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:50.556155    5644 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0210 12:22:50.556155    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:50.556155    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:50.556155    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:50.556259    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:50.556259    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:50 GMT
	I0210 12:22:50.556259    5644 round_trippers.go:587]     Audit-Id: 3ca49763-c220-4134-8a23-03acbdc88f2b
	I0210 12:22:50.556259    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:50.556519    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 99 25 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..%.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 39  33 31 38 00 42 08 08 86  |1b262.19318.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22597 chars]
	 >
	I0210 12:22:51.047943    5644 type.go:168] "Request Body" body=""
	I0210 12:22:51.047943    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-w8rr9
	I0210 12:22:51.047943    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:51.047943    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:51.047943    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:51.052856    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:22:51.052856    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:51.052856    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:51.052856    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:51 GMT
	I0210 12:22:51.052856    5644 round_trippers.go:587]     Audit-Id: ec3e6855-d536-4882-b2f2-b41cefd1df45
	I0210 12:22:51.052856    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:51.052856    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:51.052856    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:51.052856    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  84 29 0a 99 19 0a 18 63  6f 72 65 64 6e 73 2d 36  |.).....coredns-6|
		00000020  36 38 64 36 62 66 39 62  63 2d 77 38 72 72 39 12  |68d6bf9bc-w8rr9.|
		00000030  13 63 6f 72 65 64 6e 73  2d 36 36 38 64 36 62 66  |.coredns-668d6bf|
		00000040  39 62 63 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |9bc-..kube-syste|
		00000050  6d 22 00 2a 24 65 34 35  61 33 37 62 66 2d 65 37  |m".*$e45a37bf-e7|
		00000060  64 61 2d 34 31 32 39 2d  62 62 37 65 2d 38 62 65  |da-4129-bb7e-8be|
		00000070  37 64 62 65 39 33 65 30  39 32 04 31 38 32 30 38  |7dbe93e092.18208|
		00000080  00 42 08 08 92 d4 a7 bd  06 10 00 5a 13 0a 07 6b  |.B.........Z...k|
		00000090  38 73 2d 61 70 70 12 08  6b 75 62 65 2d 64 6e 73  |8s-app..kube-dns|
		000000a0  5a 1f 0a 11 70 6f 64 2d  74 65 6d 70 6c 61 74 65  |Z...pod-template|
		000000b0  2d 68 61 73 68 12 0a 36  36 38 64 36 62 66 39 62  |-hash..668d6bf9b|
		000000c0  63 6a 53 0a 0a 52 65 70  6c 69 63 61 53 65 74 1a  |cjS..ReplicaSet [truncated 25040 chars]
	 >
	I0210 12:22:51.053540    5644 type.go:168] "Request Body" body=""
	I0210 12:22:51.053540    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:51.053540    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:51.053540    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:51.053540    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:51.056322    5644 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0210 12:22:51.056322    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:51.056322    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:51.056322    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:51.056322    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:51.056420    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:51 GMT
	I0210 12:22:51.056420    5644 round_trippers.go:587]     Audit-Id: cf9f53bc-e6a3-4bc6-ae66-08dc5875524c
	I0210 12:22:51.056420    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:51.056488    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 99 25 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..%.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 39  33 31 38 00 42 08 08 86  |1b262.19318.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22597 chars]
	 >
	I0210 12:22:51.056488    5644 pod_ready.go:103] pod "coredns-668d6bf9bc-w8rr9" in "kube-system" namespace has status "Ready":"False"
	I0210 12:22:51.548592    5644 type.go:168] "Request Body" body=""
	I0210 12:22:51.548592    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-w8rr9
	I0210 12:22:51.548592    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:51.548592    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:51.548592    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:51.560667    5644 round_trippers.go:581] Response Status: 200 OK in 12 milliseconds
	I0210 12:22:51.560667    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:51.560667    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:51.560667    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:51 GMT
	I0210 12:22:51.560667    5644 round_trippers.go:587]     Audit-Id: 7cd86dcc-ab75-4a50-8269-3de1253d26c6
	I0210 12:22:51.560667    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:51.560667    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:51.560667    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:51.561741    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  84 29 0a 99 19 0a 18 63  6f 72 65 64 6e 73 2d 36  |.).....coredns-6|
		00000020  36 38 64 36 62 66 39 62  63 2d 77 38 72 72 39 12  |68d6bf9bc-w8rr9.|
		00000030  13 63 6f 72 65 64 6e 73  2d 36 36 38 64 36 62 66  |.coredns-668d6bf|
		00000040  39 62 63 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |9bc-..kube-syste|
		00000050  6d 22 00 2a 24 65 34 35  61 33 37 62 66 2d 65 37  |m".*$e45a37bf-e7|
		00000060  64 61 2d 34 31 32 39 2d  62 62 37 65 2d 38 62 65  |da-4129-bb7e-8be|
		00000070  37 64 62 65 39 33 65 30  39 32 04 31 38 32 30 38  |7dbe93e092.18208|
		00000080  00 42 08 08 92 d4 a7 bd  06 10 00 5a 13 0a 07 6b  |.B.........Z...k|
		00000090  38 73 2d 61 70 70 12 08  6b 75 62 65 2d 64 6e 73  |8s-app..kube-dns|
		000000a0  5a 1f 0a 11 70 6f 64 2d  74 65 6d 70 6c 61 74 65  |Z...pod-template|
		000000b0  2d 68 61 73 68 12 0a 36  36 38 64 36 62 66 39 62  |-hash..668d6bf9b|
		000000c0  63 6a 53 0a 0a 52 65 70  6c 69 63 61 53 65 74 1a  |cjS..ReplicaSet [truncated 25040 chars]
	 >
	I0210 12:22:51.561910    5644 type.go:168] "Request Body" body=""
	I0210 12:22:51.561910    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:51.561910    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:51.561910    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:51.561910    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:51.573072    5644 round_trippers.go:581] Response Status: 200 OK in 11 milliseconds
	I0210 12:22:51.573072    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:51.573072    5644 round_trippers.go:587]     Audit-Id: bc09745c-2163-48c1-8570-c0891179443f
	I0210 12:22:51.573072    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:51.573072    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:51.573072    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:51.573072    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:51.573072    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:51 GMT
	I0210 12:22:51.573641    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d4 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 39  33 35 38 00 42 08 08 86  |1b262.19358.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22276 chars]
	 >
	I0210 12:22:52.048778    5644 type.go:168] "Request Body" body=""
	I0210 12:22:52.048778    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-w8rr9
	I0210 12:22:52.048778    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:52.048778    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:52.048778    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:52.053261    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:22:52.053653    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:52.053653    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:52.053653    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:52.053653    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:52.053653    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:52.053653    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:52 GMT
	I0210 12:22:52.053653    5644 round_trippers.go:587]     Audit-Id: 0d3a5b1e-7779-48b7-8210-3f6bbf191e31
	I0210 12:22:52.054149    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  84 29 0a 99 19 0a 18 63  6f 72 65 64 6e 73 2d 36  |.).....coredns-6|
		00000020  36 38 64 36 62 66 39 62  63 2d 77 38 72 72 39 12  |68d6bf9bc-w8rr9.|
		00000030  13 63 6f 72 65 64 6e 73  2d 36 36 38 64 36 62 66  |.coredns-668d6bf|
		00000040  39 62 63 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |9bc-..kube-syste|
		00000050  6d 22 00 2a 24 65 34 35  61 33 37 62 66 2d 65 37  |m".*$e45a37bf-e7|
		00000060  64 61 2d 34 31 32 39 2d  62 62 37 65 2d 38 62 65  |da-4129-bb7e-8be|
		00000070  37 64 62 65 39 33 65 30  39 32 04 31 38 32 30 38  |7dbe93e092.18208|
		00000080  00 42 08 08 92 d4 a7 bd  06 10 00 5a 13 0a 07 6b  |.B.........Z...k|
		00000090  38 73 2d 61 70 70 12 08  6b 75 62 65 2d 64 6e 73  |8s-app..kube-dns|
		000000a0  5a 1f 0a 11 70 6f 64 2d  74 65 6d 70 6c 61 74 65  |Z...pod-template|
		000000b0  2d 68 61 73 68 12 0a 36  36 38 64 36 62 66 39 62  |-hash..668d6bf9b|
		000000c0  63 6a 53 0a 0a 52 65 70  6c 69 63 61 53 65 74 1a  |cjS..ReplicaSet [truncated 25040 chars]
	 >
	I0210 12:22:52.054383    5644 type.go:168] "Request Body" body=""
	I0210 12:22:52.054460    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:52.054460    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:52.054499    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:52.054499    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:52.057162    5644 round_trippers.go:581] Response Status: 200 OK in 1 milliseconds
	I0210 12:22:52.057195    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:52.057195    5644 round_trippers.go:587]     Audit-Id: bb2559e6-bae0-454d-9d8b-c46a78e46dc0
	I0210 12:22:52.057195    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:52.057195    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:52.057195    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:52.057195    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:52.057265    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:52 GMT
	I0210 12:22:52.057483    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d4 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 39  33 35 38 00 42 08 08 86  |1b262.19358.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22276 chars]
	 >
	I0210 12:22:52.547895    5644 type.go:168] "Request Body" body=""
	I0210 12:22:52.547895    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-w8rr9
	I0210 12:22:52.547895    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:52.547895    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:52.547895    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:52.552832    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:22:52.552923    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:52.552949    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:52 GMT
	I0210 12:22:52.552949    5644 round_trippers.go:587]     Audit-Id: d3f32117-3726-4f3f-a5b4-6d2ead8d7478
	I0210 12:22:52.552949    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:52.552949    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:52.552949    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:52.552949    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:52.553353    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  84 29 0a 99 19 0a 18 63  6f 72 65 64 6e 73 2d 36  |.).....coredns-6|
		00000020  36 38 64 36 62 66 39 62  63 2d 77 38 72 72 39 12  |68d6bf9bc-w8rr9.|
		00000030  13 63 6f 72 65 64 6e 73  2d 36 36 38 64 36 62 66  |.coredns-668d6bf|
		00000040  39 62 63 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |9bc-..kube-syste|
		00000050  6d 22 00 2a 24 65 34 35  61 33 37 62 66 2d 65 37  |m".*$e45a37bf-e7|
		00000060  64 61 2d 34 31 32 39 2d  62 62 37 65 2d 38 62 65  |da-4129-bb7e-8be|
		00000070  37 64 62 65 39 33 65 30  39 32 04 31 38 32 30 38  |7dbe93e092.18208|
		00000080  00 42 08 08 92 d4 a7 bd  06 10 00 5a 13 0a 07 6b  |.B.........Z...k|
		00000090  38 73 2d 61 70 70 12 08  6b 75 62 65 2d 64 6e 73  |8s-app..kube-dns|
		000000a0  5a 1f 0a 11 70 6f 64 2d  74 65 6d 70 6c 61 74 65  |Z...pod-template|
		000000b0  2d 68 61 73 68 12 0a 36  36 38 64 36 62 66 39 62  |-hash..668d6bf9b|
		000000c0  63 6a 53 0a 0a 52 65 70  6c 69 63 61 53 65 74 1a  |cjS..ReplicaSet [truncated 25040 chars]
	 >
	I0210 12:22:52.553582    5644 type.go:168] "Request Body" body=""
	I0210 12:22:52.553656    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:52.553656    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:52.553749    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:52.553769    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:52.556486    5644 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0210 12:22:52.556553    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:52.556553    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:52.556553    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:52 GMT
	I0210 12:22:52.556553    5644 round_trippers.go:587]     Audit-Id: cf82b9bc-3a7e-4212-8ccb-65aba06cd7ef
	I0210 12:22:52.556553    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:52.556553    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:52.556642    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:52.557341    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d4 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 39  33 35 38 00 42 08 08 86  |1b262.19358.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22276 chars]
	 >
	I0210 12:22:53.048002    5644 type.go:168] "Request Body" body=""
	I0210 12:22:53.048002    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-w8rr9
	I0210 12:22:53.048002    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:53.048002    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:53.048002    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:53.053287    5644 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 12:22:53.053287    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:53.053355    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:53.053355    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:53 GMT
	I0210 12:22:53.053355    5644 round_trippers.go:587]     Audit-Id: 941709c7-bd59-4426-b3df-bb6130caf07e
	I0210 12:22:53.053355    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:53.053355    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:53.053355    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:53.053760    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  84 29 0a 99 19 0a 18 63  6f 72 65 64 6e 73 2d 36  |.).....coredns-6|
		00000020  36 38 64 36 62 66 39 62  63 2d 77 38 72 72 39 12  |68d6bf9bc-w8rr9.|
		00000030  13 63 6f 72 65 64 6e 73  2d 36 36 38 64 36 62 66  |.coredns-668d6bf|
		00000040  39 62 63 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |9bc-..kube-syste|
		00000050  6d 22 00 2a 24 65 34 35  61 33 37 62 66 2d 65 37  |m".*$e45a37bf-e7|
		00000060  64 61 2d 34 31 32 39 2d  62 62 37 65 2d 38 62 65  |da-4129-bb7e-8be|
		00000070  37 64 62 65 39 33 65 30  39 32 04 31 38 32 30 38  |7dbe93e092.18208|
		00000080  00 42 08 08 92 d4 a7 bd  06 10 00 5a 13 0a 07 6b  |.B.........Z...k|
		00000090  38 73 2d 61 70 70 12 08  6b 75 62 65 2d 64 6e 73  |8s-app..kube-dns|
		000000a0  5a 1f 0a 11 70 6f 64 2d  74 65 6d 70 6c 61 74 65  |Z...pod-template|
		000000b0  2d 68 61 73 68 12 0a 36  36 38 64 36 62 66 39 62  |-hash..668d6bf9b|
		000000c0  63 6a 53 0a 0a 52 65 70  6c 69 63 61 53 65 74 1a  |cjS..ReplicaSet [truncated 25040 chars]
	 >
	I0210 12:22:53.054025    5644 type.go:168] "Request Body" body=""
	I0210 12:22:53.054061    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:53.054124    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:53.054124    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:53.054124    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:53.056564    5644 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0210 12:22:53.056613    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:53.056613    5644 round_trippers.go:587]     Audit-Id: a5898114-0d8d-4787-9996-90dd194192bd
	I0210 12:22:53.056613    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:53.056613    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:53.056613    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:53.056613    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:53.056613    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:53 GMT
	I0210 12:22:53.056888    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d4 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 39  33 35 38 00 42 08 08 86  |1b262.19358.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22276 chars]
	 >
	I0210 12:22:53.057050    5644 pod_ready.go:103] pod "coredns-668d6bf9bc-w8rr9" in "kube-system" namespace has status "Ready":"False"
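The entries above are one iteration of minikube's readiness poll (pod_ready.go): roughly every 500 ms it GETs the coredns pod, then the multinode-032400 node, and logs "Ready":"False" until the pod's Ready condition flips. A minimal sketch of what such a poll looks like with client-go follows; the function name, the 500 ms interval, and the log line are illustrative assumptions, not minikube's actual implementation.

// Sketch only: a Ready-condition poll like the one logged above, written
// with client-go. waitPodReady and its interval are assumptions for
// illustration, not minikube's pod_ready.go.
package kverifysketch

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func waitPodReady(ctx context.Context, c kubernetes.Interface, ns, name string) error {
	tick := time.NewTicker(500 * time.Millisecond) // matches the ~500 ms cadence in the log
	defer tick.Stop()
	for {
		// Each "GET .../pods/..." line above corresponds to one of these calls.
		pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
				return nil // pod became Ready
			}
		}
		fmt.Printf("pod %q in %q namespace has status \"Ready\":\"False\"\n", name, ns)
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-tick.C:
		}
	}
}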
	I0210 12:22:53.548518    5644 type.go:168] "Request Body" body=""
	I0210 12:22:53.548518    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-w8rr9
	I0210 12:22:53.548518    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:53.548518    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:53.548518    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:53.553136    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:22:53.553136    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:53.553136    5644 round_trippers.go:587]     Audit-Id: a87d1560-5727-485c-aff9-f9fa53960d7e
	I0210 12:22:53.553136    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:53.553136    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:53.553136    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:53.553136    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:53.553136    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:53 GMT
	I0210 12:22:53.553505    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  84 29 0a 99 19 0a 18 63  6f 72 65 64 6e 73 2d 36  |.).....coredns-6|
		00000020  36 38 64 36 62 66 39 62  63 2d 77 38 72 72 39 12  |68d6bf9bc-w8rr9.|
		00000030  13 63 6f 72 65 64 6e 73  2d 36 36 38 64 36 62 66  |.coredns-668d6bf|
		00000040  39 62 63 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |9bc-..kube-syste|
		00000050  6d 22 00 2a 24 65 34 35  61 33 37 62 66 2d 65 37  |m".*$e45a37bf-e7|
		00000060  64 61 2d 34 31 32 39 2d  62 62 37 65 2d 38 62 65  |da-4129-bb7e-8be|
		00000070  37 64 62 65 39 33 65 30  39 32 04 31 38 32 30 38  |7dbe93e092.18208|
		00000080  00 42 08 08 92 d4 a7 bd  06 10 00 5a 13 0a 07 6b  |.B.........Z...k|
		00000090  38 73 2d 61 70 70 12 08  6b 75 62 65 2d 64 6e 73  |8s-app..kube-dns|
		000000a0  5a 1f 0a 11 70 6f 64 2d  74 65 6d 70 6c 61 74 65  |Z...pod-template|
		000000b0  2d 68 61 73 68 12 0a 36  36 38 64 36 62 66 39 62  |-hash..668d6bf9b|
		000000c0  63 6a 53 0a 0a 52 65 70  6c 69 63 61 53 65 74 1a  |cjS..ReplicaSet [truncated 25040 chars]
	 >
	I0210 12:22:53.554943    5644 type.go:168] "Request Body" body=""
	I0210 12:22:53.555356    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:53.555356    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:53.555356    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:53.555356    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:53.558674    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:22:53.558747    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:53.558747    5644 round_trippers.go:587]     Audit-Id: 18dfbd31-50ca-4d25-b94b-6a180dbb33dc
	I0210 12:22:53.558747    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:53.558747    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:53.558828    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:53.558828    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:53.558828    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:53 GMT
	I0210 12:22:53.559705    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d4 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 39  33 35 38 00 42 08 08 86  |1b262.19358.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22276 chars]
	 >
	I0210 12:22:54.048397    5644 type.go:168] "Request Body" body=""
	I0210 12:22:54.048397    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-w8rr9
	I0210 12:22:54.048397    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:54.048397    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:54.048397    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:54.052724    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:22:54.052724    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:54.052724    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:54.052724    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:54 GMT
	I0210 12:22:54.052724    5644 round_trippers.go:587]     Audit-Id: 11702046-585b-47c8-a970-b22b1af77eda
	I0210 12:22:54.052724    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:54.052724    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:54.052724    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:54.053353    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  84 29 0a 99 19 0a 18 63  6f 72 65 64 6e 73 2d 36  |.).....coredns-6|
		00000020  36 38 64 36 62 66 39 62  63 2d 77 38 72 72 39 12  |68d6bf9bc-w8rr9.|
		00000030  13 63 6f 72 65 64 6e 73  2d 36 36 38 64 36 62 66  |.coredns-668d6bf|
		00000040  39 62 63 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |9bc-..kube-syste|
		00000050  6d 22 00 2a 24 65 34 35  61 33 37 62 66 2d 65 37  |m".*$e45a37bf-e7|
		00000060  64 61 2d 34 31 32 39 2d  62 62 37 65 2d 38 62 65  |da-4129-bb7e-8be|
		00000070  37 64 62 65 39 33 65 30  39 32 04 31 38 32 30 38  |7dbe93e092.18208|
		00000080  00 42 08 08 92 d4 a7 bd  06 10 00 5a 13 0a 07 6b  |.B.........Z...k|
		00000090  38 73 2d 61 70 70 12 08  6b 75 62 65 2d 64 6e 73  |8s-app..kube-dns|
		000000a0  5a 1f 0a 11 70 6f 64 2d  74 65 6d 70 6c 61 74 65  |Z...pod-template|
		000000b0  2d 68 61 73 68 12 0a 36  36 38 64 36 62 66 39 62  |-hash..668d6bf9b|
		000000c0  63 6a 53 0a 0a 52 65 70  6c 69 63 61 53 65 74 1a  |cjS..ReplicaSet [truncated 25040 chars]
	 >
	I0210 12:22:54.053629    5644 type.go:168] "Request Body" body=""
	I0210 12:22:54.053712    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:54.053742    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:54.053742    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:54.053742    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:54.058130    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:22:54.058130    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:54.058130    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:54.058130    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:54.058130    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:54.058130    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:54.058130    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:54 GMT
	I0210 12:22:54.058130    5644 round_trippers.go:587]     Audit-Id: a7bbd6d3-cc44-4be7-a707-6c9887e98b2a
	I0210 12:22:54.058130    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d4 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 39  33 35 38 00 42 08 08 86  |1b262.19358.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22276 chars]
	 >
	I0210 12:22:54.548128    5644 type.go:168] "Request Body" body=""
	I0210 12:22:54.548128    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-w8rr9
	I0210 12:22:54.548128    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:54.548128    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:54.548128    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:54.553247    5644 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 12:22:54.553331    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:54.553331    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:54.553331    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:54.553331    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:54.553331    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:54.553331    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:54 GMT
	I0210 12:22:54.553331    5644 round_trippers.go:587]     Audit-Id: ff8245bf-8b8e-4b2f-8f9b-7acfc27ef03c
	I0210 12:22:54.553954    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  84 29 0a 99 19 0a 18 63  6f 72 65 64 6e 73 2d 36  |.).....coredns-6|
		00000020  36 38 64 36 62 66 39 62  63 2d 77 38 72 72 39 12  |68d6bf9bc-w8rr9.|
		00000030  13 63 6f 72 65 64 6e 73  2d 36 36 38 64 36 62 66  |.coredns-668d6bf|
		00000040  39 62 63 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |9bc-..kube-syste|
		00000050  6d 22 00 2a 24 65 34 35  61 33 37 62 66 2d 65 37  |m".*$e45a37bf-e7|
		00000060  64 61 2d 34 31 32 39 2d  62 62 37 65 2d 38 62 65  |da-4129-bb7e-8be|
		00000070  37 64 62 65 39 33 65 30  39 32 04 31 38 32 30 38  |7dbe93e092.18208|
		00000080  00 42 08 08 92 d4 a7 bd  06 10 00 5a 13 0a 07 6b  |.B.........Z...k|
		00000090  38 73 2d 61 70 70 12 08  6b 75 62 65 2d 64 6e 73  |8s-app..kube-dns|
		000000a0  5a 1f 0a 11 70 6f 64 2d  74 65 6d 70 6c 61 74 65  |Z...pod-template|
		000000b0  2d 68 61 73 68 12 0a 36  36 38 64 36 62 66 39 62  |-hash..668d6bf9b|
		000000c0  63 6a 53 0a 0a 52 65 70  6c 69 63 61 53 65 74 1a  |cjS..ReplicaSet [truncated 25040 chars]
	 >
	I0210 12:22:54.554252    5644 type.go:168] "Request Body" body=""
	I0210 12:22:54.554252    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:54.554252    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:54.554252    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:54.554375    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:54.556324    5644 round_trippers.go:581] Response Status: 200 OK in 1 milliseconds
	I0210 12:22:54.556324    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:54.556324    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:54.556324    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:54.556324    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:54.556324    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:54 GMT
	I0210 12:22:54.556324    5644 round_trippers.go:587]     Audit-Id: c422f000-6efc-4b95-9a2f-568d736b56a3
	I0210 12:22:54.556324    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:54.557329    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d4 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 39  33 35 38 00 42 08 08 86  |1b262.19358.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22276 chars]
	 >
	I0210 12:22:55.048165    5644 type.go:168] "Request Body" body=""
	I0210 12:22:55.048721    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-w8rr9
	I0210 12:22:55.048721    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:55.048721    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:55.048721    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:55.052827    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:22:55.052827    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:55.052827    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:55.052827    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:55 GMT
	I0210 12:22:55.052827    5644 round_trippers.go:587]     Audit-Id: 6a173839-7b42-4db0-8d04-966ee075559b
	I0210 12:22:55.052827    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:55.052827    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:55.052827    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:55.052827    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  84 29 0a 99 19 0a 18 63  6f 72 65 64 6e 73 2d 36  |.).....coredns-6|
		00000020  36 38 64 36 62 66 39 62  63 2d 77 38 72 72 39 12  |68d6bf9bc-w8rr9.|
		00000030  13 63 6f 72 65 64 6e 73  2d 36 36 38 64 36 62 66  |.coredns-668d6bf|
		00000040  39 62 63 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |9bc-..kube-syste|
		00000050  6d 22 00 2a 24 65 34 35  61 33 37 62 66 2d 65 37  |m".*$e45a37bf-e7|
		00000060  64 61 2d 34 31 32 39 2d  62 62 37 65 2d 38 62 65  |da-4129-bb7e-8be|
		00000070  37 64 62 65 39 33 65 30  39 32 04 31 38 32 30 38  |7dbe93e092.18208|
		00000080  00 42 08 08 92 d4 a7 bd  06 10 00 5a 13 0a 07 6b  |.B.........Z...k|
		00000090  38 73 2d 61 70 70 12 08  6b 75 62 65 2d 64 6e 73  |8s-app..kube-dns|
		000000a0  5a 1f 0a 11 70 6f 64 2d  74 65 6d 70 6c 61 74 65  |Z...pod-template|
		000000b0  2d 68 61 73 68 12 0a 36  36 38 64 36 62 66 39 62  |-hash..668d6bf9b|
		000000c0  63 6a 53 0a 0a 52 65 70  6c 69 63 61 53 65 74 1a  |cjS..ReplicaSet [truncated 25040 chars]
	 >
	I0210 12:22:55.053487    5644 type.go:168] "Request Body" body=""
	I0210 12:22:55.053487    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:55.053487    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:55.053487    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:55.053487    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:55.056758    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:22:55.056838    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:55.056838    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:55.056838    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:55.056838    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:55.056912    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:55 GMT
	I0210 12:22:55.056912    5644 round_trippers.go:587]     Audit-Id: 0e7a36b8-eb8c-476a-b48d-32cc4fa00c25
	I0210 12:22:55.056912    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:55.056992    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d4 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 39  33 35 38 00 42 08 08 86  |1b262.19358.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22276 chars]
	 >
	I0210 12:22:55.548564    5644 type.go:168] "Request Body" body=""
	I0210 12:22:55.548705    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-w8rr9
	I0210 12:22:55.548705    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:55.548705    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:55.548705    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:55.551788    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:22:55.551788    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:55.551788    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:55 GMT
	I0210 12:22:55.551788    5644 round_trippers.go:587]     Audit-Id: 72f0ef6b-d7e6-4d60-94d6-9bd3b5e71b08
	I0210 12:22:55.551788    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:55.551920    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:55.551920    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:55.551920    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:55.552241    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  84 29 0a 99 19 0a 18 63  6f 72 65 64 6e 73 2d 36  |.).....coredns-6|
		00000020  36 38 64 36 62 66 39 62  63 2d 77 38 72 72 39 12  |68d6bf9bc-w8rr9.|
		00000030  13 63 6f 72 65 64 6e 73  2d 36 36 38 64 36 62 66  |.coredns-668d6bf|
		00000040  39 62 63 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |9bc-..kube-syste|
		00000050  6d 22 00 2a 24 65 34 35  61 33 37 62 66 2d 65 37  |m".*$e45a37bf-e7|
		00000060  64 61 2d 34 31 32 39 2d  62 62 37 65 2d 38 62 65  |da-4129-bb7e-8be|
		00000070  37 64 62 65 39 33 65 30  39 32 04 31 38 32 30 38  |7dbe93e092.18208|
		00000080  00 42 08 08 92 d4 a7 bd  06 10 00 5a 13 0a 07 6b  |.B.........Z...k|
		00000090  38 73 2d 61 70 70 12 08  6b 75 62 65 2d 64 6e 73  |8s-app..kube-dns|
		000000a0  5a 1f 0a 11 70 6f 64 2d  74 65 6d 70 6c 61 74 65  |Z...pod-template|
		000000b0  2d 68 61 73 68 12 0a 36  36 38 64 36 62 66 39 62  |-hash..668d6bf9b|
		000000c0  63 6a 53 0a 0a 52 65 70  6c 69 63 61 53 65 74 1a  |cjS..ReplicaSet [truncated 25040 chars]
	 >
	I0210 12:22:55.552402    5644 type.go:168] "Request Body" body=""
	I0210 12:22:55.552533    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:55.552533    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:55.552533    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:55.552533    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:55.555617    5644 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0210 12:22:55.555617    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:55.555617    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:55.555617    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:55 GMT
	I0210 12:22:55.555617    5644 round_trippers.go:587]     Audit-Id: 2b0c960f-80d1-4151-97fa-06ee88e575cd
	I0210 12:22:55.555617    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:55.555617    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:55.555617    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:55.556004    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d4 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 39  33 35 38 00 42 08 08 86  |1b262.19358.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22276 chars]
	 >
	I0210 12:22:55.556214    5644 pod_ready.go:103] pod "coredns-668d6bf9bc-w8rr9" in "kube-system" namespace has status "Ready":"False"
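Every request in this loop sends Accept: application/vnd.kubernetes.protobuf,application/json, and the apiserver answers with Content-Type: application/vnd.kubernetes.protobuf; that is why the "Response Body" entries are hex dumps rather than JSON. The first four bytes of each dump, 6b 38 73 00 ("k8s" plus a zero byte), are the magic prefix of the Kubernetes protobuf envelope (a runtime.Unknown wrapper around the encoded Pod or Node). A small self-contained sketch of detecting that prefix; isK8sProtobuf is an illustrative helper, not part of any k8s.io library.

// Sketch: recognizing the protobuf envelope seen in the dumps above.
// isK8sProtobuf is an illustrative helper (an assumption, not a real API).
package main

import (
	"bytes"
	"fmt"
)

// The apiserver prefixes protobuf-encoded objects with "k8s" + 0x00,
// exactly the 6b 38 73 00 bytes at offset 0 of every dump above.
var k8sProtoMagic = []byte{'k', '8', 's', 0x00}

func isK8sProtobuf(body []byte) bool {
	return bytes.HasPrefix(body, k8sProtoMagic)
}

func main() {
	fmt.Println(isK8sProtobuf([]byte("k8s\x00..."))) // true: protobuf envelope
	fmt.Println(isK8sProtobuf([]byte(`{"kind":`)))   // false: JSON body
}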
	I0210 12:22:56.048268    5644 type.go:168] "Request Body" body=""
	I0210 12:22:56.048493    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-w8rr9
	I0210 12:22:56.048493    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:56.048493    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:56.048548    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:56.052471    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:22:56.052471    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:56.052471    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:56.052471    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:56.052471    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:56.052471    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:56.052471    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:56 GMT
	I0210 12:22:56.052471    5644 round_trippers.go:587]     Audit-Id: e4d04e08-f464-48bd-9184-fc8d9a02e2e0
	I0210 12:22:56.053010    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  84 29 0a 99 19 0a 18 63  6f 72 65 64 6e 73 2d 36  |.).....coredns-6|
		00000020  36 38 64 36 62 66 39 62  63 2d 77 38 72 72 39 12  |68d6bf9bc-w8rr9.|
		00000030  13 63 6f 72 65 64 6e 73  2d 36 36 38 64 36 62 66  |.coredns-668d6bf|
		00000040  39 62 63 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |9bc-..kube-syste|
		00000050  6d 22 00 2a 24 65 34 35  61 33 37 62 66 2d 65 37  |m".*$e45a37bf-e7|
		00000060  64 61 2d 34 31 32 39 2d  62 62 37 65 2d 38 62 65  |da-4129-bb7e-8be|
		00000070  37 64 62 65 39 33 65 30  39 32 04 31 38 32 30 38  |7dbe93e092.18208|
		00000080  00 42 08 08 92 d4 a7 bd  06 10 00 5a 13 0a 07 6b  |.B.........Z...k|
		00000090  38 73 2d 61 70 70 12 08  6b 75 62 65 2d 64 6e 73  |8s-app..kube-dns|
		000000a0  5a 1f 0a 11 70 6f 64 2d  74 65 6d 70 6c 61 74 65  |Z...pod-template|
		000000b0  2d 68 61 73 68 12 0a 36  36 38 64 36 62 66 39 62  |-hash..668d6bf9b|
		000000c0  63 6a 53 0a 0a 52 65 70  6c 69 63 61 53 65 74 1a  |cjS..ReplicaSet [truncated 25040 chars]
	 >
	I0210 12:22:56.053305    5644 type.go:168] "Request Body" body=""
	I0210 12:22:56.053408    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:56.053408    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:56.053510    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:56.053510    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:56.059025    5644 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 12:22:56.059025    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:56.059025    5644 round_trippers.go:587]     Audit-Id: 5fecaf1f-5ea9-4e44-a5c9-d4e55218729b
	I0210 12:22:56.059025    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:56.059025    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:56.059025    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:56.059025    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:56.059025    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:56 GMT
	I0210 12:22:56.059025    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d4 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 39  33 35 38 00 42 08 08 86  |1b262.19358.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22276 chars]
	 >
	I0210 12:22:56.548852    5644 type.go:168] "Request Body" body=""
	I0210 12:22:56.548852    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-w8rr9
	I0210 12:22:56.548852    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:56.548852    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:56.548852    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:56.552997    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:22:56.553378    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:56.553378    5644 round_trippers.go:587]     Audit-Id: f8eebaa7-b4cd-4053-a554-3407b8747904
	I0210 12:22:56.553378    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:56.553378    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:56.553378    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:56.553462    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:56.553462    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:56 GMT
	I0210 12:22:56.553462    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  84 29 0a 99 19 0a 18 63  6f 72 65 64 6e 73 2d 36  |.).....coredns-6|
		00000020  36 38 64 36 62 66 39 62  63 2d 77 38 72 72 39 12  |68d6bf9bc-w8rr9.|
		00000030  13 63 6f 72 65 64 6e 73  2d 36 36 38 64 36 62 66  |.coredns-668d6bf|
		00000040  39 62 63 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |9bc-..kube-syste|
		00000050  6d 22 00 2a 24 65 34 35  61 33 37 62 66 2d 65 37  |m".*$e45a37bf-e7|
		00000060  64 61 2d 34 31 32 39 2d  62 62 37 65 2d 38 62 65  |da-4129-bb7e-8be|
		00000070  37 64 62 65 39 33 65 30  39 32 04 31 38 32 30 38  |7dbe93e092.18208|
		00000080  00 42 08 08 92 d4 a7 bd  06 10 00 5a 13 0a 07 6b  |.B.........Z...k|
		00000090  38 73 2d 61 70 70 12 08  6b 75 62 65 2d 64 6e 73  |8s-app..kube-dns|
		000000a0  5a 1f 0a 11 70 6f 64 2d  74 65 6d 70 6c 61 74 65  |Z...pod-template|
		000000b0  2d 68 61 73 68 12 0a 36  36 38 64 36 62 66 39 62  |-hash..668d6bf9b|
		000000c0  63 6a 53 0a 0a 52 65 70  6c 69 63 61 53 65 74 1a  |cjS..ReplicaSet [truncated 25040 chars]
	 >
	I0210 12:22:56.553994    5644 type.go:168] "Request Body" body=""
	I0210 12:22:56.554090    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:56.554090    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:56.554090    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:56.554173    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:56.556987    5644 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0210 12:22:56.557178    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:56.557194    5644 round_trippers.go:587]     Audit-Id: 76d2bd6d-dff9-4b85-a471-91e958692745
	I0210 12:22:56.557194    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:56.557194    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:56.557245    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:56.557287    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:56.557287    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:56 GMT
	I0210 12:22:56.557780    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d4 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 39  33 35 38 00 42 08 08 86  |1b262.19358.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22276 chars]
	 >
	I0210 12:22:57.048561    5644 type.go:168] "Request Body" body=""
	I0210 12:22:57.048561    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-w8rr9
	I0210 12:22:57.048561    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:57.048561    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:57.048561    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:57.053397    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:22:57.053471    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:57.053471    5644 round_trippers.go:587]     Audit-Id: c9ee8187-9b90-4249-9930-9dc98bee14f3
	I0210 12:22:57.053471    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:57.053471    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:57.053471    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:57.053471    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:57.053471    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:57 GMT
	I0210 12:22:57.053770    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  84 29 0a 99 19 0a 18 63  6f 72 65 64 6e 73 2d 36  |.).....coredns-6|
		00000020  36 38 64 36 62 66 39 62  63 2d 77 38 72 72 39 12  |68d6bf9bc-w8rr9.|
		00000030  13 63 6f 72 65 64 6e 73  2d 36 36 38 64 36 62 66  |.coredns-668d6bf|
		00000040  39 62 63 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |9bc-..kube-syste|
		00000050  6d 22 00 2a 24 65 34 35  61 33 37 62 66 2d 65 37  |m".*$e45a37bf-e7|
		00000060  64 61 2d 34 31 32 39 2d  62 62 37 65 2d 38 62 65  |da-4129-bb7e-8be|
		00000070  37 64 62 65 39 33 65 30  39 32 04 31 38 32 30 38  |7dbe93e092.18208|
		00000080  00 42 08 08 92 d4 a7 bd  06 10 00 5a 13 0a 07 6b  |.B.........Z...k|
		00000090  38 73 2d 61 70 70 12 08  6b 75 62 65 2d 64 6e 73  |8s-app..kube-dns|
		000000a0  5a 1f 0a 11 70 6f 64 2d  74 65 6d 70 6c 61 74 65  |Z...pod-template|
		000000b0  2d 68 61 73 68 12 0a 36  36 38 64 36 62 66 39 62  |-hash..668d6bf9b|
		000000c0  63 6a 53 0a 0a 52 65 70  6c 69 63 61 53 65 74 1a  |cjS..ReplicaSet [truncated 25040 chars]
	 >
	I0210 12:22:57.053770    5644 type.go:168] "Request Body" body=""
	I0210 12:22:57.053770    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:57.053770    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:57.053770    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:57.053770    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:57.060082    5644 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0210 12:22:57.060082    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:57.060082    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:57.060082    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:57.060082    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:57.060082    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:57 GMT
	I0210 12:22:57.060082    5644 round_trippers.go:587]     Audit-Id: 7b658c2b-59d9-49a4-b489-11e2acd32642
	I0210 12:22:57.060082    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:57.060627    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d4 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 39  33 35 38 00 42 08 08 86  |1b262.19358.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22276 chars]
	 >
	I0210 12:22:57.548244    5644 type.go:168] "Request Body" body=""
	I0210 12:22:57.548829    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-w8rr9
	I0210 12:22:57.548829    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:57.548892    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:57.548892    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:57.551234    5644 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0210 12:22:57.552298    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:57.552298    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:57 GMT
	I0210 12:22:57.552298    5644 round_trippers.go:587]     Audit-Id: 89a6055e-8009-46d9-b659-55794830e9c5
	I0210 12:22:57.552298    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:57.552298    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:57.552298    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:57.552298    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:57.552638    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  84 29 0a 99 19 0a 18 63  6f 72 65 64 6e 73 2d 36  |.).....coredns-6|
		00000020  36 38 64 36 62 66 39 62  63 2d 77 38 72 72 39 12  |68d6bf9bc-w8rr9.|
		00000030  13 63 6f 72 65 64 6e 73  2d 36 36 38 64 36 62 66  |.coredns-668d6bf|
		00000040  39 62 63 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |9bc-..kube-syste|
		00000050  6d 22 00 2a 24 65 34 35  61 33 37 62 66 2d 65 37  |m".*$e45a37bf-e7|
		00000060  64 61 2d 34 31 32 39 2d  62 62 37 65 2d 38 62 65  |da-4129-bb7e-8be|
		00000070  37 64 62 65 39 33 65 30  39 32 04 31 38 32 30 38  |7dbe93e092.18208|
		00000080  00 42 08 08 92 d4 a7 bd  06 10 00 5a 13 0a 07 6b  |.B.........Z...k|
		00000090  38 73 2d 61 70 70 12 08  6b 75 62 65 2d 64 6e 73  |8s-app..kube-dns|
		000000a0  5a 1f 0a 11 70 6f 64 2d  74 65 6d 70 6c 61 74 65  |Z...pod-template|
		000000b0  2d 68 61 73 68 12 0a 36  36 38 64 36 62 66 39 62  |-hash..668d6bf9b|
		000000c0  63 6a 53 0a 0a 52 65 70  6c 69 63 61 53 65 74 1a  |cjS..ReplicaSet [truncated 25040 chars]
	 >
	I0210 12:22:57.552890    5644 type.go:168] "Request Body" body=""
	I0210 12:22:57.552953    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:57.552953    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:57.553021    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:57.553021    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:57.555898    5644 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0210 12:22:57.555993    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:57.555993    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:57.555993    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:57.555993    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:57.555993    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:57.555993    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:57 GMT
	I0210 12:22:57.555993    5644 round_trippers.go:587]     Audit-Id: e9d4323b-d339-4370-88b0-a040f9a9903d
	I0210 12:22:57.557164    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d4 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 39  33 35 38 00 42 08 08 86  |1b262.19358.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22276 chars]
	 >
	I0210 12:22:57.557164    5644 pod_ready.go:103] pod "coredns-668d6bf9bc-w8rr9" in "kube-system" namespace has status "Ready":"False"
	I0210 12:22:58.047965    5644 type.go:168] "Request Body" body=""
	I0210 12:22:58.047965    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-w8rr9
	I0210 12:22:58.047965    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:58.047965    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:58.047965    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:58.051892    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:22:58.051892    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:58.051892    5644 round_trippers.go:587]     Audit-Id: ec6cabde-3121-4a45-8a7e-6c91844d80d4
	I0210 12:22:58.051892    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:58.051892    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:58.051892    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:58.051892    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:58.051892    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:58 GMT
	I0210 12:22:58.052626    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  84 29 0a 99 19 0a 18 63  6f 72 65 64 6e 73 2d 36  |.).....coredns-6|
		00000020  36 38 64 36 62 66 39 62  63 2d 77 38 72 72 39 12  |68d6bf9bc-w8rr9.|
		00000030  13 63 6f 72 65 64 6e 73  2d 36 36 38 64 36 62 66  |.coredns-668d6bf|
		00000040  39 62 63 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |9bc-..kube-syste|
		00000050  6d 22 00 2a 24 65 34 35  61 33 37 62 66 2d 65 37  |m".*$e45a37bf-e7|
		00000060  64 61 2d 34 31 32 39 2d  62 62 37 65 2d 38 62 65  |da-4129-bb7e-8be|
		00000070  37 64 62 65 39 33 65 30  39 32 04 31 38 32 30 38  |7dbe93e092.18208|
		00000080  00 42 08 08 92 d4 a7 bd  06 10 00 5a 13 0a 07 6b  |.B.........Z...k|
		00000090  38 73 2d 61 70 70 12 08  6b 75 62 65 2d 64 6e 73  |8s-app..kube-dns|
		000000a0  5a 1f 0a 11 70 6f 64 2d  74 65 6d 70 6c 61 74 65  |Z...pod-template|
		000000b0  2d 68 61 73 68 12 0a 36  36 38 64 36 62 66 39 62  |-hash..668d6bf9b|
		000000c0  63 6a 53 0a 0a 52 65 70  6c 69 63 61 53 65 74 1a  |cjS..ReplicaSet [truncated 25040 chars]
	 >
	I0210 12:22:58.052828    5644 type.go:168] "Request Body" body=""
	I0210 12:22:58.052828    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:58.052828    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:58.052828    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:58.052828    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:58.056507    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:22:58.056507    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:58.056606    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:58.056606    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:58 GMT
	I0210 12:22:58.056606    5644 round_trippers.go:587]     Audit-Id: 042a794d-a59f-43c1-ad36-0eab8040b31e
	I0210 12:22:58.056606    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:58.056606    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:58.056606    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:58.056881    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d4 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 39  33 35 38 00 42 08 08 86  |1b262.19358.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22276 chars]
	 >
	I0210 12:22:58.548014    5644 type.go:168] "Request Body" body=""
	I0210 12:22:58.548014    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-w8rr9
	I0210 12:22:58.548014    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:58.548014    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:58.548014    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:58.551055    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:22:58.551914    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:58.551914    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:58 GMT
	I0210 12:22:58.551914    5644 round_trippers.go:587]     Audit-Id: 15462687-af7f-4aca-a9f2-e67b0a0d33b3
	I0210 12:22:58.551914    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:58.551914    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:58.551914    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:58.551914    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:58.552240    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  84 29 0a 99 19 0a 18 63  6f 72 65 64 6e 73 2d 36  |.).....coredns-6|
		00000020  36 38 64 36 62 66 39 62  63 2d 77 38 72 72 39 12  |68d6bf9bc-w8rr9.|
		00000030  13 63 6f 72 65 64 6e 73  2d 36 36 38 64 36 62 66  |.coredns-668d6bf|
		00000040  39 62 63 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |9bc-..kube-syste|
		00000050  6d 22 00 2a 24 65 34 35  61 33 37 62 66 2d 65 37  |m".*$e45a37bf-e7|
		00000060  64 61 2d 34 31 32 39 2d  62 62 37 65 2d 38 62 65  |da-4129-bb7e-8be|
		00000070  37 64 62 65 39 33 65 30  39 32 04 31 38 32 30 38  |7dbe93e092.18208|
		00000080  00 42 08 08 92 d4 a7 bd  06 10 00 5a 13 0a 07 6b  |.B.........Z...k|
		00000090  38 73 2d 61 70 70 12 08  6b 75 62 65 2d 64 6e 73  |8s-app..kube-dns|
		000000a0  5a 1f 0a 11 70 6f 64 2d  74 65 6d 70 6c 61 74 65  |Z...pod-template|
		000000b0  2d 68 61 73 68 12 0a 36  36 38 64 36 62 66 39 62  |-hash..668d6bf9b|
		000000c0  63 6a 53 0a 0a 52 65 70  6c 69 63 61 53 65 74 1a  |cjS..ReplicaSet [truncated 25040 chars]
	 >
	I0210 12:22:58.552500    5644 type.go:168] "Request Body" body=""
	I0210 12:22:58.552576    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:58.552576    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:58.552576    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:58.552576    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:58.554947    5644 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0210 12:22:58.554947    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:58.554947    5644 round_trippers.go:587]     Audit-Id: 973ac095-f1db-4267-b09b-3d3eb67ca1d1
	I0210 12:22:58.554947    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:58.555803    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:58.555803    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:58.555803    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:58.555803    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:58 GMT
	I0210 12:22:58.556127    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d4 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 39  33 35 38 00 42 08 08 86  |1b262.19358.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22276 chars]
	 >
	I0210 12:22:59.048191    5644 type.go:168] "Request Body" body=""
	I0210 12:22:59.048191    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-w8rr9
	I0210 12:22:59.048191    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:59.048191    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:59.048191    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:59.057777    5644 round_trippers.go:581] Response Status: 200 OK in 8 milliseconds
	I0210 12:22:59.057777    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:59.057777    5644 round_trippers.go:587]     Audit-Id: 41d94b59-f635-4e8f-9d42-c4f47546a84c
	I0210 12:22:59.057777    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:59.057865    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:59.057865    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:59.057865    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:59.057865    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:59 GMT
	I0210 12:22:59.057981    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  84 29 0a 99 19 0a 18 63  6f 72 65 64 6e 73 2d 36  |.).....coredns-6|
		00000020  36 38 64 36 62 66 39 62  63 2d 77 38 72 72 39 12  |68d6bf9bc-w8rr9.|
		00000030  13 63 6f 72 65 64 6e 73  2d 36 36 38 64 36 62 66  |.coredns-668d6bf|
		00000040  39 62 63 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |9bc-..kube-syste|
		00000050  6d 22 00 2a 24 65 34 35  61 33 37 62 66 2d 65 37  |m".*$e45a37bf-e7|
		00000060  64 61 2d 34 31 32 39 2d  62 62 37 65 2d 38 62 65  |da-4129-bb7e-8be|
		00000070  37 64 62 65 39 33 65 30  39 32 04 31 38 32 30 38  |7dbe93e092.18208|
		00000080  00 42 08 08 92 d4 a7 bd  06 10 00 5a 13 0a 07 6b  |.B.........Z...k|
		00000090  38 73 2d 61 70 70 12 08  6b 75 62 65 2d 64 6e 73  |8s-app..kube-dns|
		000000a0  5a 1f 0a 11 70 6f 64 2d  74 65 6d 70 6c 61 74 65  |Z...pod-template|
		000000b0  2d 68 61 73 68 12 0a 36  36 38 64 36 62 66 39 62  |-hash..668d6bf9b|
		000000c0  63 6a 53 0a 0a 52 65 70  6c 69 63 61 53 65 74 1a  |cjS..ReplicaSet [truncated 25040 chars]
	 >
	I0210 12:22:59.057981    5644 type.go:168] "Request Body" body=""
	I0210 12:22:59.057981    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:59.057981    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:59.057981    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:59.058506    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:59.065872    5644 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0210 12:22:59.065872    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:59.065872    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:59.065872    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:59.065872    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:59 GMT
	I0210 12:22:59.065872    5644 round_trippers.go:587]     Audit-Id: 3ff03e8f-d0eb-4ddd-a3d9-0b436919fddd
	I0210 12:22:59.065872    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:59.065872    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:59.065872    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d4 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 39  33 35 38 00 42 08 08 86  |1b262.19358.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22276 chars]
	 >
	I0210 12:22:59.548084    5644 type.go:168] "Request Body" body=""
	I0210 12:22:59.548084    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-w8rr9
	I0210 12:22:59.548084    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:59.548084    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:59.548084    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:59.552528    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:22:59.552528    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:59.552528    5644 round_trippers.go:587]     Audit-Id: 85b3ded3-e422-459c-8291-c5bc17372e25
	I0210 12:22:59.552528    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:59.552528    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:59.552528    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:59.552528    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:59.552528    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:59 GMT
	I0210 12:22:59.552528    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  84 29 0a 99 19 0a 18 63  6f 72 65 64 6e 73 2d 36  |.).....coredns-6|
		00000020  36 38 64 36 62 66 39 62  63 2d 77 38 72 72 39 12  |68d6bf9bc-w8rr9.|
		00000030  13 63 6f 72 65 64 6e 73  2d 36 36 38 64 36 62 66  |.coredns-668d6bf|
		00000040  39 62 63 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |9bc-..kube-syste|
		00000050  6d 22 00 2a 24 65 34 35  61 33 37 62 66 2d 65 37  |m".*$e45a37bf-e7|
		00000060  64 61 2d 34 31 32 39 2d  62 62 37 65 2d 38 62 65  |da-4129-bb7e-8be|
		00000070  37 64 62 65 39 33 65 30  39 32 04 31 38 32 30 38  |7dbe93e092.18208|
		00000080  00 42 08 08 92 d4 a7 bd  06 10 00 5a 13 0a 07 6b  |.B.........Z...k|
		00000090  38 73 2d 61 70 70 12 08  6b 75 62 65 2d 64 6e 73  |8s-app..kube-dns|
		000000a0  5a 1f 0a 11 70 6f 64 2d  74 65 6d 70 6c 61 74 65  |Z...pod-template|
		000000b0  2d 68 61 73 68 12 0a 36  36 38 64 36 62 66 39 62  |-hash..668d6bf9b|
		000000c0  63 6a 53 0a 0a 52 65 70  6c 69 63 61 53 65 74 1a  |cjS..ReplicaSet [truncated 25040 chars]
	 >
	I0210 12:22:59.553304    5644 type.go:168] "Request Body" body=""
	I0210 12:22:59.553377    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:59.553443    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:59.553443    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:59.553470    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:59.557082    5644 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0210 12:22:59.557082    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:59.557082    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:59.557082    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:59.557082    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:59.557082    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:59.557082    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:59 GMT
	I0210 12:22:59.557161    5644 round_trippers.go:587]     Audit-Id: 7367d7c5-a831-4f7a-8085-f31c40017038
	I0210 12:22:59.557466    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d4 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 39  33 35 38 00 42 08 08 86  |1b262.19358.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22276 chars]
	 >
	I0210 12:22:59.557635    5644 pod_ready.go:103] pod "coredns-668d6bf9bc-w8rr9" in "kube-system" namespace has status "Ready":"False"
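	(Editor's note, not part of the captured log: the pod_ready checkpoint above closes one iteration of the wait loop this trace records — GET the coredns Pod, GET its Node, report Ready, sleep, repeat. The following is a minimal client-go sketch of that pattern, not minikube's actual pod_ready implementation; the ~500ms interval is inferred from the timestamps, and the pod/namespace names are taken from the requests above:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load ~/.kube/config, as a CLI client like minikube would.
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		for {
			// Same request the log shows: GET the pod from kube-system.
			pod, err := clientset.CoreV1().Pods("kube-system").
				Get(context.TODO(), "coredns-668d6bf9bc-w8rr9", metav1.GetOptions{})
			if err != nil {
				panic(err)
			}
			// A pod is "Ready" when its PodReady condition is True.
			ready := false
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
					ready = true
				}
			}
			fmt.Printf("pod %q Ready=%v\n", pod.Name, ready)
			if ready {
				return
			}
			time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the trace
		}
	}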
	I0210 12:23:00.048431    5644 type.go:168] "Request Body" body=""
	I0210 12:23:00.048957    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-w8rr9
	I0210 12:23:00.049043    5644 round_trippers.go:476] Request Headers:
	I0210 12:23:00.049043    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:23:00.049043    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:23:00.053272    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:23:00.053272    5644 round_trippers.go:584] Response Headers:
	I0210 12:23:00.053272    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:23:00.053272    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:23:00.053272    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:23:00 GMT
	I0210 12:23:00.053272    5644 round_trippers.go:587]     Audit-Id: b2924ee7-ac41-44d2-bee2-c53b53148e28
	I0210 12:23:00.053272    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:23:00.053272    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:23:00.053665    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  84 29 0a 99 19 0a 18 63  6f 72 65 64 6e 73 2d 36  |.).....coredns-6|
		00000020  36 38 64 36 62 66 39 62  63 2d 77 38 72 72 39 12  |68d6bf9bc-w8rr9.|
		00000030  13 63 6f 72 65 64 6e 73  2d 36 36 38 64 36 62 66  |.coredns-668d6bf|
		00000040  39 62 63 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |9bc-..kube-syste|
		00000050  6d 22 00 2a 24 65 34 35  61 33 37 62 66 2d 65 37  |m".*$e45a37bf-e7|
		00000060  64 61 2d 34 31 32 39 2d  62 62 37 65 2d 38 62 65  |da-4129-bb7e-8be|
		00000070  37 64 62 65 39 33 65 30  39 32 04 31 38 32 30 38  |7dbe93e092.18208|
		00000080  00 42 08 08 92 d4 a7 bd  06 10 00 5a 13 0a 07 6b  |.B.........Z...k|
		00000090  38 73 2d 61 70 70 12 08  6b 75 62 65 2d 64 6e 73  |8s-app..kube-dns|
		000000a0  5a 1f 0a 11 70 6f 64 2d  74 65 6d 70 6c 61 74 65  |Z...pod-template|
		000000b0  2d 68 61 73 68 12 0a 36  36 38 64 36 62 66 39 62  |-hash..668d6bf9b|
		000000c0  63 6a 53 0a 0a 52 65 70  6c 69 63 61 53 65 74 1a  |cjS..ReplicaSet [truncated 25040 chars]
	 >
	I0210 12:23:00.053854    5644 type.go:168] "Request Body" body=""
	I0210 12:23:00.053976    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:23:00.053976    5644 round_trippers.go:476] Request Headers:
	I0210 12:23:00.053976    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:23:00.053976    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:23:00.057437    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:23:00.057437    5644 round_trippers.go:584] Response Headers:
	I0210 12:23:00.057437    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:23:00 GMT
	I0210 12:23:00.057437    5644 round_trippers.go:587]     Audit-Id: 44e1a383-1ee2-4e91-bd0e-33563e002b77
	I0210 12:23:00.057437    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:23:00.057437    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:23:00.057528    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:23:00.057528    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:23:00.057619    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d4 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 39  33 35 38 00 42 08 08 86  |1b262.19358.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22276 chars]
	 >
	I0210 12:23:00.548490    5644 type.go:168] "Request Body" body=""
	I0210 12:23:00.548490    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-w8rr9
	I0210 12:23:00.548490    5644 round_trippers.go:476] Request Headers:
	I0210 12:23:00.548490    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:23:00.548490    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:23:00.552750    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:23:00.553071    5644 round_trippers.go:584] Response Headers:
	I0210 12:23:00.553071    5644 round_trippers.go:587]     Audit-Id: ba0711d8-a55a-43eb-9f6c-b9cba11b3cbb
	I0210 12:23:00.553071    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:23:00.553071    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:23:00.553071    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:23:00.553071    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:23:00.553071    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:23:00 GMT
	I0210 12:23:00.553426    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  84 29 0a 99 19 0a 18 63  6f 72 65 64 6e 73 2d 36  |.).....coredns-6|
		00000020  36 38 64 36 62 66 39 62  63 2d 77 38 72 72 39 12  |68d6bf9bc-w8rr9.|
		00000030  13 63 6f 72 65 64 6e 73  2d 36 36 38 64 36 62 66  |.coredns-668d6bf|
		00000040  39 62 63 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |9bc-..kube-syste|
		00000050  6d 22 00 2a 24 65 34 35  61 33 37 62 66 2d 65 37  |m".*$e45a37bf-e7|
		00000060  64 61 2d 34 31 32 39 2d  62 62 37 65 2d 38 62 65  |da-4129-bb7e-8be|
		00000070  37 64 62 65 39 33 65 30  39 32 04 31 38 32 30 38  |7dbe93e092.18208|
		00000080  00 42 08 08 92 d4 a7 bd  06 10 00 5a 13 0a 07 6b  |.B.........Z...k|
		00000090  38 73 2d 61 70 70 12 08  6b 75 62 65 2d 64 6e 73  |8s-app..kube-dns|
		000000a0  5a 1f 0a 11 70 6f 64 2d  74 65 6d 70 6c 61 74 65  |Z...pod-template|
		000000b0  2d 68 61 73 68 12 0a 36  36 38 64 36 62 66 39 62  |-hash..668d6bf9b|
		000000c0  63 6a 53 0a 0a 52 65 70  6c 69 63 61 53 65 74 1a  |cjS..ReplicaSet [truncated 25040 chars]
	 >
	I0210 12:23:00.553802    5644 type.go:168] "Request Body" body=""
	I0210 12:23:00.553802    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:23:00.553802    5644 round_trippers.go:476] Request Headers:
	I0210 12:23:00.553802    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:23:00.553915    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:23:00.556367    5644 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0210 12:23:00.556367    5644 round_trippers.go:584] Response Headers:
	I0210 12:23:00.556367    5644 round_trippers.go:587]     Audit-Id: d80e7295-a8f4-4061-9209-94ac3405dd42
	I0210 12:23:00.556367    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:23:00.556367    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:23:00.556367    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:23:00.556367    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:23:00.556367    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:23:00 GMT
	I0210 12:23:00.556777    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d4 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 39  33 35 38 00 42 08 08 86  |1b262.19358.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22276 chars]
	 >
	I0210 12:23:01.048073    5644 type.go:168] "Request Body" body=""
	I0210 12:23:01.048073    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-w8rr9
	I0210 12:23:01.048073    5644 round_trippers.go:476] Request Headers:
	I0210 12:23:01.048073    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:23:01.048073    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:23:01.051486    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:23:01.052195    5644 round_trippers.go:584] Response Headers:
	I0210 12:23:01.052195    5644 round_trippers.go:587]     Audit-Id: 99f790b5-a737-4068-8d56-0f309f4051cc
	I0210 12:23:01.052195    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:23:01.052195    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:23:01.052195    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:23:01.052195    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:23:01.052195    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:23:01 GMT
	I0210 12:23:01.052773    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  84 29 0a 99 19 0a 18 63  6f 72 65 64 6e 73 2d 36  |.).....coredns-6|
		00000020  36 38 64 36 62 66 39 62  63 2d 77 38 72 72 39 12  |68d6bf9bc-w8rr9.|
		00000030  13 63 6f 72 65 64 6e 73  2d 36 36 38 64 36 62 66  |.coredns-668d6bf|
		00000040  39 62 63 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |9bc-..kube-syste|
		00000050  6d 22 00 2a 24 65 34 35  61 33 37 62 66 2d 65 37  |m".*$e45a37bf-e7|
		00000060  64 61 2d 34 31 32 39 2d  62 62 37 65 2d 38 62 65  |da-4129-bb7e-8be|
		00000070  37 64 62 65 39 33 65 30  39 32 04 31 38 32 30 38  |7dbe93e092.18208|
		00000080  00 42 08 08 92 d4 a7 bd  06 10 00 5a 13 0a 07 6b  |.B.........Z...k|
		00000090  38 73 2d 61 70 70 12 08  6b 75 62 65 2d 64 6e 73  |8s-app..kube-dns|
		000000a0  5a 1f 0a 11 70 6f 64 2d  74 65 6d 70 6c 61 74 65  |Z...pod-template|
		000000b0  2d 68 61 73 68 12 0a 36  36 38 64 36 62 66 39 62  |-hash..668d6bf9b|
		000000c0  63 6a 53 0a 0a 52 65 70  6c 69 63 61 53 65 74 1a  |cjS..ReplicaSet [truncated 25040 chars]
	 >
	I0210 12:23:01.052888    5644 type.go:168] "Request Body" body=""
	I0210 12:23:01.052888    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:23:01.052888    5644 round_trippers.go:476] Request Headers:
	I0210 12:23:01.052888    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:23:01.052888    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:23:01.056272    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:23:01.056408    5644 round_trippers.go:584] Response Headers:
	I0210 12:23:01.056408    5644 round_trippers.go:587]     Audit-Id: 9f2c0f62-34b9-44ec-996e-997da25a3a0c
	I0210 12:23:01.056408    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:23:01.056408    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:23:01.056408    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:23:01.056492    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:23:01.056492    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:23:01 GMT
	I0210 12:23:01.056548    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d4 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 39  33 35 38 00 42 08 08 86  |1b262.19358.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22276 chars]
	 >
	I0210 12:23:01.549102    5644 type.go:168] "Request Body" body=""
	I0210 12:23:01.549102    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-w8rr9
	I0210 12:23:01.549102    5644 round_trippers.go:476] Request Headers:
	I0210 12:23:01.549102    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:23:01.549102    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:23:01.553282    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:23:01.553282    5644 round_trippers.go:584] Response Headers:
	I0210 12:23:01.553282    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:23:01.553282    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:23:01.553282    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:23:01.553282    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:23:01 GMT
	I0210 12:23:01.553282    5644 round_trippers.go:587]     Audit-Id: 5c2d8b73-fd91-4bfc-8b62-d239e9d0332c
	I0210 12:23:01.553282    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:23:01.553575    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  84 29 0a 99 19 0a 18 63  6f 72 65 64 6e 73 2d 36  |.).....coredns-6|
		00000020  36 38 64 36 62 66 39 62  63 2d 77 38 72 72 39 12  |68d6bf9bc-w8rr9.|
		00000030  13 63 6f 72 65 64 6e 73  2d 36 36 38 64 36 62 66  |.coredns-668d6bf|
		00000040  39 62 63 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |9bc-..kube-syste|
		00000050  6d 22 00 2a 24 65 34 35  61 33 37 62 66 2d 65 37  |m".*$e45a37bf-e7|
		00000060  64 61 2d 34 31 32 39 2d  62 62 37 65 2d 38 62 65  |da-4129-bb7e-8be|
		00000070  37 64 62 65 39 33 65 30  39 32 04 31 38 32 30 38  |7dbe93e092.18208|
		00000080  00 42 08 08 92 d4 a7 bd  06 10 00 5a 13 0a 07 6b  |.B.........Z...k|
		00000090  38 73 2d 61 70 70 12 08  6b 75 62 65 2d 64 6e 73  |8s-app..kube-dns|
		000000a0  5a 1f 0a 11 70 6f 64 2d  74 65 6d 70 6c 61 74 65  |Z...pod-template|
		000000b0  2d 68 61 73 68 12 0a 36  36 38 64 36 62 66 39 62  |-hash..668d6bf9b|
		000000c0  63 6a 53 0a 0a 52 65 70  6c 69 63 61 53 65 74 1a  |cjS..ReplicaSet [truncated 25040 chars]
	 >
	I0210 12:23:01.554223    5644 type.go:168] "Request Body" body=""
	I0210 12:23:01.554359    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:23:01.554359    5644 round_trippers.go:476] Request Headers:
	I0210 12:23:01.554359    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:23:01.554359    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:23:01.557961    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:23:01.557961    5644 round_trippers.go:584] Response Headers:
	I0210 12:23:01.558178    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:23:01.558178    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:23:01.558178    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:23:01.558178    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:23:01.558178    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:23:01 GMT
	I0210 12:23:01.558178    5644 round_trippers.go:587]     Audit-Id: 3589bece-5381-4946-a204-1300d84dbf2f
	I0210 12:23:01.558548    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d4 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 39  33 35 38 00 42 08 08 86  |1b262.19358.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22276 chars]
	 >
	I0210 12:23:01.558723    5644 pod_ready.go:103] pod "coredns-668d6bf9bc-w8rr9" in "kube-system" namespace has status "Ready":"False"
	I0210 12:23:02.048331    5644 type.go:168] "Request Body" body=""
	I0210 12:23:02.048331    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-w8rr9
	I0210 12:23:02.048331    5644 round_trippers.go:476] Request Headers:
	I0210 12:23:02.048331    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:23:02.048331    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:23:02.055014    5644 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0210 12:23:02.055073    5644 round_trippers.go:584] Response Headers:
	I0210 12:23:02.055073    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:23:02.055073    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:23:02.055073    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:23:02.055073    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:23:02 GMT
	I0210 12:23:02.055134    5644 round_trippers.go:587]     Audit-Id: b165bbca-384a-431b-8830-0452153fef67
	I0210 12:23:02.055134    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:23:02.055428    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  84 29 0a 99 19 0a 18 63  6f 72 65 64 6e 73 2d 36  |.).....coredns-6|
		00000020  36 38 64 36 62 66 39 62  63 2d 77 38 72 72 39 12  |68d6bf9bc-w8rr9.|
		00000030  13 63 6f 72 65 64 6e 73  2d 36 36 38 64 36 62 66  |.coredns-668d6bf|
		00000040  39 62 63 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |9bc-..kube-syste|
		00000050  6d 22 00 2a 24 65 34 35  61 33 37 62 66 2d 65 37  |m".*$e45a37bf-e7|
		00000060  64 61 2d 34 31 32 39 2d  62 62 37 65 2d 38 62 65  |da-4129-bb7e-8be|
		00000070  37 64 62 65 39 33 65 30  39 32 04 31 38 32 30 38  |7dbe93e092.18208|
		00000080  00 42 08 08 92 d4 a7 bd  06 10 00 5a 13 0a 07 6b  |.B.........Z...k|
		00000090  38 73 2d 61 70 70 12 08  6b 75 62 65 2d 64 6e 73  |8s-app..kube-dns|
		000000a0  5a 1f 0a 11 70 6f 64 2d  74 65 6d 70 6c 61 74 65  |Z...pod-template|
		000000b0  2d 68 61 73 68 12 0a 36  36 38 64 36 62 66 39 62  |-hash..668d6bf9b|
		000000c0  63 6a 53 0a 0a 52 65 70  6c 69 63 61 53 65 74 1a  |cjS..ReplicaSet [truncated 25040 chars]
	 >
	I0210 12:23:02.055731    5644 type.go:168] "Request Body" body=""
	I0210 12:23:02.055786    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:23:02.055786    5644 round_trippers.go:476] Request Headers:
	I0210 12:23:02.055786    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:23:02.055786    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:23:02.058025    5644 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0210 12:23:02.058025    5644 round_trippers.go:584] Response Headers:
	I0210 12:23:02.058025    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:23:02.058025    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:23:02.058025    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:23:02.058025    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:23:02 GMT
	I0210 12:23:02.058025    5644 round_trippers.go:587]     Audit-Id: c2bb9363-a8c4-49dd-8e5a-232493e670b0
	I0210 12:23:02.058025    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:23:02.058025    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d4 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 39  33 35 38 00 42 08 08 86  |1b262.19358.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22276 chars]
	 >
	I0210 12:23:02.549395    5644 type.go:168] "Request Body" body=""
	I0210 12:23:02.549395    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-w8rr9
	I0210 12:23:02.549395    5644 round_trippers.go:476] Request Headers:
	I0210 12:23:02.549395    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:23:02.549395    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:23:02.553814    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:23:02.553921    5644 round_trippers.go:584] Response Headers:
	I0210 12:23:02.553921    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:23:02 GMT
	I0210 12:23:02.553921    5644 round_trippers.go:587]     Audit-Id: 99203ebc-3386-4ade-8509-46b2e9dcd4b6
	I0210 12:23:02.553921    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:23:02.554009    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:23:02.554009    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:23:02.554009    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:23:02.554475    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  84 29 0a 99 19 0a 18 63  6f 72 65 64 6e 73 2d 36  |.).....coredns-6|
		00000020  36 38 64 36 62 66 39 62  63 2d 77 38 72 72 39 12  |68d6bf9bc-w8rr9.|
		00000030  13 63 6f 72 65 64 6e 73  2d 36 36 38 64 36 62 66  |.coredns-668d6bf|
		00000040  39 62 63 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |9bc-..kube-syste|
		00000050  6d 22 00 2a 24 65 34 35  61 33 37 62 66 2d 65 37  |m".*$e45a37bf-e7|
		00000060  64 61 2d 34 31 32 39 2d  62 62 37 65 2d 38 62 65  |da-4129-bb7e-8be|
		00000070  37 64 62 65 39 33 65 30  39 32 04 31 38 32 30 38  |7dbe93e092.18208|
		00000080  00 42 08 08 92 d4 a7 bd  06 10 00 5a 13 0a 07 6b  |.B.........Z...k|
		00000090  38 73 2d 61 70 70 12 08  6b 75 62 65 2d 64 6e 73  |8s-app..kube-dns|
		000000a0  5a 1f 0a 11 70 6f 64 2d  74 65 6d 70 6c 61 74 65  |Z...pod-template|
		000000b0  2d 68 61 73 68 12 0a 36  36 38 64 36 62 66 39 62  |-hash..668d6bf9b|
		000000c0  63 6a 53 0a 0a 52 65 70  6c 69 63 61 53 65 74 1a  |cjS..ReplicaSet [truncated 25040 chars]
	 >
	I0210 12:23:02.554992    5644 type.go:168] "Request Body" body=""
	I0210 12:23:02.555090    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:23:02.555090    5644 round_trippers.go:476] Request Headers:
	I0210 12:23:02.555148    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:23:02.555148    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:23:02.557418    5644 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0210 12:23:02.557418    5644 round_trippers.go:584] Response Headers:
	I0210 12:23:02.557418    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:23:02.558202    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:23:02.558202    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:23:02 GMT
	I0210 12:23:02.558202    5644 round_trippers.go:587]     Audit-Id: 185e3041-add6-4644-a906-2309767d4b3b
	I0210 12:23:02.558202    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:23:02.558202    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:23:02.558504    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d4 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 39  33 35 38 00 42 08 08 86  |1b262.19358.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22276 chars]
	 >
	I0210 12:23:03.049113    5644 type.go:168] "Request Body" body=""
	I0210 12:23:03.049113    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-w8rr9
	I0210 12:23:03.049113    5644 round_trippers.go:476] Request Headers:
	I0210 12:23:03.049113    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:23:03.049113    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:23:03.052809    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:23:03.052877    5644 round_trippers.go:584] Response Headers:
	I0210 12:23:03.052877    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:23:03.053107    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:23:03 GMT
	I0210 12:23:03.053107    5644 round_trippers.go:587]     Audit-Id: a93c8ec3-a514-47f4-8f37-701e970f4b3e
	I0210 12:23:03.053107    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:23:03.053168    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:23:03.053168    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:23:03.053168    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  84 29 0a 99 19 0a 18 63  6f 72 65 64 6e 73 2d 36  |.).....coredns-6|
		00000020  36 38 64 36 62 66 39 62  63 2d 77 38 72 72 39 12  |68d6bf9bc-w8rr9.|
		00000030  13 63 6f 72 65 64 6e 73  2d 36 36 38 64 36 62 66  |.coredns-668d6bf|
		00000040  39 62 63 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |9bc-..kube-syste|
		00000050  6d 22 00 2a 24 65 34 35  61 33 37 62 66 2d 65 37  |m".*$e45a37bf-e7|
		00000060  64 61 2d 34 31 32 39 2d  62 62 37 65 2d 38 62 65  |da-4129-bb7e-8be|
		00000070  37 64 62 65 39 33 65 30  39 32 04 31 38 32 30 38  |7dbe93e092.18208|
		00000080  00 42 08 08 92 d4 a7 bd  06 10 00 5a 13 0a 07 6b  |.B.........Z...k|
		00000090  38 73 2d 61 70 70 12 08  6b 75 62 65 2d 64 6e 73  |8s-app..kube-dns|
		000000a0  5a 1f 0a 11 70 6f 64 2d  74 65 6d 70 6c 61 74 65  |Z...pod-template|
		000000b0  2d 68 61 73 68 12 0a 36  36 38 64 36 62 66 39 62  |-hash..668d6bf9b|
		000000c0  63 6a 53 0a 0a 52 65 70  6c 69 63 61 53 65 74 1a  |cjS..ReplicaSet [truncated 25040 chars]
	 >
	I0210 12:23:03.053865    5644 type.go:168] "Request Body" body=""
	I0210 12:23:03.053865    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:23:03.053865    5644 round_trippers.go:476] Request Headers:
	I0210 12:23:03.053865    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:23:03.053865    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:23:03.056691    5644 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0210 12:23:03.056691    5644 round_trippers.go:584] Response Headers:
	I0210 12:23:03.056747    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:23:03.056747    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:23:03.056773    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:23:03 GMT
	I0210 12:23:03.056773    5644 round_trippers.go:587]     Audit-Id: 0820e2fa-eff9-4ab2-a471-23bb1bf57731
	I0210 12:23:03.056773    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:23:03.056773    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:23:03.056773    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d4 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 39  33 35 38 00 42 08 08 86  |1b262.19358.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22276 chars]
	 >
	I0210 12:23:03.549258    5644 type.go:168] "Request Body" body=""
	I0210 12:23:03.549258    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-w8rr9
	I0210 12:23:03.549258    5644 round_trippers.go:476] Request Headers:
	I0210 12:23:03.549258    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:23:03.549258    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:23:03.553755    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:23:03.553755    5644 round_trippers.go:584] Response Headers:
	I0210 12:23:03.553755    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:23:03 GMT
	I0210 12:23:03.553755    5644 round_trippers.go:587]     Audit-Id: 6dfa9630-f0a4-4920-9ff6-d3cf44fb5df0
	I0210 12:23:03.553755    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:23:03.553755    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:23:03.553755    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:23:03.553755    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:23:03.554306    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  84 29 0a 99 19 0a 18 63  6f 72 65 64 6e 73 2d 36  |.).....coredns-6|
		00000020  36 38 64 36 62 66 39 62  63 2d 77 38 72 72 39 12  |68d6bf9bc-w8rr9.|
		00000030  13 63 6f 72 65 64 6e 73  2d 36 36 38 64 36 62 66  |.coredns-668d6bf|
		00000040  39 62 63 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |9bc-..kube-syste|
		00000050  6d 22 00 2a 24 65 34 35  61 33 37 62 66 2d 65 37  |m".*$e45a37bf-e7|
		00000060  64 61 2d 34 31 32 39 2d  62 62 37 65 2d 38 62 65  |da-4129-bb7e-8be|
		00000070  37 64 62 65 39 33 65 30  39 32 04 31 38 32 30 38  |7dbe93e092.18208|
		00000080  00 42 08 08 92 d4 a7 bd  06 10 00 5a 13 0a 07 6b  |.B.........Z...k|
		00000090  38 73 2d 61 70 70 12 08  6b 75 62 65 2d 64 6e 73  |8s-app..kube-dns|
		000000a0  5a 1f 0a 11 70 6f 64 2d  74 65 6d 70 6c 61 74 65  |Z...pod-template|
		000000b0  2d 68 61 73 68 12 0a 36  36 38 64 36 62 66 39 62  |-hash..668d6bf9b|
		000000c0  63 6a 53 0a 0a 52 65 70  6c 69 63 61 53 65 74 1a  |cjS..ReplicaSet [truncated 25040 chars]
	 >
	I0210 12:23:03.554519    5644 type.go:168] "Request Body" body=""
	I0210 12:23:03.554590    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:23:03.554590    5644 round_trippers.go:476] Request Headers:
	I0210 12:23:03.554590    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:23:03.554655    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:23:03.558278    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:23:03.558278    5644 round_trippers.go:584] Response Headers:
	I0210 12:23:03.558278    5644 round_trippers.go:587]     Audit-Id: 6522ed28-f617-4bec-9aa3-f26b5da41784
	I0210 12:23:03.558278    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:23:03.558278    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:23:03.558278    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:23:03.558278    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:23:03.558278    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:23:03 GMT
	I0210 12:23:03.558278    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d4 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 39  33 35 38 00 42 08 08 86  |1b262.19358.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22276 chars]
	 >
	I0210 12:23:04.049143    5644 type.go:168] "Request Body" body=""
	I0210 12:23:04.049143    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-w8rr9
	I0210 12:23:04.049143    5644 round_trippers.go:476] Request Headers:
	I0210 12:23:04.049143    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:23:04.049143    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:23:04.054911    5644 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 12:23:04.054978    5644 round_trippers.go:584] Response Headers:
	I0210 12:23:04.054978    5644 round_trippers.go:587]     Audit-Id: 7886bdef-47bd-418b-9565-9237dfd751b3
	I0210 12:23:04.054978    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:23:04.055052    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:23:04.055052    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:23:04.055052    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:23:04.055052    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:23:04 GMT
	I0210 12:23:04.055869    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  84 29 0a 99 19 0a 18 63  6f 72 65 64 6e 73 2d 36  |.).....coredns-6|
		00000020  36 38 64 36 62 66 39 62  63 2d 77 38 72 72 39 12  |68d6bf9bc-w8rr9.|
		00000030  13 63 6f 72 65 64 6e 73  2d 36 36 38 64 36 62 66  |.coredns-668d6bf|
		00000040  39 62 63 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |9bc-..kube-syste|
		00000050  6d 22 00 2a 24 65 34 35  61 33 37 62 66 2d 65 37  |m".*$e45a37bf-e7|
		00000060  64 61 2d 34 31 32 39 2d  62 62 37 65 2d 38 62 65  |da-4129-bb7e-8be|
		00000070  37 64 62 65 39 33 65 30  39 32 04 31 38 32 30 38  |7dbe93e092.18208|
		00000080  00 42 08 08 92 d4 a7 bd  06 10 00 5a 13 0a 07 6b  |.B.........Z...k|
		00000090  38 73 2d 61 70 70 12 08  6b 75 62 65 2d 64 6e 73  |8s-app..kube-dns|
		000000a0  5a 1f 0a 11 70 6f 64 2d  74 65 6d 70 6c 61 74 65  |Z...pod-template|
		000000b0  2d 68 61 73 68 12 0a 36  36 38 64 36 62 66 39 62  |-hash..668d6bf9b|
		000000c0  63 6a 53 0a 0a 52 65 70  6c 69 63 61 53 65 74 1a  |cjS..ReplicaSet [truncated 25040 chars]
	 >
	I0210 12:23:04.056138    5644 type.go:168] "Request Body" body=""
	I0210 12:23:04.056204    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:23:04.056289    5644 round_trippers.go:476] Request Headers:
	I0210 12:23:04.056289    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:23:04.056289    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:23:04.060207    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:23:04.060289    5644 round_trippers.go:584] Response Headers:
	I0210 12:23:04.060289    5644 round_trippers.go:587]     Audit-Id: 6ca9fb2f-3a05-410f-a192-803bcdeb324f
	I0210 12:23:04.060367    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:23:04.060367    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:23:04.060367    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:23:04.060367    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:23:04.060367    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:23:04 GMT
	I0210 12:23:04.061176    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d4 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 39  33 35 38 00 42 08 08 86  |1b262.19358.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22276 chars]
	 >
	I0210 12:23:04.061418    5644 pod_ready.go:103] pod "coredns-668d6bf9bc-w8rr9" in "kube-system" namespace has status "Ready":"False"
	I0210 12:23:04.548642    5644 type.go:168] "Request Body" body=""
	I0210 12:23:04.549246    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-w8rr9
	I0210 12:23:04.549246    5644 round_trippers.go:476] Request Headers:
	I0210 12:23:04.549246    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:23:04.549246    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:23:04.556436    5644 round_trippers.go:581] Response Status: 200 OK in 7 milliseconds
	I0210 12:23:04.556436    5644 round_trippers.go:584] Response Headers:
	I0210 12:23:04.556436    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:23:04.556436    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:23:04.556436    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:23:04.556436    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:23:04 GMT
	I0210 12:23:04.556436    5644 round_trippers.go:587]     Audit-Id: d23bfbcf-b351-4eb4-a187-c001aaebdcb4
	I0210 12:23:04.556436    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:23:04.556966    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  c5 28 0a af 19 0a 18 63  6f 72 65 64 6e 73 2d 36  |.(.....coredns-6|
		00000020  36 38 64 36 62 66 39 62  63 2d 77 38 72 72 39 12  |68d6bf9bc-w8rr9.|
		00000030  13 63 6f 72 65 64 6e 73  2d 36 36 38 64 36 62 66  |.coredns-668d6bf|
		00000040  39 62 63 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |9bc-..kube-syste|
		00000050  6d 22 00 2a 24 65 34 35  61 33 37 62 66 2d 65 37  |m".*$e45a37bf-e7|
		00000060  64 61 2d 34 31 32 39 2d  62 62 37 65 2d 38 62 65  |da-4129-bb7e-8be|
		00000070  37 64 62 65 39 33 65 30  39 32 04 31 39 37 32 38  |7dbe93e092.19728|
		00000080  00 42 08 08 92 d4 a7 bd  06 10 00 5a 13 0a 07 6b  |.B.........Z...k|
		00000090  38 73 2d 61 70 70 12 08  6b 75 62 65 2d 64 6e 73  |8s-app..kube-dns|
		000000a0  5a 1f 0a 11 70 6f 64 2d  74 65 6d 70 6c 61 74 65  |Z...pod-template|
		000000b0  2d 68 61 73 68 12 0a 36  36 38 64 36 62 66 39 62  |-hash..668d6bf9b|
		000000c0  63 6a 53 0a 0a 52 65 70  6c 69 63 61 53 65 74 1a  |cjS..ReplicaSet [truncated 24725 chars]
	 >
	I0210 12:23:04.557200    5644 type.go:168] "Request Body" body=""
	I0210 12:23:04.557200    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:23:04.557200    5644 round_trippers.go:476] Request Headers:
	I0210 12:23:04.557200    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:23:04.557200    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:23:04.561796    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:23:04.561796    5644 round_trippers.go:584] Response Headers:
	I0210 12:23:04.561796    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:23:04.561796    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:23:04.561796    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:23:04.561796    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:23:04.561796    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:23:04 GMT
	I0210 12:23:04.561796    5644 round_trippers.go:587]     Audit-Id: 32ee6243-b96d-4490-8358-1915b6847e1f
	I0210 12:23:04.561796    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d4 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 39  33 35 38 00 42 08 08 86  |1b262.19358.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22276 chars]
	 >
	I0210 12:23:04.561796    5644 pod_ready.go:93] pod "coredns-668d6bf9bc-w8rr9" in "kube-system" namespace has status "Ready":"True"
	I0210 12:23:04.561796    5644 pod_ready.go:82] duration metric: took 15.5139075s for pod "coredns-668d6bf9bc-w8rr9" in "kube-system" namespace to be "Ready" ...
	I0210 12:23:04.561796    5644 pod_ready.go:79] waiting up to 6m0s for pod "etcd-multinode-032400" in "kube-system" namespace to be "Ready" ...
	I0210 12:23:04.561796    5644 type.go:168] "Request Body" body=""
	I0210 12:23:04.562900    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-032400
	I0210 12:23:04.562900    5644 round_trippers.go:476] Request Headers:
	I0210 12:23:04.562900    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:23:04.562900    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:23:04.566121    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:23:04.566121    5644 round_trippers.go:584] Response Headers:
	I0210 12:23:04.566121    5644 round_trippers.go:587]     Audit-Id: bb269c72-e21c-4d2e-9998-0c24d1f25772
	I0210 12:23:04.566121    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:23:04.566121    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:23:04.566121    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:23:04.566121    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:23:04.566121    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:23:04 GMT
	I0210 12:23:04.566121    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  81 2c 0a 9f 1a 0a 15 65  74 63 64 2d 6d 75 6c 74  |.,.....etcd-mult|
		00000020  69 6e 6f 64 65 2d 30 33  32 34 30 30 12 00 1a 0b  |inode-032400....|
		00000030  6b 75 62 65 2d 73 79 73  74 65 6d 22 00 2a 24 32  |kube-system".*$2|
		00000040  36 64 34 31 31 30 66 2d  39 61 33 39 2d 34 38 64  |6d4110f-9a39-48d|
		00000050  65 2d 61 34 33 33 2d 35  36 37 61 37 35 37 38 39  |e-a433-567a75789|
		00000060  62 65 30 32 04 31 38 37  30 38 00 42 08 08 e6 de  |be02.18708.B....|
		00000070  a7 bd 06 10 00 5a 11 0a  09 63 6f 6d 70 6f 6e 65  |.....Z...compone|
		00000080  6e 74 12 04 65 74 63 64  5a 15 0a 04 74 69 65 72  |nt..etcdZ...tier|
		00000090  12 0d 63 6f 6e 74 72 6f  6c 2d 70 6c 61 6e 65 62  |..control-planeb|
		000000a0  4f 0a 30 6b 75 62 65 61  64 6d 2e 6b 75 62 65 72  |O.0kubeadm.kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 65 74 63 64 2e 61 64  |netes.io/etcd.ad|
		000000c0  76 65 72 74 69 73 65 2d  63 6c 69 65 6e 74 2d 75  |vertise-client- [truncated 26933 chars]
	 >
	I0210 12:23:04.566121    5644 type.go:168] "Request Body" body=""
	I0210 12:23:04.567130    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:23:04.567194    5644 round_trippers.go:476] Request Headers:
	I0210 12:23:04.567194    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:23:04.567194    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:23:04.569515    5644 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0210 12:23:04.569515    5644 round_trippers.go:584] Response Headers:
	I0210 12:23:04.569515    5644 round_trippers.go:587]     Audit-Id: a10a6b29-83ed-4a67-9a28-0c583aa4201d
	I0210 12:23:04.569515    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:23:04.569515    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:23:04.569515    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:23:04.569515    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:23:04.569515    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:23:04 GMT
	I0210 12:23:04.569870    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d4 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 39  33 35 38 00 42 08 08 86  |1b262.19358.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22276 chars]
	 >
	I0210 12:23:04.570023    5644 pod_ready.go:93] pod "etcd-multinode-032400" in "kube-system" namespace has status "Ready":"True"
	I0210 12:23:04.570023    5644 pod_ready.go:82] duration metric: took 8.2274ms for pod "etcd-multinode-032400" in "kube-system" namespace to be "Ready" ...
	I0210 12:23:04.570023    5644 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-multinode-032400" in "kube-system" namespace to be "Ready" ...
	I0210 12:23:04.570140    5644 type.go:168] "Request Body" body=""
	I0210 12:23:04.570140    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-032400
	I0210 12:23:04.570140    5644 round_trippers.go:476] Request Headers:
	I0210 12:23:04.570140    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:23:04.570140    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:23:04.573080    5644 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0210 12:23:04.573080    5644 round_trippers.go:584] Response Headers:
	I0210 12:23:04.573080    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:23:04.573080    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:23:04.573080    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:23:04.573080    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:23:04 GMT
	I0210 12:23:04.573080    5644 round_trippers.go:587]     Audit-Id: f1892471-b8b5-4c99-8590-483d84fbdee2
	I0210 12:23:04.573080    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:23:04.573493    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  af 35 0a af 1c 0a 1f 6b  75 62 65 2d 61 70 69 73  |.5.....kube-apis|
		00000020  65 72 76 65 72 2d 6d 75  6c 74 69 6e 6f 64 65 2d  |erver-multinode-|
		00000030  30 33 32 34 30 30 12 00  1a 0b 6b 75 62 65 2d 73  |032400....kube-s|
		00000040  79 73 74 65 6d 22 00 2a  24 39 65 36 38 38 61 61  |ystem".*$9e688aa|
		00000050  65 2d 30 39 64 61 2d 34  62 35 63 2d 62 61 34 64  |e-09da-4b5c-ba4d|
		00000060  2d 64 65 36 61 61 36 34  63 62 33 34 65 32 04 31  |-de6aa64cb34e2.1|
		00000070  38 36 36 38 00 42 08 08  e6 de a7 bd 06 10 00 5a  |8668.B.........Z|
		00000080  1b 0a 09 63 6f 6d 70 6f  6e 65 6e 74 12 0e 6b 75  |...component..ku|
		00000090  62 65 2d 61 70 69 73 65  72 76 65 72 5a 15 0a 04  |be-apiserverZ...|
		000000a0  74 69 65 72 12 0d 63 6f  6e 74 72 6f 6c 2d 70 6c  |tier..control-pl|
		000000b0  61 6e 65 62 56 0a 3f 6b  75 62 65 61 64 6d 2e 6b  |anebV.?kubeadm.k|
		000000c0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 6b 75 62  |ubernetes.io/ku [truncated 32856 chars]
	 >
	I0210 12:23:04.573728    5644 type.go:168] "Request Body" body=""
	I0210 12:23:04.573728    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:23:04.573809    5644 round_trippers.go:476] Request Headers:
	I0210 12:23:04.573809    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:23:04.573809    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:23:04.576088    5644 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0210 12:23:04.576088    5644 round_trippers.go:584] Response Headers:
	I0210 12:23:04.576088    5644 round_trippers.go:587]     Audit-Id: 9f28499a-43a9-464e-9a83-7358b77e06de
	I0210 12:23:04.576670    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:23:04.576670    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:23:04.576670    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:23:04.576670    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:23:04.576670    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:23:04 GMT
	I0210 12:23:04.576869    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d4 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 39  33 35 38 00 42 08 08 86  |1b262.19358.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22276 chars]
	 >
	I0210 12:23:04.577066    5644 pod_ready.go:93] pod "kube-apiserver-multinode-032400" in "kube-system" namespace has status "Ready":"True"
	I0210 12:23:04.577066    5644 pod_ready.go:82] duration metric: took 7.043ms for pod "kube-apiserver-multinode-032400" in "kube-system" namespace to be "Ready" ...
	I0210 12:23:04.577066    5644 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-multinode-032400" in "kube-system" namespace to be "Ready" ...
	I0210 12:23:04.577066    5644 type.go:168] "Request Body" body=""
	I0210 12:23:04.577198    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-032400
	I0210 12:23:04.577198    5644 round_trippers.go:476] Request Headers:
	I0210 12:23:04.577198    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:23:04.577244    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:23:04.584427    5644 round_trippers.go:581] Response Status: 200 OK in 7 milliseconds
	I0210 12:23:04.584804    5644 round_trippers.go:584] Response Headers:
	I0210 12:23:04.584804    5644 round_trippers.go:587]     Audit-Id: 77c86ba3-7518-4d03-b00f-0899ac3c7958
	I0210 12:23:04.584804    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:23:04.584804    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:23:04.584850    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:23:04.584850    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:23:04.584850    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:23:04 GMT
	I0210 12:23:04.584850    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  df 31 0a 9b 1d 0a 28 6b  75 62 65 2d 63 6f 6e 74  |.1....(kube-cont|
		00000020  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 2d 6d  |roller-manager-m|
		00000030  75 6c 74 69 6e 6f 64 65  2d 30 33 32 34 30 30 12  |ultinode-032400.|
		00000040  00 1a 0b 6b 75 62 65 2d  73 79 73 74 65 6d 22 00  |...kube-system".|
		00000050  2a 24 63 33 65 35 30 35  34 30 2d 64 64 36 36 2d  |*$c3e50540-dd66-|
		00000060  34 37 61 30 2d 61 34 33  33 2d 36 32 32 64 66 35  |47a0-a433-622df5|
		00000070  39 66 62 34 34 31 32 04  31 38 38 32 38 00 42 08  |9fb4412.18828.B.|
		00000080  08 8b d4 a7 bd 06 10 00  5a 24 0a 09 63 6f 6d 70  |........Z$..comp|
		00000090  6f 6e 65 6e 74 12 17 6b  75 62 65 2d 63 6f 6e 74  |onent..kube-cont|
		000000a0  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 5a 15  |roller-managerZ.|
		000000b0  0a 04 74 69 65 72 12 0d  63 6f 6e 74 72 6f 6c 2d  |..tier..control-|
		000000c0  70 6c 61 6e 65 62 3d 0a  19 6b 75 62 65 72 6e 65  |planeb=..kubern [truncated 30565 chars]
	 >
	I0210 12:23:04.584850    5644 type.go:168] "Request Body" body=""
	I0210 12:23:04.584850    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:23:04.584850    5644 round_trippers.go:476] Request Headers:
	I0210 12:23:04.584850    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:23:04.584850    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:23:04.587586    5644 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0210 12:23:04.587586    5644 round_trippers.go:584] Response Headers:
	I0210 12:23:04.587586    5644 round_trippers.go:587]     Audit-Id: f06425a9-bfc8-4297-8a8c-2eb8e0679e9e
	I0210 12:23:04.587586    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:23:04.587586    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:23:04.587586    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:23:04.587586    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:23:04.587586    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:23:04 GMT
	I0210 12:23:04.587586    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d4 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 39  33 35 38 00 42 08 08 86  |1b262.19358.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22276 chars]
	 >
	I0210 12:23:04.588604    5644 pod_ready.go:93] pod "kube-controller-manager-multinode-032400" in "kube-system" namespace has status "Ready":"True"
	I0210 12:23:04.588604    5644 pod_ready.go:82] duration metric: took 11.537ms for pod "kube-controller-manager-multinode-032400" in "kube-system" namespace to be "Ready" ...
	I0210 12:23:04.588666    5644 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-rrh82" in "kube-system" namespace to be "Ready" ...
	I0210 12:23:04.588728    5644 type.go:168] "Request Body" body=""
	I0210 12:23:04.588784    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rrh82
	I0210 12:23:04.588784    5644 round_trippers.go:476] Request Headers:
	I0210 12:23:04.588784    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:23:04.588845    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:23:04.591026    5644 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0210 12:23:04.591026    5644 round_trippers.go:584] Response Headers:
	I0210 12:23:04.591026    5644 round_trippers.go:587]     Audit-Id: 86c73263-78de-48e3-a507-8c865d0a1f99
	I0210 12:23:04.591026    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:23:04.591026    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:23:04.591026    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:23:04.591026    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:23:04.591026    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:23:04 GMT
	I0210 12:23:04.591026    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  91 26 0a c1 14 0a 10 6b  75 62 65 2d 70 72 6f 78  |.&.....kube-prox|
		00000020  79 2d 72 72 68 38 32 12  0b 6b 75 62 65 2d 70 72  |y-rrh82..kube-pr|
		00000030  6f 78 79 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |oxy-..kube-syste|
		00000040  6d 22 00 2a 24 39 61 64  37 66 32 38 31 2d 66 30  |m".*$9ad7f281-f0|
		00000050  32 32 2d 34 66 33 62 2d  62 32 30 36 2d 33 39 63  |22-4f3b-b206-39c|
		00000060  65 34 32 37 31 33 63 66  39 32 04 31 38 34 34 38  |e42713cf92.18448|
		00000070  00 42 08 08 92 d4 a7 bd  06 10 00 5a 26 0a 18 63  |.B.........Z&..c|
		00000080  6f 6e 74 72 6f 6c 6c 65  72 2d 72 65 76 69 73 69  |ontroller-revisi|
		00000090  6f 6e 2d 68 61 73 68 12  0a 35 36 36 64 37 62 39  |on-hash..566d7b9|
		000000a0  66 38 35 5a 15 0a 07 6b  38 73 2d 61 70 70 12 0a  |f85Z...k8s-app..|
		000000b0  6b 75 62 65 2d 70 72 6f  78 79 5a 1c 0a 17 70 6f  |kube-proxyZ...po|
		000000c0  64 2d 74 65 6d 70 6c 61  74 65 2d 67 65 6e 65 72  |d-template-gene [truncated 23220 chars]
	 >
	I0210 12:23:04.591026    5644 type.go:168] "Request Body" body=""
	I0210 12:23:04.591026    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:23:04.591026    5644 round_trippers.go:476] Request Headers:
	I0210 12:23:04.592049    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:23:04.592049    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:23:04.594230    5644 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0210 12:23:04.594230    5644 round_trippers.go:584] Response Headers:
	I0210 12:23:04.594230    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:23:04.594230    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:23:04.594230    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:23:04.594230    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:23:04.594230    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:23:04 GMT
	I0210 12:23:04.594230    5644 round_trippers.go:587]     Audit-Id: 1a201614-2286-469c-a761-ec903516c3ef
	I0210 12:23:04.594787    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d4 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 39  33 35 38 00 42 08 08 86  |1b262.19358.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22276 chars]
	 >
	I0210 12:23:04.594875    5644 pod_ready.go:93] pod "kube-proxy-rrh82" in "kube-system" namespace has status "Ready":"True"
	I0210 12:23:04.594875    5644 pod_ready.go:82] duration metric: took 6.2094ms for pod "kube-proxy-rrh82" in "kube-system" namespace to be "Ready" ...
	I0210 12:23:04.594926    5644 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-tbtqd" in "kube-system" namespace to be "Ready" ...
	I0210 12:23:04.594926    5644 type.go:168] "Request Body" body=""
	I0210 12:23:04.749138    5644 request.go:661] Waited for 154.1407ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tbtqd
	I0210 12:23:04.749138    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tbtqd
	I0210 12:23:04.749138    5644 round_trippers.go:476] Request Headers:
	I0210 12:23:04.749138    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:23:04.749138    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:23:04.752889    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:23:04.752959    5644 round_trippers.go:584] Response Headers:
	I0210 12:23:04.752959    5644 round_trippers.go:587]     Audit-Id: e9ee84e5-6b97-4124-b844-d6a9045602da
	I0210 12:23:04.752959    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:23:04.752959    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:23:04.752959    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:23:04.752959    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:23:04.752959    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:23:04 GMT
	I0210 12:23:04.756913    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  aa 26 0a c3 15 0a 10 6b  75 62 65 2d 70 72 6f 78  |.&.....kube-prox|
		00000020  79 2d 74 62 74 71 64 12  0b 6b 75 62 65 2d 70 72  |y-tbtqd..kube-pr|
		00000030  6f 78 79 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |oxy-..kube-syste|
		00000040  6d 22 00 2a 24 62 64 66  38 63 62 31 30 2d 30 35  |m".*$bdf8cb10-05|
		00000050  62 65 2d 34 36 30 62 2d  61 39 63 36 2d 62 63 35  |be-460b-a9c6-bc5|
		00000060  31 65 61 38 38 34 32 36  38 32 04 31 37 34 32 38  |1ea8842682.17428|
		00000070  00 42 08 08 e9 d7 a7 bd  06 10 00 5a 26 0a 18 63  |.B.........Z&..c|
		00000080  6f 6e 74 72 6f 6c 6c 65  72 2d 72 65 76 69 73 69  |ontroller-revisi|
		00000090  6f 6e 2d 68 61 73 68 12  0a 35 36 36 64 37 62 39  |on-hash..566d7b9|
		000000a0  66 38 35 5a 15 0a 07 6b  38 73 2d 61 70 70 12 0a  |f85Z...k8s-app..|
		000000b0  6b 75 62 65 2d 70 72 6f  78 79 5a 1c 0a 17 70 6f  |kube-proxyZ...po|
		000000c0  64 2d 74 65 6d 70 6c 61  74 65 2d 67 65 6e 65 72  |d-template-gene [truncated 23308 chars]
	 >
	I0210 12:23:04.757099    5644 type.go:168] "Request Body" body=""
	I0210 12:23:04.949585    5644 request.go:661] Waited for 192.4835ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.129.181:8443/api/v1/nodes/multinode-032400-m03
	I0210 12:23:04.949585    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400-m03
	I0210 12:23:04.949991    5644 round_trippers.go:476] Request Headers:
	I0210 12:23:04.949991    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:23:04.950038    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:23:04.953804    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:23:04.953804    5644 round_trippers.go:584] Response Headers:
	I0210 12:23:04.953804    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:23:04.953804    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:23:04.953804    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:23:04.953804    5644 round_trippers.go:587]     Content-Length: 3883
	I0210 12:23:04.953804    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:23:04 GMT
	I0210 12:23:04.953804    5644 round_trippers.go:587]     Audit-Id: e18d113b-8d2c-4579-9abc-0d57b2ac43b5
	I0210 12:23:04.953804    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:23:04.953804    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 94 1e 0a eb 12 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 33 12 00 1a 00  |e-032400-m03....|
		00000030  22 00 2a 24 65 33 35 38  36 61 30 65 2d 35 36 63  |".*$e3586a0e-56c|
		00000040  30 2d 34 65 34 39 2d 39  64 64 33 2d 38 33 65 35  |0-4e49-9dd3-83e5|
		00000050  32 39 63 66 65 35 63 34  32 04 31 38 35 34 38 00  |29cfe5c42.18548.|
		00000060  42 08 08 db dc a7 bd 06  10 00 5a 20 0a 17 62 65  |B.........Z ..be|
		00000070  74 61 2e 6b 75 62 65 72  6e 65 74 65 73 2e 69 6f  |ta.kubernetes.io|
		00000080  2f 61 72 63 68 12 05 61  6d 64 36 34 5a 1e 0a 15  |/arch..amd64Z...|
		00000090  62 65 74 61 2e 6b 75 62  65 72 6e 65 74 65 73 2e  |beta.kubernetes.|
		000000a0  69 6f 2f 6f 73 12 05 6c  69 6e 75 78 5a 1b 0a 12  |io/os..linuxZ...|
		000000b0  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 61 72  |kubernetes.io/ar|
		000000c0  63 68 12 05 61 6d 64 36  34 5a 2e 0a 16 6b 75 62  |ch..amd64Z...ku [truncated 18168 chars]
	 >
	I0210 12:23:04.954336    5644 pod_ready.go:98] node "multinode-032400-m03" hosting pod "kube-proxy-tbtqd" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-032400-m03" has status "Ready":"Unknown"
	I0210 12:23:04.954336    5644 pod_ready.go:82] duration metric: took 359.4056ms for pod "kube-proxy-tbtqd" in "kube-system" namespace to be "Ready" ...
	E0210 12:23:04.954336    5644 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-032400-m03" hosting pod "kube-proxy-tbtqd" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-032400-m03" has status "Ready":"Unknown"
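(Editorial aside: the "Waited for … due to client-side throttling, not priority and fairness" lines above come from client-go's token-bucket rate limiter on the client, not from server-side APF. A minimal sketch of where that limiter is configured, assuming a standard client-go setup — the package, function, and numbers here are illustrative, not minikube's code:)

```go
// Package throttling sketches how the client-side rate limiter seen in the
// log is configured. With QPS/Burst left at zero, client-go defaults to
// 5 requests/sec with a burst of 10, which is what produces the ~150-200ms
// waits logged above.
package throttling

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// NewClient builds a clientset whose request rate limiter is raised above
// the defaults; the exact numbers are illustrative only.
func NewClient(kubeconfig string) (*kubernetes.Clientset, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return nil, err
	}
	cfg.QPS = 50    // default is 5 when left at zero
	cfg.Burst = 100 // default is 10 when left at zero
	return kubernetes.NewForConfig(cfg)
}
```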
	I0210 12:23:04.954336    5644 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-xltxj" in "kube-system" namespace to be "Ready" ...
	I0210 12:23:04.954428    5644 type.go:168] "Request Body" body=""
	I0210 12:23:05.149324    5644 request.go:661] Waited for 194.8134ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xltxj
	I0210 12:23:05.149324    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xltxj
	I0210 12:23:05.149324    5644 round_trippers.go:476] Request Headers:
	I0210 12:23:05.149324    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:23:05.149324    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:23:05.153248    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:23:05.153248    5644 round_trippers.go:584] Response Headers:
	I0210 12:23:05.153248    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:23:05.153248    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:23:05.153248    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:23:05.153248    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:23:05 GMT
	I0210 12:23:05.153248    5644 round_trippers.go:587]     Audit-Id: 5d611965-8565-4d4b-a8e1-3414a6f9670a
	I0210 12:23:05.153248    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:23:05.153767    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  aa 26 0a c3 15 0a 10 6b  75 62 65 2d 70 72 6f 78  |.&.....kube-prox|
		00000020  79 2d 78 6c 74 78 6a 12  0b 6b 75 62 65 2d 70 72  |y-xltxj..kube-pr|
		00000030  6f 78 79 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |oxy-..kube-syste|
		00000040  6d 22 00 2a 24 39 61 35  65 35 38 62 63 2d 35 34  |m".*$9a5e58bc-54|
		00000050  62 31 2d 34 33 62 39 2d  61 38 38 39 2d 30 64 35  |b1-43b9-a889-0d5|
		00000060  30 64 34 33 35 61 66 38  33 32 04 31 39 34 31 38  |0d435af832.19418|
		00000070  00 42 08 08 d0 d5 a7 bd  06 10 00 5a 26 0a 18 63  |.B.........Z&..c|
		00000080  6f 6e 74 72 6f 6c 6c 65  72 2d 72 65 76 69 73 69  |ontroller-revisi|
		00000090  6f 6e 2d 68 61 73 68 12  0a 35 36 36 64 37 62 39  |on-hash..566d7b9|
		000000a0  66 38 35 5a 15 0a 07 6b  38 73 2d 61 70 70 12 0a  |f85Z...k8s-app..|
		000000b0  6b 75 62 65 2d 70 72 6f  78 79 5a 1c 0a 17 70 6f  |kube-proxyZ...po|
		000000c0  64 2d 74 65 6d 70 6c 61  74 65 2d 67 65 6e 65 72  |d-template-gene [truncated 23308 chars]
	 >
	I0210 12:23:05.153951    5644 type.go:168] "Request Body" body=""
	I0210 12:23:05.349864    5644 request.go:661] Waited for 195.9106ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.129.181:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:23:05.349864    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:23:05.349864    5644 round_trippers.go:476] Request Headers:
	I0210 12:23:05.349864    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:23:05.349864    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:23:05.352992    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:23:05.352992    5644 round_trippers.go:584] Response Headers:
	I0210 12:23:05.353979    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:23:05.353979    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:23:05.353979    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:23:05.353979    5644 round_trippers.go:587]     Content-Length: 4039
	I0210 12:23:05.353979    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:23:05 GMT
	I0210 12:23:05.353979    5644 round_trippers.go:587]     Audit-Id: 45e2b683-0eee-4959-870a-11626cfadfed
	I0210 12:23:05.353979    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:23:05.354441    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 b0 1f 0a f9 12 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 62 30 35 36  31 63 32 32 2d 64 62 66  |".*$b0561c22-dbf|
		00000040  32 2d 34 32 61 30 2d 62  64 66 33 2d 34 65 30 61  |2-42a0-bdf3-4e0a|
		00000050  62 37 61 39 61 66 30 65  32 04 31 39 35 31 38 00  |b7a9af0e2.19518.|
		00000060  42 08 08 d0 d5 a7 bd 06  10 00 5a 20 0a 17 62 65  |B.........Z ..be|
		00000070  74 61 2e 6b 75 62 65 72  6e 65 74 65 73 2e 69 6f  |ta.kubernetes.io|
		00000080  2f 61 72 63 68 12 05 61  6d 64 36 34 5a 1e 0a 15  |/arch..amd64Z...|
		00000090  62 65 74 61 2e 6b 75 62  65 72 6e 65 74 65 73 2e  |beta.kubernetes.|
		000000a0  69 6f 2f 6f 73 12 05 6c  69 6e 75 78 5a 1b 0a 12  |io/os..linuxZ...|
		000000b0  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 61 72  |kubernetes.io/ar|
		000000c0  63 68 12 05 61 6d 64 36  34 5a 2e 0a 16 6b 75 62  |ch..amd64Z...ku [truncated 18954 chars]
	 >
	I0210 12:23:05.354610    5644 pod_ready.go:98] node "multinode-032400-m02" hosting pod "kube-proxy-xltxj" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-032400-m02" has status "Ready":"Unknown"
	I0210 12:23:05.354701    5644 pod_ready.go:82] duration metric: took 400.3605ms for pod "kube-proxy-xltxj" in "kube-system" namespace to be "Ready" ...
	E0210 12:23:05.354760    5644 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-032400-m02" hosting pod "kube-proxy-xltxj" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-032400-m02" has status "Ready":"Unknown"
	I0210 12:23:05.354760    5644 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-multinode-032400" in "kube-system" namespace to be "Ready" ...
	I0210 12:23:05.354760    5644 type.go:168] "Request Body" body=""
	I0210 12:23:05.549024    5644 request.go:661] Waited for 194.2627ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-032400
	I0210 12:23:05.549024    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-032400
	I0210 12:23:05.549024    5644 round_trippers.go:476] Request Headers:
	I0210 12:23:05.549024    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:23:05.549024    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:23:05.553292    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:23:05.553412    5644 round_trippers.go:584] Response Headers:
	I0210 12:23:05.553412    5644 round_trippers.go:587]     Audit-Id: a62493db-9831-47bd-ba0d-430197aabcc9
	I0210 12:23:05.553412    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:23:05.553412    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:23:05.553412    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:23:05.553412    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:23:05.553412    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:23:05 GMT
	I0210 12:23:05.553709    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  ea 23 0a 83 18 0a 1f 6b  75 62 65 2d 73 63 68 65  |.#.....kube-sche|
		00000020  64 75 6c 65 72 2d 6d 75  6c 74 69 6e 6f 64 65 2d  |duler-multinode-|
		00000030  30 33 32 34 30 30 12 00  1a 0b 6b 75 62 65 2d 73  |032400....kube-s|
		00000040  79 73 74 65 6d 22 00 2a  24 63 66 62 63 62 66 33  |ystem".*$cfbcbf3|
		00000050  37 2d 36 35 63 65 2d 34  62 32 31 2d 39 66 32 36  |7-65ce-4b21-9f26|
		00000060  2d 31 38 64 61 66 63 36  65 34 34 38 30 32 04 31  |-18dafc6e44802.1|
		00000070  38 37 38 38 00 42 08 08  88 d4 a7 bd 06 10 00 5a  |8788.B.........Z|
		00000080  1b 0a 09 63 6f 6d 70 6f  6e 65 6e 74 12 0e 6b 75  |...component..ku|
		00000090  62 65 2d 73 63 68 65 64  75 6c 65 72 5a 15 0a 04  |be-schedulerZ...|
		000000a0  74 69 65 72 12 0d 63 6f  6e 74 72 6f 6c 2d 70 6c  |tier..control-pl|
		000000b0  61 6e 65 62 3d 0a 19 6b  75 62 65 72 6e 65 74 65  |aneb=..kubernete|
		000000c0  73 2e 69 6f 2f 63 6f 6e  66 69 67 2e 68 61 73 68  |s.io/config.has [truncated 21728 chars]
	 >
	I0210 12:23:05.553968    5644 type.go:168] "Request Body" body=""
	I0210 12:23:05.748898    5644 request.go:661] Waited for 194.8641ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:23:05.748898    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:23:05.748898    5644 round_trippers.go:476] Request Headers:
	I0210 12:23:05.748898    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:23:05.748898    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:23:05.752583    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:23:05.752583    5644 round_trippers.go:584] Response Headers:
	I0210 12:23:05.752668    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:23:05.752668    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:23:05.752668    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:23:05.752668    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:23:05 GMT
	I0210 12:23:05.752668    5644 round_trippers.go:587]     Audit-Id: d68c1b1d-4902-42c5-acde-dcd187448f59
	I0210 12:23:05.752668    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:23:05.752937    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d4 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 39  33 35 38 00 42 08 08 86  |1b262.19358.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22276 chars]
	 >
	I0210 12:23:05.753140    5644 pod_ready.go:93] pod "kube-scheduler-multinode-032400" in "kube-system" namespace has status "Ready":"True"
	I0210 12:23:05.753140    5644 pod_ready.go:82] duration metric: took 398.3762ms for pod "kube-scheduler-multinode-032400" in "kube-system" namespace to be "Ready" ...
	I0210 12:23:05.753194    5644 pod_ready.go:39] duration metric: took 16.7112371s of extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
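(Editorial aside: the summary line above closes the readiness phase — each system pod was polled roughly every 500ms, with a 6m0s budget per pod, until its PodReady condition reported True, and pods hosted on not-Ready nodes were skipped. A minimal sketch of such a wait loop under the assumption of a plain client-go client; the names below are illustrative, not minikube's pod_ready.go:)

```go
// Package readiness sketches the poll-until-Ready pattern visible in the
// pod_ready.go log lines above.
package readiness

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// WaitPodReady polls the named pod until its PodReady condition is True,
// mirroring the ~500ms poll interval and 6m0s per-pod budget in the log.
func WaitPodReady(ctx context.Context, c kubernetes.Interface, ns, name string) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, err // give up on lookup errors
			}
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady {
					return cond.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil // condition not reported yet; keep polling
		})
}
```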
	I0210 12:23:05.753194    5644 api_server.go:52] waiting for apiserver process to appear ...
	I0210 12:23:05.761213    5644 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0210 12:23:05.786460    5644 command_runner.go:130] > f368bd876774
	I0210 12:23:05.788040    5644 logs.go:282] 1 containers: [f368bd876774]
	I0210 12:23:05.795134    5644 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0210 12:23:05.820378    5644 command_runner.go:130] > 2c0b97381825
	I0210 12:23:05.823355    5644 logs.go:282] 1 containers: [2c0b97381825]
	I0210 12:23:05.833034    5644 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0210 12:23:05.859437    5644 command_runner.go:130] > 9240ce80f94c
	I0210 12:23:05.859437    5644 command_runner.go:130] > c5b854dbb912
	I0210 12:23:05.861148    5644 logs.go:282] 2 containers: [9240ce80f94c c5b854dbb912]
	I0210 12:23:05.868511    5644 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0210 12:23:05.891115    5644 command_runner.go:130] > 440b6adf4512
	I0210 12:23:05.891115    5644 command_runner.go:130] > adf520f9b9d7
	I0210 12:23:05.891115    5644 logs.go:282] 2 containers: [440b6adf4512 adf520f9b9d7]
	I0210 12:23:05.899436    5644 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0210 12:23:05.923653    5644 command_runner.go:130] > 6640b4e3d696
	I0210 12:23:05.924565    5644 command_runner.go:130] > 148309413de8
	I0210 12:23:05.925704    5644 logs.go:282] 2 containers: [6640b4e3d696 148309413de8]
	I0210 12:23:05.932215    5644 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0210 12:23:05.959696    5644 command_runner.go:130] > bd1666238ae6
	I0210 12:23:05.959696    5644 command_runner.go:130] > 9408ce83d7d3
	I0210 12:23:05.959696    5644 logs.go:282] 2 containers: [bd1666238ae6 9408ce83d7d3]
	I0210 12:23:05.967720    5644 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0210 12:23:05.994331    5644 command_runner.go:130] > efc2d4164d81
	I0210 12:23:05.994405    5644 command_runner.go:130] > 4439940fa5f4
	I0210 12:23:05.994405    5644 logs.go:282] 2 containers: [efc2d4164d81 4439940fa5f4]
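(Editorial aside: the seven `docker ps -a --filter=name=k8s_<component> --format={{.ID}}` runs above enumerate one or two container IDs per control-plane component before log gathering begins. A standalone sketch of that discovery step, assuming a local docker CLI — minikube actually executes it inside the VM over SSH via ssh_runner:)

```go
// Lists container IDs for a Kubernetes component by name prefix, using the
// same docker invocation shown in the ssh_runner lines above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil // one ID per output line
}

func main() {
	ids, err := containerIDs("kube-apiserver")
	if err != nil {
		panic(err)
	}
	fmt.Println(ids) // e.g. [f368bd876774], matching logs.go:282 above
}
```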
	I0210 12:23:05.994405    5644 logs.go:123] Gathering logs for kube-controller-manager [bd1666238ae6] ...
	I0210 12:23:05.994477    5644 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd1666238ae6"
	I0210 12:23:06.022795    5644 command_runner.go:130] ! I0210 12:21:56.136957       1 serving.go:386] Generated self-signed cert in-memory
	I0210 12:23:06.022795    5644 command_runner.go:130] ! I0210 12:21:57.522140       1 controllermanager.go:185] "Starting" version="v1.32.1"
	I0210 12:23:06.023041    5644 command_runner.go:130] ! I0210 12:21:57.522494       1 controllermanager.go:187] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0210 12:23:06.023041    5644 command_runner.go:130] ! I0210 12:21:57.526750       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0210 12:23:06.023041    5644 command_runner.go:130] ! I0210 12:21:57.527225       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0210 12:23:06.023100    5644 command_runner.go:130] ! I0210 12:21:57.527482       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0210 12:23:06.023127    5644 command_runner.go:130] ! I0210 12:21:57.527780       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0210 12:23:06.023127    5644 command_runner.go:130] ! I0210 12:22:00.130437       1 controllermanager.go:765] "Started controller" controller="serviceaccount-token-controller"
	I0210 12:23:06.023127    5644 command_runner.go:130] ! I0210 12:22:00.131309       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0210 12:23:06.023127    5644 command_runner.go:130] ! I0210 12:22:00.141220       1 controllermanager.go:765] "Started controller" controller="deployment-controller"
	I0210 12:23:06.023127    5644 command_runner.go:130] ! I0210 12:22:00.141440       1 deployment_controller.go:173] "Starting controller" logger="deployment-controller" controller="deployment"
	I0210 12:23:06.023127    5644 command_runner.go:130] ! I0210 12:22:00.141453       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0210 12:23:06.023127    5644 command_runner.go:130] ! I0210 12:22:00.144469       1 controllermanager.go:765] "Started controller" controller="replicaset-controller"
	I0210 12:23:06.023127    5644 command_runner.go:130] ! I0210 12:22:00.144719       1 replica_set.go:217] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0210 12:23:06.023127    5644 command_runner.go:130] ! I0210 12:22:00.144731       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0210 12:23:06.023127    5644 command_runner.go:130] ! I0210 12:22:00.152448       1 controllermanager.go:765] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0210 12:23:06.023127    5644 command_runner.go:130] ! I0210 12:22:00.152587       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0210 12:23:06.023127    5644 command_runner.go:130] ! I0210 12:22:00.152599       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0210 12:23:06.023127    5644 command_runner.go:130] ! I0210 12:22:00.158456       1 controllermanager.go:765] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0210 12:23:06.023127    5644 command_runner.go:130] ! I0210 12:22:00.158611       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0210 12:23:06.023127    5644 command_runner.go:130] ! I0210 12:22:00.162098       1 controllermanager.go:765] "Started controller" controller="bootstrap-signer-controller"
	I0210 12:23:06.023127    5644 command_runner.go:130] ! I0210 12:22:00.162345       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0210 12:23:06.023127    5644 command_runner.go:130] ! I0210 12:22:00.162310       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0210 12:23:06.023127    5644 command_runner.go:130] ! I0210 12:22:00.234708       1 shared_informer.go:320] Caches are synced for tokens
	I0210 12:23:06.023127    5644 command_runner.go:130] ! I0210 12:22:00.279835       1 controllermanager.go:765] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0210 12:23:06.023127    5644 command_runner.go:130] ! I0210 12:22:00.279920       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0210 12:23:06.023127    5644 command_runner.go:130] ! I0210 12:22:00.284387       1 controllermanager.go:765] "Started controller" controller="serviceaccount-controller"
	I0210 12:23:06.023127    5644 command_runner.go:130] ! I0210 12:22:00.284535       1 serviceaccounts_controller.go:114] "Starting service account controller" logger="serviceaccount-controller"
	I0210 12:23:06.023127    5644 command_runner.go:130] ! I0210 12:22:00.284562       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0210 12:23:06.023127    5644 command_runner.go:130] ! I0210 12:22:00.327944       1 namespace_controller.go:202] "Starting namespace controller" logger="namespace-controller"
	I0210 12:23:06.023127    5644 command_runner.go:130] ! I0210 12:22:00.330591       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0210 12:23:06.023127    5644 command_runner.go:130] ! I0210 12:22:00.327092       1 controllermanager.go:765] "Started controller" controller="namespace-controller"
	I0210 12:23:06.023127    5644 command_runner.go:130] ! I0210 12:22:00.346573       1 controllermanager.go:765] "Started controller" controller="disruption-controller"
	I0210 12:23:06.023127    5644 command_runner.go:130] ! I0210 12:22:00.346887       1 disruption.go:452] "Sending events to api server." logger="disruption-controller"
	I0210 12:23:06.023127    5644 command_runner.go:130] ! I0210 12:22:00.347031       1 disruption.go:463] "Starting disruption controller" logger="disruption-controller"
	I0210 12:23:06.023127    5644 command_runner.go:130] ! I0210 12:22:00.347049       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0210 12:23:06.023127    5644 command_runner.go:130] ! I0210 12:22:00.351852       1 controllermanager.go:765] "Started controller" controller="ttl-controller"
	I0210 12:23:06.023127    5644 command_runner.go:130] ! I0210 12:22:00.351879       1 controllermanager.go:723] "Skipping a cloud provider controller" controller="service-lb-controller"
	I0210 12:23:06.023127    5644 command_runner.go:130] ! I0210 12:22:00.351888       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="volumeattributesclass-protection-controller" requiredFeatureGates=["VolumeAttributesClass"]
	I0210 12:23:06.023127    5644 command_runner.go:130] ! I0210 12:22:00.354359       1 ttl_controller.go:127] "Starting TTL controller" logger="ttl-controller"
	I0210 12:23:06.023867    5644 command_runner.go:130] ! I0210 12:22:00.354950       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0210 12:23:06.023867    5644 command_runner.go:130] ! I0210 12:22:00.356835       1 controllermanager.go:765] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0210 12:23:06.023867    5644 command_runner.go:130] ! I0210 12:22:00.356898       1 publisher.go:107] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0210 12:23:06.023867    5644 command_runner.go:130] ! I0210 12:22:00.357416       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0210 12:23:06.023867    5644 command_runner.go:130] ! I0210 12:22:00.366037       1 controllermanager.go:765] "Started controller" controller="ephemeral-volume-controller"
	I0210 12:23:06.023867    5644 command_runner.go:130] ! I0210 12:22:00.367715       1 controller.go:173] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0210 12:23:06.023867    5644 command_runner.go:130] ! I0210 12:22:00.367737       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0210 12:23:06.023867    5644 command_runner.go:130] ! I0210 12:22:00.403903       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0210 12:23:06.023867    5644 command_runner.go:130] ! I0210 12:22:00.403962       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0210 12:23:06.023867    5644 command_runner.go:130] ! I0210 12:22:00.403986       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0210 12:23:06.023867    5644 command_runner.go:130] ! W0210 12:22:00.404002       1 shared_informer.go:597] resyncPeriod 20h28m18.826536572s is smaller than resyncCheckPeriod 23h57m31.932623877s and the informer has already started. Changing it to 23h57m31.932623877s
	I0210 12:23:06.023867    5644 command_runner.go:130] ! I0210 12:22:00.404054       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0210 12:23:06.023867    5644 command_runner.go:130] ! I0210 12:22:00.404070       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0210 12:23:06.023867    5644 command_runner.go:130] ! I0210 12:22:00.404083       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0210 12:23:06.023867    5644 command_runner.go:130] ! I0210 12:22:00.404215       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0210 12:23:06.023867    5644 command_runner.go:130] ! I0210 12:22:00.404325       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0210 12:23:06.023867    5644 command_runner.go:130] ! I0210 12:22:00.404361       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0210 12:23:06.023867    5644 command_runner.go:130] ! W0210 12:22:00.404375       1 shared_informer.go:597] resyncPeriod 19h58m52.828542411s is smaller than resyncCheckPeriod 23h57m31.932623877s and the informer has already started. Changing it to 23h57m31.932623877s
	I0210 12:23:06.023867    5644 command_runner.go:130] ! I0210 12:22:00.404428       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0210 12:23:06.023867    5644 command_runner.go:130] ! I0210 12:22:00.404501       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0210 12:23:06.023867    5644 command_runner.go:130] ! I0210 12:22:00.404548       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0210 12:23:06.024399    5644 command_runner.go:130] ! I0210 12:22:00.404581       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0210 12:23:06.024399    5644 command_runner.go:130] ! I0210 12:22:00.404616       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0210 12:23:06.024450    5644 command_runner.go:130] ! I0210 12:22:00.405026       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0210 12:23:06.024483    5644 command_runner.go:130] ! I0210 12:22:00.405085       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0210 12:23:06.024503    5644 command_runner.go:130] ! I0210 12:22:00.405102       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0210 12:23:06.024503    5644 command_runner.go:130] ! I0210 12:22:00.405117       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0210 12:23:06.024503    5644 command_runner.go:130] ! I0210 12:22:00.405133       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0210 12:23:06.024582    5644 command_runner.go:130] ! I0210 12:22:00.405155       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0210 12:23:06.024582    5644 command_runner.go:130] ! I0210 12:22:00.407446       1 controllermanager.go:765] "Started controller" controller="resourcequota-controller"
	I0210 12:23:06.024582    5644 command_runner.go:130] ! I0210 12:22:00.407747       1 resource_quota_controller.go:300] "Starting resource quota controller" logger="resourcequota-controller"
	I0210 12:23:06.024582    5644 command_runner.go:130] ! I0210 12:22:00.407814       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0210 12:23:06.024631    5644 command_runner.go:130] ! I0210 12:22:00.408146       1 resource_quota_monitor.go:308] "QuotaMonitor running" logger="resourcequota-controller"
	I0210 12:23:06.024655    5644 command_runner.go:130] ! I0210 12:22:00.416214       1 controllermanager.go:765] "Started controller" controller="cronjob-controller"
	I0210 12:23:06.024655    5644 command_runner.go:130] ! I0210 12:22:00.416425       1 controllermanager.go:723] "Skipping a cloud provider controller" controller="node-route-controller"
	I0210 12:23:06.024655    5644 command_runner.go:130] ! I0210 12:22:00.417001       1 cronjob_controllerv2.go:145] "Starting cronjob controller v2" logger="cronjob-controller"
	I0210 12:23:06.024655    5644 command_runner.go:130] ! I0210 12:22:00.418614       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0210 12:23:06.024655    5644 command_runner.go:130] ! I0210 12:22:00.448143       1 controllermanager.go:765] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0210 12:23:06.024655    5644 command_runner.go:130] ! I0210 12:22:00.448205       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0210 12:23:06.024655    5644 command_runner.go:130] ! I0210 12:22:00.453507       1 pvc_protection_controller.go:168] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0210 12:23:06.024655    5644 command_runner.go:130] ! I0210 12:22:00.453526       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0210 12:23:06.024655    5644 command_runner.go:130] ! I0210 12:22:00.457427       1 controllermanager.go:765] "Started controller" controller="pod-garbage-collector-controller"
	I0210 12:23:06.024655    5644 command_runner.go:130] ! I0210 12:22:00.457525       1 gc_controller.go:99] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0210 12:23:06.024655    5644 command_runner.go:130] ! I0210 12:22:00.457536       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0210 12:23:06.024655    5644 command_runner.go:130] ! I0210 12:22:00.461217       1 controllermanager.go:765] "Started controller" controller="job-controller"
	I0210 12:23:06.024655    5644 command_runner.go:130] ! I0210 12:22:00.461528       1 job_controller.go:243] "Starting job controller" logger="job-controller"
	I0210 12:23:06.024655    5644 command_runner.go:130] ! I0210 12:22:00.461540       1 shared_informer.go:313] Waiting for caches to sync for job
	I0210 12:23:06.024655    5644 command_runner.go:130] ! I0210 12:22:00.473609       1 controllermanager.go:765] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0210 12:23:06.024655    5644 command_runner.go:130] ! I0210 12:22:00.473750       1 horizontal.go:201] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0210 12:23:06.024655    5644 command_runner.go:130] ! I0210 12:22:00.476529       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0210 12:23:06.024655    5644 command_runner.go:130] ! I0210 12:22:00.478245       1 controllermanager.go:765] "Started controller" controller="persistentvolume-expander-controller"
	I0210 12:23:06.024655    5644 command_runner.go:130] ! I0210 12:22:00.478384       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0210 12:23:06.024655    5644 command_runner.go:130] ! I0210 12:22:00.478413       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0210 12:23:06.024655    5644 command_runner.go:130] ! I0210 12:22:00.486564       1 controllermanager.go:765] "Started controller" controller="ttl-after-finished-controller"
	I0210 12:23:06.024655    5644 command_runner.go:130] ! I0210 12:22:00.490692       1 ttlafterfinished_controller.go:112] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0210 12:23:06.024655    5644 command_runner.go:130] ! I0210 12:22:00.490721       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0210 12:23:06.024655    5644 command_runner.go:130] ! I0210 12:22:00.491067       1 controllermanager.go:765] "Started controller" controller="endpointslice-controller"
	I0210 12:23:06.024655    5644 command_runner.go:130] ! I0210 12:22:00.491429       1 endpointslice_controller.go:281] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0210 12:23:06.024655    5644 command_runner.go:130] ! I0210 12:22:00.492232       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0210 12:23:06.024655    5644 command_runner.go:130] ! I0210 12:22:00.495646       1 controllermanager.go:765] "Started controller" controller="endpointslice-mirroring-controller"
	I0210 12:23:06.024655    5644 command_runner.go:130] ! I0210 12:22:00.500509       1 endpointslicemirroring_controller.go:227] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0210 12:23:06.024655    5644 command_runner.go:130] ! I0210 12:22:00.500524       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0210 12:23:06.024655    5644 command_runner.go:130] ! I0210 12:22:00.515593       1 controllermanager.go:765] "Started controller" controller="garbage-collector-controller"
	I0210 12:23:06.024655    5644 command_runner.go:130] ! I0210 12:22:00.515770       1 garbagecollector.go:144] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0210 12:23:06.025185    5644 command_runner.go:130] ! I0210 12:22:00.515782       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0210 12:23:06.025185    5644 command_runner.go:130] ! I0210 12:22:00.515950       1 graph_builder.go:351] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0210 12:23:06.025185    5644 command_runner.go:130] ! I0210 12:22:00.525570       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0210 12:23:06.025237    5644 command_runner.go:130] ! I0210 12:22:00.525594       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0210 12:23:06.025237    5644 command_runner.go:130] ! I0210 12:22:00.525618       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0210 12:23:06.025237    5644 command_runner.go:130] ! I0210 12:22:00.525997       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0210 12:23:06.025237    5644 command_runner.go:130] ! I0210 12:22:00.526011       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0210 12:23:06.025237    5644 command_runner.go:130] ! I0210 12:22:00.526038       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0210 12:23:06.025237    5644 command_runner.go:130] ! I0210 12:22:00.526889       1 controllermanager.go:765] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0210 12:23:06.025237    5644 command_runner.go:130] ! I0210 12:22:00.526935       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0210 12:23:06.025237    5644 command_runner.go:130] ! I0210 12:22:00.526945       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0210 12:23:06.025237    5644 command_runner.go:130] ! I0210 12:22:00.526972       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0210 12:23:06.025237    5644 command_runner.go:130] ! I0210 12:22:00.526980       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0210 12:23:06.025237    5644 command_runner.go:130] ! I0210 12:22:00.527008       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0210 12:23:06.025237    5644 command_runner.go:130] ! I0210 12:22:00.527135       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0210 12:23:06.025237    5644 command_runner.go:130] ! W0210 12:22:00.695736       1 type.go:183] The watchlist request for nodes ended with an error, falling back to the standard LIST semantics, err = nodes is forbidden: User "system:serviceaccount:kube-system:node-controller" cannot watch resource "nodes" in API group "" at the cluster scope
	I0210 12:23:06.025237    5644 command_runner.go:130] ! I0210 12:22:00.710455       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0210 12:23:06.025237    5644 command_runner.go:130] ! I0210 12:22:00.710510       1 controllermanager.go:765] "Started controller" controller="node-ipam-controller"
	I0210 12:23:06.025237    5644 command_runner.go:130] ! I0210 12:22:00.710723       1 node_ipam_controller.go:141] "Starting ipam controller" logger="node-ipam-controller"
	I0210 12:23:06.025237    5644 command_runner.go:130] ! I0210 12:22:00.710737       1 shared_informer.go:313] Waiting for caches to sync for node
	I0210 12:23:06.025237    5644 command_runner.go:130] ! I0210 12:22:00.739126       1 node_lifecycle_controller.go:432] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0210 12:23:06.025237    5644 command_runner.go:130] ! I0210 12:22:00.739307       1 controllermanager.go:765] "Started controller" controller="node-lifecycle-controller"
	I0210 12:23:06.025237    5644 command_runner.go:130] ! I0210 12:22:00.739552       1 node_lifecycle_controller.go:466] "Sending events to api server" logger="node-lifecycle-controller"
	I0210 12:23:06.025237    5644 command_runner.go:130] ! I0210 12:22:00.739769       1 node_lifecycle_controller.go:477] "Starting node controller" logger="node-lifecycle-controller"
	I0210 12:23:06.025237    5644 command_runner.go:130] ! I0210 12:22:00.739879       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0210 12:23:06.025237    5644 command_runner.go:130] ! I0210 12:22:00.790336       1 controllermanager.go:765] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0210 12:23:06.025237    5644 command_runner.go:130] ! I0210 12:22:00.790542       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0210 12:23:06.025237    5644 command_runner.go:130] ! I0210 12:22:00.790764       1 attach_detach_controller.go:338] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0210 12:23:06.025237    5644 command_runner.go:130] ! I0210 12:22:00.790827       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0210 12:23:06.025237    5644 command_runner.go:130] ! I0210 12:22:00.837132       1 controllermanager.go:765] "Started controller" controller="endpoints-controller"
	I0210 12:23:06.025237    5644 command_runner.go:130] ! I0210 12:22:00.837610       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="selinux-warning-controller" requiredFeatureGates=["SELinuxChangePolicy"]
	I0210 12:23:06.025237    5644 command_runner.go:130] ! I0210 12:22:00.838001       1 endpoints_controller.go:182] "Starting endpoint controller" logger="endpoints-controller"
	I0210 12:23:06.025237    5644 command_runner.go:130] ! I0210 12:22:00.838149       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0210 12:23:06.025237    5644 command_runner.go:130] ! I0210 12:22:00.889036       1 controllermanager.go:765] "Started controller" controller="daemonset-controller"
	I0210 12:23:06.025237    5644 command_runner.go:130] ! I0210 12:22:00.889446       1 daemon_controller.go:294] "Starting daemon sets controller" logger="daemonset-controller"
	I0210 12:23:06.025767    5644 command_runner.go:130] ! I0210 12:22:00.889702       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0210 12:23:06.025767    5644 command_runner.go:130] ! I0210 12:22:00.947566       1 controllermanager.go:765] "Started controller" controller="token-cleaner-controller"
	I0210 12:23:06.025767    5644 command_runner.go:130] ! I0210 12:22:00.947979       1 tokencleaner.go:117] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0210 12:23:06.025767    5644 command_runner.go:130] ! I0210 12:22:00.948130       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0210 12:23:06.025813    5644 command_runner.go:130] ! I0210 12:22:00.948247       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0210 12:23:06.025813    5644 command_runner.go:130] ! I0210 12:22:00.998978       1 controllermanager.go:765] "Started controller" controller="persistentvolume-binder-controller"
	I0210 12:23:06.025852    5644 command_runner.go:130] ! I0210 12:22:00.999002       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="kube-apiserver-serving-clustertrustbundle-publisher-controller" requiredFeatureGates=["ClusterTrustBundle"]
	I0210 12:23:06.025852    5644 command_runner.go:130] ! I0210 12:22:00.999105       1 pv_controller_base.go:308] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0210 12:23:06.025852    5644 command_runner.go:130] ! I0210 12:22:00.999117       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0210 12:23:06.025890    5644 command_runner.go:130] ! I0210 12:22:01.040388       1 controllermanager.go:765] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0210 12:23:06.025890    5644 command_runner.go:130] ! I0210 12:22:01.040661       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0210 12:23:06.025922    5644 command_runner.go:130] ! I0210 12:22:01.041004       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0210 12:23:06.025922    5644 command_runner.go:130] ! I0210 12:22:01.087635       1 controllermanager.go:765] "Started controller" controller="taint-eviction-controller"
	I0210 12:23:06.025922    5644 command_runner.go:130] ! I0210 12:22:01.088431       1 controllermanager.go:723] "Skipping a cloud provider controller" controller="cloud-node-lifecycle-controller"
	I0210 12:23:06.025922    5644 command_runner.go:130] ! I0210 12:22:01.088403       1 taint_eviction.go:281] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0210 12:23:06.025922    5644 command_runner.go:130] ! I0210 12:22:01.088651       1 taint_eviction.go:287] "Sending events to api server" logger="taint-eviction-controller"
	I0210 12:23:06.025922    5644 command_runner.go:130] ! I0210 12:22:01.088700       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0210 12:23:06.025922    5644 command_runner.go:130] ! I0210 12:22:01.140802       1 controllermanager.go:765] "Started controller" controller="clusterrole-aggregation-controller"
	I0210 12:23:06.025922    5644 command_runner.go:130] ! I0210 12:22:01.140881       1 clusterroleaggregation_controller.go:194] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0210 12:23:06.025922    5644 command_runner.go:130] ! I0210 12:22:01.140893       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0210 12:23:06.025922    5644 command_runner.go:130] ! I0210 12:22:01.188353       1 controllermanager.go:765] "Started controller" controller="persistentvolume-protection-controller"
	I0210 12:23:06.025922    5644 command_runner.go:130] ! I0210 12:22:01.188708       1 controllermanager.go:743] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0210 12:23:06.025922    5644 command_runner.go:130] ! I0210 12:22:01.188662       1 pv_protection_controller.go:81] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0210 12:23:06.025922    5644 command_runner.go:130] ! I0210 12:22:01.189570       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0210 12:23:06.025922    5644 command_runner.go:130] ! I0210 12:22:01.238308       1 controllermanager.go:765] "Started controller" controller="replicationcontroller-controller"
	I0210 12:23:06.025922    5644 command_runner.go:130] ! I0210 12:22:01.239287       1 replica_set.go:217] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0210 12:23:06.025922    5644 command_runner.go:130] ! I0210 12:22:01.239614       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0210 12:23:06.025922    5644 command_runner.go:130] ! I0210 12:22:01.290486       1 controllermanager.go:765] "Started controller" controller="statefulset-controller"
	I0210 12:23:06.025922    5644 command_runner.go:130] ! I0210 12:22:01.297980       1 stateful_set.go:166] "Starting stateful set controller" logger="statefulset-controller"
	I0210 12:23:06.025922    5644 command_runner.go:130] ! I0210 12:22:01.298004       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0210 12:23:06.025922    5644 command_runner.go:130] ! I0210 12:22:01.330472       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0210 12:23:06.025922    5644 command_runner.go:130] ! I0210 12:22:01.360391       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0210 12:23:06.025922    5644 command_runner.go:130] ! I0210 12:22:01.379524       1 shared_informer.go:320] Caches are synced for expand
	I0210 12:23:06.025922    5644 command_runner.go:130] ! I0210 12:22:01.412039       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0210 12:23:06.025922    5644 command_runner.go:130] ! I0210 12:22:01.427926       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-032400\" does not exist"
	I0210 12:23:06.025922    5644 command_runner.go:130] ! I0210 12:22:01.429792       1 shared_informer.go:320] Caches are synced for cronjob
	I0210 12:23:06.025922    5644 command_runner.go:130] ! I0210 12:22:01.431083       1 shared_informer.go:320] Caches are synced for namespace
	I0210 12:23:06.025922    5644 command_runner.go:130] ! I0210 12:22:01.433127       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-032400-m02\" does not exist"
	I0210 12:23:06.025922    5644 command_runner.go:130] ! I0210 12:22:01.438586       1 shared_informer.go:320] Caches are synced for endpoint
	I0210 12:23:06.025922    5644 command_runner.go:130] ! I0210 12:22:01.455792       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-032400-m03\" does not exist"
	I0210 12:23:06.025922    5644 command_runner.go:130] ! I0210 12:22:01.443963       1 shared_informer.go:320] Caches are synced for deployment
	I0210 12:23:06.025922    5644 command_runner.go:130] ! I0210 12:22:01.458494       1 shared_informer.go:320] Caches are synced for GC
	I0210 12:23:06.025922    5644 command_runner.go:130] ! I0210 12:22:01.458605       1 shared_informer.go:320] Caches are synced for crt configmap
	I0210 12:23:06.026562    5644 command_runner.go:130] ! I0210 12:22:01.462564       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0210 12:23:06.026562    5644 command_runner.go:130] ! I0210 12:22:01.463137       1 shared_informer.go:320] Caches are synced for job
	I0210 12:23:06.026562    5644 command_runner.go:130] ! I0210 12:22:01.470663       1 shared_informer.go:320] Caches are synced for garbage collector
	I0210 12:23:06.026562    5644 command_runner.go:130] ! I0210 12:22:01.454359       1 shared_informer.go:320] Caches are synced for PVC protection
	I0210 12:23:06.026562    5644 command_runner.go:130] ! I0210 12:22:01.454660       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0210 12:23:06.026562    5644 command_runner.go:130] ! I0210 12:22:01.454672       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0210 12:23:06.026562    5644 command_runner.go:130] ! I0210 12:22:01.454682       1 shared_informer.go:320] Caches are synced for disruption
	I0210 12:23:06.026562    5644 command_runner.go:130] ! I0210 12:22:01.455335       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0210 12:23:06.026562    5644 command_runner.go:130] ! I0210 12:22:01.455353       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0210 12:23:06.026562    5644 command_runner.go:130] ! I0210 12:22:01.455645       1 shared_informer.go:320] Caches are synced for TTL
	I0210 12:23:06.026562    5644 command_runner.go:130] ! I0210 12:22:01.455857       1 shared_informer.go:320] Caches are synced for taint
	I0210 12:23:06.026562    5644 command_runner.go:130] ! I0210 12:22:01.479260       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0210 12:23:06.026562    5644 command_runner.go:130] ! I0210 12:22:01.455957       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0210 12:23:06.026562    5644 command_runner.go:130] ! I0210 12:22:01.480860       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0210 12:23:06.026562    5644 command_runner.go:130] ! I0210 12:22:01.471787       1 shared_informer.go:320] Caches are synced for ephemeral
	I0210 12:23:06.026562    5644 command_runner.go:130] ! I0210 12:22:01.488921       1 shared_informer.go:320] Caches are synced for HPA
	I0210 12:23:06.026562    5644 command_runner.go:130] ! I0210 12:22:01.489141       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0210 12:23:06.026562    5644 command_runner.go:130] ! I0210 12:22:01.489425       1 shared_informer.go:320] Caches are synced for service account
	I0210 12:23:06.026562    5644 command_runner.go:130] ! I0210 12:22:01.489837       1 shared_informer.go:320] Caches are synced for PV protection
	I0210 12:23:06.026562    5644 command_runner.go:130] ! I0210 12:22:01.490060       1 shared_informer.go:320] Caches are synced for daemon sets
	I0210 12:23:06.026562    5644 command_runner.go:130] ! I0210 12:22:01.492366       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0210 12:23:06.026562    5644 command_runner.go:130] ! I0210 12:22:01.492536       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0210 12:23:06.026562    5644 command_runner.go:130] ! I0210 12:22:01.492675       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-032400-m02"
	I0210 12:23:06.026562    5644 command_runner.go:130] ! I0210 12:22:01.492787       1 shared_informer.go:320] Caches are synced for attach detach
	I0210 12:23:06.026562    5644 command_runner.go:130] ! I0210 12:22:01.498224       1 shared_informer.go:320] Caches are synced for stateful set
	I0210 12:23:06.026562    5644 command_runner.go:130] ! I0210 12:22:01.499494       1 shared_informer.go:320] Caches are synced for persistent volume
	I0210 12:23:06.026562    5644 command_runner.go:130] ! I0210 12:22:01.515907       1 shared_informer.go:320] Caches are synced for garbage collector
	I0210 12:23:06.026562    5644 command_runner.go:130] ! I0210 12:22:01.518475       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0210 12:23:06.026562    5644 command_runner.go:130] ! I0210 12:22:01.518619       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0210 12:23:06.026562    5644 command_runner.go:130] ! I0210 12:22:01.517754       1 shared_informer.go:320] Caches are synced for node
	I0210 12:23:06.026562    5644 command_runner.go:130] ! I0210 12:22:01.519209       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0210 12:23:06.026562    5644 command_runner.go:130] ! I0210 12:22:01.519352       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0210 12:23:06.026562    5644 command_runner.go:130] ! I0210 12:22:01.517867       1 shared_informer.go:320] Caches are synced for resource quota
	I0210 12:23:06.026562    5644 command_runner.go:130] ! I0210 12:22:01.521228       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0210 12:23:06.026562    5644 command_runner.go:130] ! I0210 12:22:01.521505       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0210 12:23:06.027100    5644 command_runner.go:130] ! I0210 12:22:01.521662       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400"
	I0210 12:23:06.027100    5644 command_runner.go:130] ! I0210 12:22:01.521756       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:23:06.027100    5644 command_runner.go:130] ! I0210 12:22:01.521924       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:06.027100    5644 command_runner.go:130] ! I0210 12:22:01.522649       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-032400-m02"
	I0210 12:23:06.027150    5644 command_runner.go:130] ! I0210 12:22:01.522926       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-032400-m03"
	I0210 12:23:06.027150    5644 command_runner.go:130] ! I0210 12:22:01.523055       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-032400"
	I0210 12:23:06.027150    5644 command_runner.go:130] ! I0210 12:22:01.522650       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:06.027150    5644 command_runner.go:130] ! I0210 12:22:01.523304       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0210 12:23:06.027150    5644 command_runner.go:130] ! I0210 12:22:01.526544       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0210 12:23:06.027150    5644 command_runner.go:130] ! I0210 12:22:01.526740       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0210 12:23:06.027150    5644 command_runner.go:130] ! I0210 12:22:01.527233       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0210 12:23:06.027150    5644 command_runner.go:130] ! I0210 12:22:01.527235       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0210 12:23:06.027150    5644 command_runner.go:130] ! I0210 12:22:01.531258       1 shared_informer.go:320] Caches are synced for resource quota
	I0210 12:23:06.027150    5644 command_runner.go:130] ! I0210 12:22:01.620608       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:06.027150    5644 command_runner.go:130] ! I0210 12:22:01.660535       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="183.150017ms"
	I0210 12:23:06.027150    5644 command_runner.go:130] ! I0210 12:22:01.660786       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="196.91µs"
	I0210 12:23:06.027150    5644 command_runner.go:130] ! I0210 12:22:01.669840       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="192.074947ms"
	I0210 12:23:06.027150    5644 command_runner.go:130] ! I0210 12:22:01.679112       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="47.103µs"
	I0210 12:23:06.027150    5644 command_runner.go:130] ! I0210 12:22:11.608842       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400"
	I0210 12:23:06.027150    5644 command_runner.go:130] ! I0210 12:22:49.026601       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-032400-m02"
	I0210 12:23:06.027150    5644 command_runner.go:130] ! I0210 12:22:49.027936       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400"
	I0210 12:23:06.027150    5644 command_runner.go:130] ! I0210 12:22:49.051398       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400"
	I0210 12:23:06.027150    5644 command_runner.go:130] ! I0210 12:22:51.552649       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:23:06.027150    5644 command_runner.go:130] ! I0210 12:22:51.561524       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400"
	I0210 12:23:06.027150    5644 command_runner.go:130] ! I0210 12:22:51.579437       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:23:06.027150    5644 command_runner.go:130] ! I0210 12:22:51.629083       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="17.615623ms"
	I0210 12:23:06.027150    5644 command_runner.go:130] ! I0210 12:22:51.629955       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="714.433µs"
	I0210 12:23:06.027150    5644 command_runner.go:130] ! I0210 12:22:56.656809       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:23:06.027150    5644 command_runner.go:130] ! I0210 12:23:04.379320       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="10.532877ms"
	I0210 12:23:06.027150    5644 command_runner.go:130] ! I0210 12:23:04.379580       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="104.602µs"
	I0210 12:23:06.027150    5644 command_runner.go:130] ! I0210 12:23:04.418725       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="58.001µs"
	I0210 12:23:06.027150    5644 command_runner.go:130] ! I0210 12:23:04.463938       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="16.341175ms"
	I0210 12:23:06.027150    5644 command_runner.go:130] ! I0210 12:23:04.464695       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="51.6µs"
	I0210 12:23:06.045707    5644 logs.go:123] Gathering logs for describe nodes ...
	I0210 12:23:06.045707    5644 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0210 12:23:06.395456    5644 command_runner.go:130] > Name:               multinode-032400
	I0210 12:23:06.395456    5644 command_runner.go:130] > Roles:              control-plane
	I0210 12:23:06.395875    5644 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0210 12:23:06.395875    5644 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0210 12:23:06.395875    5644 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0210 12:23:06.395930    5644 command_runner.go:130] >                     kubernetes.io/hostname=multinode-032400
	I0210 12:23:06.395930    5644 command_runner.go:130] >                     kubernetes.io/os=linux
	I0210 12:23:06.396006    5644 command_runner.go:130] >                     minikube.k8s.io/commit=a597502568cd649748018b4cfeb698a4b8b36160
	I0210 12:23:06.396006    5644 command_runner.go:130] >                     minikube.k8s.io/name=multinode-032400
	I0210 12:23:06.396006    5644 command_runner.go:130] >                     minikube.k8s.io/primary=true
	I0210 12:23:06.396066    5644 command_runner.go:130] >                     minikube.k8s.io/updated_at=2025_02_10T11_59_08_0700
	I0210 12:23:06.396126    5644 command_runner.go:130] >                     minikube.k8s.io/version=v1.35.0
	I0210 12:23:06.396171    5644 command_runner.go:130] >                     node-role.kubernetes.io/control-plane=
	I0210 12:23:06.396171    5644 command_runner.go:130] >                     node.kubernetes.io/exclude-from-external-load-balancers=
	I0210 12:23:06.396227    5644 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0210 12:23:06.396227    5644 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0210 12:23:06.396227    5644 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0210 12:23:06.396287    5644 command_runner.go:130] > CreationTimestamp:  Mon, 10 Feb 2025 11:59:02 +0000
	I0210 12:23:06.396287    5644 command_runner.go:130] > Taints:             <none>
	I0210 12:23:06.396287    5644 command_runner.go:130] > Unschedulable:      false
	I0210 12:23:06.396352    5644 command_runner.go:130] > Lease:
	I0210 12:23:06.396352    5644 command_runner.go:130] >   HolderIdentity:  multinode-032400
	I0210 12:23:06.396413    5644 command_runner.go:130] >   AcquireTime:     <unset>
	I0210 12:23:06.396413    5644 command_runner.go:130] >   RenewTime:       Mon, 10 Feb 2025 12:22:59 +0000
	I0210 12:23:06.396413    5644 command_runner.go:130] > Conditions:
	I0210 12:23:06.396478    5644 command_runner.go:130] >   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	I0210 12:23:06.396538    5644 command_runner.go:130] >   ----             ------  -----------------                 ------------------                ------                       -------
	I0210 12:23:06.396538    5644 command_runner.go:130] >   MemoryPressure   False   Mon, 10 Feb 2025 12:22:48 +0000   Mon, 10 Feb 2025 11:59:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	I0210 12:23:06.396602    5644 command_runner.go:130] >   DiskPressure     False   Mon, 10 Feb 2025 12:22:48 +0000   Mon, 10 Feb 2025 11:59:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	I0210 12:23:06.396662    5644 command_runner.go:130] >   PIDPressure      False   Mon, 10 Feb 2025 12:22:48 +0000   Mon, 10 Feb 2025 11:59:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	I0210 12:23:06.396725    5644 command_runner.go:130] >   Ready            True    Mon, 10 Feb 2025 12:22:48 +0000   Mon, 10 Feb 2025 12:22:48 +0000   KubeletReady                 kubelet is posting ready status
	I0210 12:23:06.396785    5644 command_runner.go:130] > Addresses:
	I0210 12:23:06.396785    5644 command_runner.go:130] >   InternalIP:  172.29.129.181
	I0210 12:23:06.396785    5644 command_runner.go:130] >   Hostname:    multinode-032400
	I0210 12:23:06.396851    5644 command_runner.go:130] > Capacity:
	I0210 12:23:06.396851    5644 command_runner.go:130] >   cpu:                2
	I0210 12:23:06.396912    5644 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0210 12:23:06.396912    5644 command_runner.go:130] >   hugepages-2Mi:      0
	I0210 12:23:06.396912    5644 command_runner.go:130] >   memory:             2164264Ki
	I0210 12:23:06.396976    5644 command_runner.go:130] >   pods:               110
	I0210 12:23:06.396976    5644 command_runner.go:130] > Allocatable:
	I0210 12:23:06.396976    5644 command_runner.go:130] >   cpu:                2
	I0210 12:23:06.397036    5644 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0210 12:23:06.397036    5644 command_runner.go:130] >   hugepages-2Mi:      0
	I0210 12:23:06.397101    5644 command_runner.go:130] >   memory:             2164264Ki
	I0210 12:23:06.397101    5644 command_runner.go:130] >   pods:               110
	I0210 12:23:06.397101    5644 command_runner.go:130] > System Info:
	I0210 12:23:06.397101    5644 command_runner.go:130] >   Machine ID:                 bd9d6a2450e6476eba748aa8fab044bb
	I0210 12:23:06.397163    5644 command_runner.go:130] >   System UUID:                43aa2284-9342-094e-a67f-da5d9d45fabd
	I0210 12:23:06.397226    5644 command_runner.go:130] >   Boot ID:                    f18b3f97-a44b-4ae9-ad96-af2bf29d6df2
	I0210 12:23:06.397226    5644 command_runner.go:130] >   Kernel Version:             5.10.207
	I0210 12:23:06.397226    5644 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0210 12:23:06.397286    5644 command_runner.go:130] >   Operating System:           linux
	I0210 12:23:06.397286    5644 command_runner.go:130] >   Architecture:               amd64
	I0210 12:23:06.397286    5644 command_runner.go:130] >   Container Runtime Version:  docker://27.4.0
	I0210 12:23:06.397353    5644 command_runner.go:130] >   Kubelet Version:            v1.32.1
	I0210 12:23:06.397414    5644 command_runner.go:130] >   Kube-Proxy Version:         v1.32.1
	I0210 12:23:06.397414    5644 command_runner.go:130] > PodCIDR:                      10.244.0.0/24
	I0210 12:23:06.397414    5644 command_runner.go:130] > PodCIDRs:                     10.244.0.0/24
	I0210 12:23:06.397471    5644 command_runner.go:130] > Non-terminated Pods:          (9 in total)
	I0210 12:23:06.397471    5644 command_runner.go:130] >   Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0210 12:23:06.397531    5644 command_runner.go:130] >   ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	I0210 12:23:06.397594    5644 command_runner.go:130] >   default                     busybox-58667487b6-8shfg                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	I0210 12:23:06.397594    5644 command_runner.go:130] >   kube-system                 coredns-668d6bf9bc-w8rr9                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     23m
	I0210 12:23:06.397655    5644 command_runner.go:130] >   kube-system                 etcd-multinode-032400                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         68s
	I0210 12:23:06.397718    5644 command_runner.go:130] >   kube-system                 kindnet-c2mb8                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      23m
	I0210 12:23:06.397718    5644 command_runner.go:130] >   kube-system                 kube-apiserver-multinode-032400             250m (12%)    0 (0%)      0 (0%)           0 (0%)         68s
	I0210 12:23:06.397777    5644 command_runner.go:130] >   kube-system                 kube-controller-manager-multinode-032400    200m (10%)    0 (0%)      0 (0%)           0 (0%)         23m
	I0210 12:23:06.397842    5644 command_runner.go:130] >   kube-system                 kube-proxy-rrh82                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	I0210 12:23:06.397842    5644 command_runner.go:130] >   kube-system                 kube-scheduler-multinode-032400             100m (5%)     0 (0%)      0 (0%)           0 (0%)         24m
	I0210 12:23:06.397903    5644 command_runner.go:130] >   kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	I0210 12:23:06.397966    5644 command_runner.go:130] > Allocated resources:
	I0210 12:23:06.397966    5644 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0210 12:23:06.397966    5644 command_runner.go:130] >   Resource           Requests     Limits
	I0210 12:23:06.398026    5644 command_runner.go:130] >   --------           --------     ------
	I0210 12:23:06.398090    5644 command_runner.go:130] >   cpu                850m (42%)   100m (5%)
	I0210 12:23:06.398090    5644 command_runner.go:130] >   memory             220Mi (10%)  220Mi (10%)
	I0210 12:23:06.398090    5644 command_runner.go:130] >   ephemeral-storage  0 (0%)       0 (0%)
	I0210 12:23:06.398150    5644 command_runner.go:130] >   hugepages-2Mi      0 (0%)       0 (0%)
	I0210 12:23:06.398150    5644 command_runner.go:130] > Events:
	I0210 12:23:06.398150    5644 command_runner.go:130] >   Type     Reason                   Age                From             Message
	I0210 12:23:06.398215    5644 command_runner.go:130] >   ----     ------                   ----               ----             -------
	I0210 12:23:06.398277    5644 command_runner.go:130] >   Normal   Starting                 23m                kube-proxy       
	I0210 12:23:06.398277    5644 command_runner.go:130] >   Normal   Starting                 65s                kube-proxy       
	I0210 12:23:06.398341    5644 command_runner.go:130] >   Normal   NodeHasSufficientMemory  24m (x8 over 24m)  kubelet          Node multinode-032400 status is now: NodeHasSufficientMemory
	I0210 12:23:06.398341    5644 command_runner.go:130] >   Normal   NodeHasNoDiskPressure    24m (x8 over 24m)  kubelet          Node multinode-032400 status is now: NodeHasNoDiskPressure
	I0210 12:23:06.398402    5644 command_runner.go:130] >   Normal   NodeHasSufficientPID     24m (x7 over 24m)  kubelet          Node multinode-032400 status is now: NodeHasSufficientPID
	I0210 12:23:06.398466    5644 command_runner.go:130] >   Normal   NodeHasSufficientPID     23m (x2 over 23m)  kubelet          Node multinode-032400 status is now: NodeHasSufficientPID
	I0210 12:23:06.398466    5644 command_runner.go:130] >   Normal   NodeHasSufficientMemory  23m (x2 over 23m)  kubelet          Node multinode-032400 status is now: NodeHasSufficientMemory
	I0210 12:23:06.398526    5644 command_runner.go:130] >   Normal   NodeHasNoDiskPressure    23m (x2 over 23m)  kubelet          Node multinode-032400 status is now: NodeHasNoDiskPressure
	I0210 12:23:06.398526    5644 command_runner.go:130] >   Normal   NodeAllocatableEnforced  23m                kubelet          Updated Node Allocatable limit across pods
	I0210 12:23:06.398590    5644 command_runner.go:130] >   Normal   Starting                 23m                kubelet          Starting kubelet.
	I0210 12:23:06.398590    5644 command_runner.go:130] >   Normal   RegisteredNode           23m                node-controller  Node multinode-032400 event: Registered Node multinode-032400 in Controller
	I0210 12:23:06.398651    5644 command_runner.go:130] >   Normal   NodeReady                23m                kubelet          Node multinode-032400 status is now: NodeReady
	I0210 12:23:06.398714    5644 command_runner.go:130] >   Normal   Starting                 74s                kubelet          Starting kubelet.
	I0210 12:23:06.398714    5644 command_runner.go:130] >   Normal   NodeAllocatableEnforced  74s                kubelet          Updated Node Allocatable limit across pods
	I0210 12:23:06.398774    5644 command_runner.go:130] >   Normal   NodeHasSufficientMemory  73s (x8 over 74s)  kubelet          Node multinode-032400 status is now: NodeHasSufficientMemory
	I0210 12:23:06.398837    5644 command_runner.go:130] >   Normal   NodeHasNoDiskPressure    73s (x8 over 74s)  kubelet          Node multinode-032400 status is now: NodeHasNoDiskPressure
	I0210 12:23:06.398837    5644 command_runner.go:130] >   Normal   NodeHasSufficientPID     73s (x7 over 74s)  kubelet          Node multinode-032400 status is now: NodeHasSufficientPID
	I0210 12:23:06.398897    5644 command_runner.go:130] >   Warning  Rebooted                 68s                kubelet          Node multinode-032400 has been rebooted, boot id: f18b3f97-a44b-4ae9-ad96-af2bf29d6df2
	I0210 12:23:06.398960    5644 command_runner.go:130] >   Normal   RegisteredNode           65s                node-controller  Node multinode-032400 event: Registered Node multinode-032400 in Controller
	I0210 12:23:06.398960    5644 command_runner.go:130] > Name:               multinode-032400-m02
	I0210 12:23:06.399020    5644 command_runner.go:130] > Roles:              <none>
	I0210 12:23:06.399020    5644 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0210 12:23:06.399020    5644 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0210 12:23:06.399020    5644 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0210 12:23:06.399084    5644 command_runner.go:130] >                     kubernetes.io/hostname=multinode-032400-m02
	I0210 12:23:06.399147    5644 command_runner.go:130] >                     kubernetes.io/os=linux
	I0210 12:23:06.399147    5644 command_runner.go:130] >                     minikube.k8s.io/commit=a597502568cd649748018b4cfeb698a4b8b36160
	I0210 12:23:06.399147    5644 command_runner.go:130] >                     minikube.k8s.io/name=multinode-032400
	I0210 12:23:06.399212    5644 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0210 12:23:06.399273    5644 command_runner.go:130] >                     minikube.k8s.io/updated_at=2025_02_10T12_02_24_0700
	I0210 12:23:06.399273    5644 command_runner.go:130] >                     minikube.k8s.io/version=v1.35.0
	I0210 12:23:06.399331    5644 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0210 12:23:06.399392    5644 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0210 12:23:06.399450    5644 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0210 12:23:06.399450    5644 command_runner.go:130] > CreationTimestamp:  Mon, 10 Feb 2025 12:02:24 +0000
	I0210 12:23:06.399450    5644 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0210 12:23:06.399511    5644 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0210 12:23:06.399511    5644 command_runner.go:130] > Unschedulable:      false
	I0210 12:23:06.399511    5644 command_runner.go:130] > Lease:
	I0210 12:23:06.399576    5644 command_runner.go:130] >   HolderIdentity:  multinode-032400-m02
	I0210 12:23:06.399576    5644 command_runner.go:130] >   AcquireTime:     <unset>
	I0210 12:23:06.399576    5644 command_runner.go:130] >   RenewTime:       Mon, 10 Feb 2025 12:18:56 +0000
	I0210 12:23:06.399638    5644 command_runner.go:130] > Conditions:
	I0210 12:23:06.399638    5644 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0210 12:23:06.399703    5644 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0210 12:23:06.399763    5644 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 10 Feb 2025 12:18:23 +0000   Mon, 10 Feb 2025 12:22:51 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0210 12:23:06.399763    5644 command_runner.go:130] >   DiskPressure     Unknown   Mon, 10 Feb 2025 12:18:23 +0000   Mon, 10 Feb 2025 12:22:51 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0210 12:23:06.399826    5644 command_runner.go:130] >   PIDPressure      Unknown   Mon, 10 Feb 2025 12:18:23 +0000   Mon, 10 Feb 2025 12:22:51 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0210 12:23:06.399886    5644 command_runner.go:130] >   Ready            Unknown   Mon, 10 Feb 2025 12:18:23 +0000   Mon, 10 Feb 2025 12:22:51 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0210 12:23:06.399886    5644 command_runner.go:130] > Addresses:
	I0210 12:23:06.399942    5644 command_runner.go:130] >   InternalIP:  172.29.143.51
	I0210 12:23:06.399942    5644 command_runner.go:130] >   Hostname:    multinode-032400-m02
	I0210 12:23:06.399942    5644 command_runner.go:130] > Capacity:
	I0210 12:23:06.399942    5644 command_runner.go:130] >   cpu:                2
	I0210 12:23:06.400002    5644 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0210 12:23:06.400002    5644 command_runner.go:130] >   hugepages-2Mi:      0
	I0210 12:23:06.400002    5644 command_runner.go:130] >   memory:             2164264Ki
	I0210 12:23:06.400067    5644 command_runner.go:130] >   pods:               110
	I0210 12:23:06.400067    5644 command_runner.go:130] > Allocatable:
	I0210 12:23:06.400067    5644 command_runner.go:130] >   cpu:                2
	I0210 12:23:06.400128    5644 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0210 12:23:06.400128    5644 command_runner.go:130] >   hugepages-2Mi:      0
	I0210 12:23:06.400185    5644 command_runner.go:130] >   memory:             2164264Ki
	I0210 12:23:06.400185    5644 command_runner.go:130] >   pods:               110
	I0210 12:23:06.400246    5644 command_runner.go:130] > System Info:
	I0210 12:23:06.400246    5644 command_runner.go:130] >   Machine ID:                 21b7ab6b12a24fe4a174e762e13ffd68
	I0210 12:23:06.400246    5644 command_runner.go:130] >   System UUID:                9f935ab2-d180-914c-86d6-dc00ab51b7e9
	I0210 12:23:06.400309    5644 command_runner.go:130] >   Boot ID:                    a25897cf-a5ca-424f-a707-4b03f1b1442d
	I0210 12:23:06.400309    5644 command_runner.go:130] >   Kernel Version:             5.10.207
	I0210 12:23:06.400309    5644 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0210 12:23:06.400375    5644 command_runner.go:130] >   Operating System:           linux
	I0210 12:23:06.400438    5644 command_runner.go:130] >   Architecture:               amd64
	I0210 12:23:06.400438    5644 command_runner.go:130] >   Container Runtime Version:  docker://27.4.0
	I0210 12:23:06.400438    5644 command_runner.go:130] >   Kubelet Version:            v1.32.1
	I0210 12:23:06.400498    5644 command_runner.go:130] >   Kube-Proxy Version:         v1.32.1
	I0210 12:23:06.400498    5644 command_runner.go:130] > PodCIDR:                      10.244.1.0/24
	I0210 12:23:06.400498    5644 command_runner.go:130] > PodCIDRs:                     10.244.1.0/24
	I0210 12:23:06.400563    5644 command_runner.go:130] > Non-terminated Pods:          (3 in total)
	I0210 12:23:06.400563    5644 command_runner.go:130] >   Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0210 12:23:06.400622    5644 command_runner.go:130] >   ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	I0210 12:23:06.400685    5644 command_runner.go:130] >   default                     busybox-58667487b6-4g8jw    0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	I0210 12:23:06.400685    5644 command_runner.go:130] >   kube-system                 kindnet-tv6gk               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      20m
	I0210 12:23:06.400746    5644 command_runner.go:130] >   kube-system                 kube-proxy-xltxj            0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0210 12:23:06.400746    5644 command_runner.go:130] > Allocated resources:
	I0210 12:23:06.400809    5644 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0210 12:23:06.400809    5644 command_runner.go:130] >   Resource           Requests   Limits
	I0210 12:23:06.400870    5644 command_runner.go:130] >   --------           --------   ------
	I0210 12:23:06.400870    5644 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0210 12:23:06.400870    5644 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0210 12:23:06.400934    5644 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0210 12:23:06.400934    5644 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0210 12:23:06.400995    5644 command_runner.go:130] > Events:
	I0210 12:23:06.400995    5644 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0210 12:23:06.400995    5644 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0210 12:23:06.401059    5644 command_runner.go:130] >   Normal  Starting                 20m                kube-proxy       
	I0210 12:23:06.401059    5644 command_runner.go:130] >   Normal  NodeHasSufficientMemory  20m (x2 over 20m)  kubelet          Node multinode-032400-m02 status is now: NodeHasSufficientMemory
	I0210 12:23:06.401119    5644 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    20m (x2 over 20m)  kubelet          Node multinode-032400-m02 status is now: NodeHasNoDiskPressure
	I0210 12:23:06.401183    5644 command_runner.go:130] >   Normal  NodeHasSufficientPID     20m (x2 over 20m)  kubelet          Node multinode-032400-m02 status is now: NodeHasSufficientPID
	I0210 12:23:06.401243    5644 command_runner.go:130] >   Normal  NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	I0210 12:23:06.401243    5644 command_runner.go:130] >   Normal  RegisteredNode           20m                node-controller  Node multinode-032400-m02 event: Registered Node multinode-032400-m02 in Controller
	I0210 12:23:06.401306    5644 command_runner.go:130] >   Normal  NodeReady                20m                kubelet          Node multinode-032400-m02 status is now: NodeReady
	I0210 12:23:06.401306    5644 command_runner.go:130] >   Normal  RegisteredNode           65s                node-controller  Node multinode-032400-m02 event: Registered Node multinode-032400-m02 in Controller
	I0210 12:23:06.401366    5644 command_runner.go:130] >   Normal  NodeNotReady             15s                node-controller  Node multinode-032400-m02 status is now: NodeNotReady
	I0210 12:23:06.401366    5644 command_runner.go:130] > Name:               multinode-032400-m03
	I0210 12:23:06.401430    5644 command_runner.go:130] > Roles:              <none>
	I0210 12:23:06.401430    5644 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0210 12:23:06.401430    5644 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0210 12:23:06.401790    5644 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0210 12:23:06.401790    5644 command_runner.go:130] >                     kubernetes.io/hostname=multinode-032400-m03
	I0210 12:23:06.401834    5644 command_runner.go:130] >                     kubernetes.io/os=linux
	I0210 12:23:06.401834    5644 command_runner.go:130] >                     minikube.k8s.io/commit=a597502568cd649748018b4cfeb698a4b8b36160
	I0210 12:23:06.401901    5644 command_runner.go:130] >                     minikube.k8s.io/name=multinode-032400
	I0210 12:23:06.401901    5644 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0210 12:23:06.401901    5644 command_runner.go:130] >                     minikube.k8s.io/updated_at=2025_02_10T12_17_31_0700
	I0210 12:23:06.401962    5644 command_runner.go:130] >                     minikube.k8s.io/version=v1.35.0
	I0210 12:23:06.402026    5644 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0210 12:23:06.402026    5644 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0210 12:23:06.402026    5644 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0210 12:23:06.402096    5644 command_runner.go:130] > CreationTimestamp:  Mon, 10 Feb 2025 12:17:31 +0000
	I0210 12:23:06.402096    5644 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0210 12:23:06.402154    5644 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0210 12:23:06.402154    5644 command_runner.go:130] > Unschedulable:      false
	I0210 12:23:06.402154    5644 command_runner.go:130] > Lease:
	I0210 12:23:06.402154    5644 command_runner.go:130] >   HolderIdentity:  multinode-032400-m03
	I0210 12:23:06.402223    5644 command_runner.go:130] >   AcquireTime:     <unset>
	I0210 12:23:06.402278    5644 command_runner.go:130] >   RenewTime:       Mon, 10 Feb 2025 12:18:32 +0000
	I0210 12:23:06.402278    5644 command_runner.go:130] > Conditions:
	I0210 12:23:06.402338    5644 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0210 12:23:06.402338    5644 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0210 12:23:06.402392    5644 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 10 Feb 2025 12:17:46 +0000   Mon, 10 Feb 2025 12:19:27 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0210 12:23:06.402480    5644 command_runner.go:130] >   DiskPressure     Unknown   Mon, 10 Feb 2025 12:17:46 +0000   Mon, 10 Feb 2025 12:19:27 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0210 12:23:06.402513    5644 command_runner.go:130] >   PIDPressure      Unknown   Mon, 10 Feb 2025 12:17:46 +0000   Mon, 10 Feb 2025 12:19:27 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0210 12:23:06.402567    5644 command_runner.go:130] >   Ready            Unknown   Mon, 10 Feb 2025 12:17:46 +0000   Mon, 10 Feb 2025 12:19:27 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0210 12:23:06.402626    5644 command_runner.go:130] > Addresses:
	I0210 12:23:06.402626    5644 command_runner.go:130] >   InternalIP:  172.29.129.10
	I0210 12:23:06.402789    5644 command_runner.go:130] >   Hostname:    multinode-032400-m03
	I0210 12:23:06.402789    5644 command_runner.go:130] > Capacity:
	I0210 12:23:06.402789    5644 command_runner.go:130] >   cpu:                2
	I0210 12:23:06.402789    5644 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0210 12:23:06.402789    5644 command_runner.go:130] >   hugepages-2Mi:      0
	I0210 12:23:06.402789    5644 command_runner.go:130] >   memory:             2164264Ki
	I0210 12:23:06.402789    5644 command_runner.go:130] >   pods:               110
	I0210 12:23:06.402789    5644 command_runner.go:130] > Allocatable:
	I0210 12:23:06.402789    5644 command_runner.go:130] >   cpu:                2
	I0210 12:23:06.402789    5644 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0210 12:23:06.402789    5644 command_runner.go:130] >   hugepages-2Mi:      0
	I0210 12:23:06.402789    5644 command_runner.go:130] >   memory:             2164264Ki
	I0210 12:23:06.402789    5644 command_runner.go:130] >   pods:               110
	I0210 12:23:06.402789    5644 command_runner.go:130] > System Info:
	I0210 12:23:06.402789    5644 command_runner.go:130] >   Machine ID:                 96be2fd73058434db5f7c29ebdc5358c
	I0210 12:23:06.402789    5644 command_runner.go:130] >   System UUID:                d4861806-0b5d-3746-b974-5781fc0e801c
	I0210 12:23:06.402789    5644 command_runner.go:130] >   Boot ID:                    e5a245dc-eb44-45ba-a280-4e410deb107b
	I0210 12:23:06.402789    5644 command_runner.go:130] >   Kernel Version:             5.10.207
	I0210 12:23:06.402789    5644 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0210 12:23:06.402789    5644 command_runner.go:130] >   Operating System:           linux
	I0210 12:23:06.402789    5644 command_runner.go:130] >   Architecture:               amd64
	I0210 12:23:06.402789    5644 command_runner.go:130] >   Container Runtime Version:  docker://27.4.0
	I0210 12:23:06.402789    5644 command_runner.go:130] >   Kubelet Version:            v1.32.1
	I0210 12:23:06.402789    5644 command_runner.go:130] >   Kube-Proxy Version:         v1.32.1
	I0210 12:23:06.402789    5644 command_runner.go:130] > PodCIDR:                      10.244.4.0/24
	I0210 12:23:06.402789    5644 command_runner.go:130] > PodCIDRs:                     10.244.4.0/24
	I0210 12:23:06.402789    5644 command_runner.go:130] > Non-terminated Pods:          (2 in total)
	I0210 12:23:06.402789    5644 command_runner.go:130] >   Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0210 12:23:06.402789    5644 command_runner.go:130] >   ---------                   ----                ------------  ----------  ---------------  -------------  ---
	I0210 12:23:06.402789    5644 command_runner.go:130] >   kube-system                 kindnet-jcmlf       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	I0210 12:23:06.402789    5644 command_runner.go:130] >   kube-system                 kube-proxy-tbtqd    0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	I0210 12:23:06.402789    5644 command_runner.go:130] > Allocated resources:
	I0210 12:23:06.402789    5644 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0210 12:23:06.402789    5644 command_runner.go:130] >   Resource           Requests   Limits
	I0210 12:23:06.402789    5644 command_runner.go:130] >   --------           --------   ------
	I0210 12:23:06.402789    5644 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0210 12:23:06.402789    5644 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0210 12:23:06.402789    5644 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0210 12:23:06.402789    5644 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0210 12:23:06.403338    5644 command_runner.go:130] > Events:
	I0210 12:23:06.403338    5644 command_runner.go:130] >   Type    Reason                   Age                    From             Message
	I0210 12:23:06.403412    5644 command_runner.go:130] >   ----    ------                   ----                   ----             -------
	I0210 12:23:06.403412    5644 command_runner.go:130] >   Normal  Starting                 15m                    kube-proxy       
	I0210 12:23:06.403467    5644 command_runner.go:130] >   Normal  Starting                 5m31s                  kube-proxy       
	I0210 12:23:06.403467    5644 command_runner.go:130] >   Normal  NodeAllocatableEnforced  16m                    kubelet          Updated Node Allocatable limit across pods
	I0210 12:23:06.403467    5644 command_runner.go:130] >   Normal  NodeHasSufficientMemory  16m (x2 over 16m)      kubelet          Node multinode-032400-m03 status is now: NodeHasSufficientMemory
	I0210 12:23:06.403467    5644 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    16m (x2 over 16m)      kubelet          Node multinode-032400-m03 status is now: NodeHasNoDiskPressure
	I0210 12:23:06.403467    5644 command_runner.go:130] >   Normal  NodeHasSufficientPID     16m (x2 over 16m)      kubelet          Node multinode-032400-m03 status is now: NodeHasSufficientPID
	I0210 12:23:06.403467    5644 command_runner.go:130] >   Normal  NodeReady                15m                    kubelet          Node multinode-032400-m03 status is now: NodeReady
	I0210 12:23:06.403467    5644 command_runner.go:130] >   Normal  NodeHasSufficientMemory  5m35s (x2 over 5m36s)  kubelet          Node multinode-032400-m03 status is now: NodeHasSufficientMemory
	I0210 12:23:06.403467    5644 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    5m35s (x2 over 5m36s)  kubelet          Node multinode-032400-m03 status is now: NodeHasNoDiskPressure
	I0210 12:23:06.403467    5644 command_runner.go:130] >   Normal  NodeHasSufficientPID     5m35s (x2 over 5m36s)  kubelet          Node multinode-032400-m03 status is now: NodeHasSufficientPID
	I0210 12:23:06.403467    5644 command_runner.go:130] >   Normal  NodeAllocatableEnforced  5m35s                  kubelet          Updated Node Allocatable limit across pods
	I0210 12:23:06.403467    5644 command_runner.go:130] >   Normal  RegisteredNode           5m34s                  node-controller  Node multinode-032400-m03 event: Registered Node multinode-032400-m03 in Controller
	I0210 12:23:06.403467    5644 command_runner.go:130] >   Normal  NodeReady                5m20s                  kubelet          Node multinode-032400-m03 status is now: NodeReady
	I0210 12:23:06.403467    5644 command_runner.go:130] >   Normal  NodeNotReady             3m39s                  node-controller  Node multinode-032400-m03 status is now: NodeNotReady
	I0210 12:23:06.403467    5644 command_runner.go:130] >   Normal  RegisteredNode           65s                    node-controller  Node multinode-032400-m03 event: Registered Node multinode-032400-m03 in Controller
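Both worker nodes in the describe output above are tainted node.kubernetes.io/unreachable and report every condition as Unknown with reason NodeStatusUnknown: the node-controller marked them NotReady once their kubelets stopped heartbeating after the host reboot. A minimal client-go sketch of the same readiness check the harness is diagnosing here (kubeconfig path handling is an assumption, not part of the test code):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Load the default kubeconfig (~/.kube/config); minikube writes its
        // cluster credentials there. Path resolution is an assumption.
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            for _, c := range n.Status.Conditions {
                // Ready != True covers both False and the Unknown status seen above.
                if c.Type == corev1.NodeReady && c.Status != corev1.ConditionTrue {
                    fmt.Printf("%s is NotReady: %s (%s)\n", n.Name, c.Reason, c.Message)
                }
            }
        }
    }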
	I0210 12:23:06.412778    5644 logs.go:123] Gathering logs for coredns [c5b854dbb912] ...
	I0210 12:23:06.412778    5644 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5b854dbb912"
	I0210 12:23:06.444858    5644 command_runner.go:130] > .:53
	I0210 12:23:06.445192    5644 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = fef4eccd4948b7d76fb3b866fead119cfbf3f792b566bc1c23dd6fceadf676c5da93bdd173179090b2c74bc875a74fc76ef5e51dcb6a910d6a8b189470a4fe6b
	I0210 12:23:06.445192    5644 command_runner.go:130] > CoreDNS-1.11.3
	I0210 12:23:06.445192    5644 command_runner.go:130] > linux/amd64, go1.21.11, a6338e9
	I0210 12:23:06.445254    5644 command_runner.go:130] > [INFO] 127.0.0.1:57159 - 43532 "HINFO IN 6094843902663837130.722983224060727812. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.056926603s
	I0210 12:23:06.445254    5644 command_runner.go:130] > [INFO] 10.244.1.2:54851 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000385004s
	I0210 12:23:06.445254    5644 command_runner.go:130] > [INFO] 10.244.1.2:36917 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.071166415s
	I0210 12:23:06.445254    5644 command_runner.go:130] > [INFO] 10.244.1.2:35134 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.03235507s
	I0210 12:23:06.445317    5644 command_runner.go:130] > [INFO] 10.244.1.2:37507 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.161129695s
	I0210 12:23:06.445317    5644 command_runner.go:130] > [INFO] 10.244.0.3:55555 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000265804s
	I0210 12:23:06.445317    5644 command_runner.go:130] > [INFO] 10.244.0.3:44984 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000263303s
	I0210 12:23:06.445356    5644 command_runner.go:130] > [INFO] 10.244.0.3:33618 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.000192703s
	I0210 12:23:06.445356    5644 command_runner.go:130] > [INFO] 10.244.0.3:33701 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.000137201s
	I0210 12:23:06.445356    5644 command_runner.go:130] > [INFO] 10.244.1.2:48882 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000140601s
	I0210 12:23:06.445356    5644 command_runner.go:130] > [INFO] 10.244.1.2:59416 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.037067822s
	I0210 12:23:06.445411    5644 command_runner.go:130] > [INFO] 10.244.1.2:37164 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000261703s
	I0210 12:23:06.445411    5644 command_runner.go:130] > [INFO] 10.244.1.2:47541 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000172402s
	I0210 12:23:06.445554    5644 command_runner.go:130] > [INFO] 10.244.1.2:46192 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.033005976s
	I0210 12:23:06.445592    5644 command_runner.go:130] > [INFO] 10.244.1.2:33821 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000127301s
	I0210 12:23:06.445634    5644 command_runner.go:130] > [INFO] 10.244.1.2:35703 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000116001s
	I0210 12:23:06.445675    5644 command_runner.go:130] > [INFO] 10.244.1.2:37192 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000173702s
	I0210 12:23:06.445675    5644 command_runner.go:130] > [INFO] 10.244.0.3:49175 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000188802s
	I0210 12:23:06.445675    5644 command_runner.go:130] > [INFO] 10.244.0.3:55342 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000074701s
	I0210 12:23:06.445752    5644 command_runner.go:130] > [INFO] 10.244.0.3:52814 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000156002s
	I0210 12:23:06.445786    5644 command_runner.go:130] > [INFO] 10.244.0.3:36559 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000108801s
	I0210 12:23:06.445786    5644 command_runner.go:130] > [INFO] 10.244.0.3:35829 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0000551s
	I0210 12:23:06.445817    5644 command_runner.go:130] > [INFO] 10.244.0.3:38348 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000180702s
	I0210 12:23:06.445817    5644 command_runner.go:130] > [INFO] 10.244.0.3:39722 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000109501s
	I0210 12:23:06.445817    5644 command_runner.go:130] > [INFO] 10.244.0.3:40924 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000201202s
	I0210 12:23:06.445817    5644 command_runner.go:130] > [INFO] 10.244.1.2:34735 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000125801s
	I0210 12:23:06.445817    5644 command_runner.go:130] > [INFO] 10.244.1.2:36088 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000233303s
	I0210 12:23:06.445817    5644 command_runner.go:130] > [INFO] 10.244.1.2:55464 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000190403s
	I0210 12:23:06.445817    5644 command_runner.go:130] > [INFO] 10.244.1.2:57911 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000176202s
	I0210 12:23:06.445817    5644 command_runner.go:130] > [INFO] 10.244.0.3:34977 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000276903s
	I0210 12:23:06.445817    5644 command_runner.go:130] > [INFO] 10.244.0.3:60203 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000181802s
	I0210 12:23:06.445817    5644 command_runner.go:130] > [INFO] 10.244.0.3:40189 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000167902s
	I0210 12:23:06.445817    5644 command_runner.go:130] > [INFO] 10.244.0.3:59008 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000133001s
	I0210 12:23:06.445817    5644 command_runner.go:130] > [INFO] 10.244.1.2:56936 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000191602s
	I0210 12:23:06.445817    5644 command_runner.go:130] > [INFO] 10.244.1.2:36536 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000153201s
	I0210 12:23:06.445817    5644 command_runner.go:130] > [INFO] 10.244.1.2:38856 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000169502s
	I0210 12:23:06.445817    5644 command_runner.go:130] > [INFO] 10.244.1.2:53005 - 5 "PTR IN 1.128.29.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000186102s
	I0210 12:23:06.445817    5644 command_runner.go:130] > [INFO] 10.244.0.3:43643 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00092191s
	I0210 12:23:06.445817    5644 command_runner.go:130] > [INFO] 10.244.0.3:44109 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000341503s
	I0210 12:23:06.445817    5644 command_runner.go:130] > [INFO] 10.244.0.3:37196 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000095701s
	I0210 12:23:06.445817    5644 command_runner.go:130] > [INFO] 10.244.0.3:33917 - 5 "PTR IN 1.128.29.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000152102s
	I0210 12:23:06.445817    5644 command_runner.go:130] > [INFO] SIGTERM: Shutting down servers then terminating
	I0210 12:23:06.445817    5644 command_runner.go:130] > [INFO] plugin/health: Going into lameduck mode for 5s
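The CoreDNS lines above follow the log plugin's common format: client ip:port, a per-client query counter, then a quoted section with type, class, name, protocol, request size, the DNSSEC DO bit, and the EDNS0 buffer size, followed by rcode, response flags, response size in bytes, and duration. A rough Go sketch that extracts the useful fields from one such line (the regex is fitted to the entries above — an assumption, not CoreDNS's own grammar):

    package main

    import (
        "fmt"
        "regexp"
    )

    // Matches entries like:
    // [INFO] 10.244.1.2:54851 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.056926603s
    var queryLog = regexp.MustCompile(
        `\[INFO\] ([\d.]+):(\d+) - (\d+) "(\S+) (\S+) (\S+) (\S+) (\d+) (\S+) (\d+)" (\S+) (\S+) (\d+) (\S+)`)

    func main() {
        line := `[INFO] 10.244.1.2:54851 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000385004s`
        m := queryLog.FindStringSubmatch(line)
        if m == nil {
            fmt.Println("no match")
            return
        }
        // m[1]=client IP, m[4]=qtype, m[6]=name, m[11]=rcode, m[14]=duration
        fmt.Printf("client=%s qtype=%s name=%s rcode=%s took=%s\n", m[1], m[4], m[6], m[11], m[14])
    }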
	I0210 12:23:06.449513    5644 logs.go:123] Gathering logs for kube-proxy [148309413de8] ...
	I0210 12:23:06.449513    5644 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 148309413de8"
	I0210 12:23:06.484521    5644 command_runner.go:130] ! I0210 11:59:18.625067       1 server_linux.go:66] "Using iptables proxy"
	I0210 12:23:06.484571    5644 command_runner.go:130] ! E0210 11:59:18.658116       1 proxier.go:733] "Error cleaning up nftables rules" err=<
	I0210 12:23:06.484627    5644 command_runner.go:130] ! 	could not run nftables command: /dev/stdin:1:1-24: Error: Could not process rule: Operation not supported
	I0210 12:23:06.484627    5644 command_runner.go:130] ! 	add table ip kube-proxy
	I0210 12:23:06.484660    5644 command_runner.go:130] ! 	^^^^^^^^^^^^^^^^^^^^^^^^
	I0210 12:23:06.484660    5644 command_runner.go:130] !  >
	I0210 12:23:06.484660    5644 command_runner.go:130] ! E0210 11:59:18.694686       1 proxier.go:733] "Error cleaning up nftables rules" err=<
	I0210 12:23:06.484660    5644 command_runner.go:130] ! 	could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
	I0210 12:23:06.484711    5644 command_runner.go:130] ! 	add table ip6 kube-proxy
	I0210 12:23:06.484743    5644 command_runner.go:130] ! 	^^^^^^^^^^^^^^^^^^^^^^^^^
	I0210 12:23:06.484743    5644 command_runner.go:130] !  >
	I0210 12:23:06.484808    5644 command_runner.go:130] ! I0210 11:59:19.251769       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["172.29.136.201"]
	I0210 12:23:06.484808    5644 command_runner.go:130] ! E0210 11:59:19.252111       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0210 12:23:06.484847    5644 command_runner.go:130] ! I0210 11:59:19.312427       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0210 12:23:06.484921    5644 command_runner.go:130] ! I0210 11:59:19.312556       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0210 12:23:06.484921    5644 command_runner.go:130] ! I0210 11:59:19.312585       1 server_linux.go:170] "Using iptables Proxier"
	I0210 12:23:06.484962    5644 command_runner.go:130] ! I0210 11:59:19.317423       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0210 12:23:06.485002    5644 command_runner.go:130] ! I0210 11:59:19.318586       1 server.go:497] "Version info" version="v1.32.1"
	I0210 12:23:06.485002    5644 command_runner.go:130] ! I0210 11:59:19.318681       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0210 12:23:06.485042    5644 command_runner.go:130] ! I0210 11:59:19.321084       1 config.go:199] "Starting service config controller"
	I0210 12:23:06.485042    5644 command_runner.go:130] ! I0210 11:59:19.321134       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0210 12:23:06.485081    5644 command_runner.go:130] ! I0210 11:59:19.321326       1 config.go:105] "Starting endpoint slice config controller"
	I0210 12:23:06.485121    5644 command_runner.go:130] ! I0210 11:59:19.321339       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0210 12:23:06.485161    5644 command_runner.go:130] ! I0210 11:59:19.322061       1 config.go:329] "Starting node config controller"
	I0210 12:23:06.485161    5644 command_runner.go:130] ! I0210 11:59:19.322091       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0210 12:23:06.485161    5644 command_runner.go:130] ! I0210 11:59:19.421972       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0210 12:23:06.485161    5644 command_runner.go:130] ! I0210 11:59:19.422030       1 shared_informer.go:320] Caches are synced for service config
	I0210 12:23:06.485161    5644 command_runner.go:130] ! I0210 11:59:19.423298       1 shared_informer.go:320] Caches are synced for node config
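The two "Error cleaning up nftables rules" entries in the kube-proxy log are expected noise: at startup, kube-proxy in iptables mode attempts to delete any tables a previous nftables-mode proxier might have left behind, this guest kernel rejects the operation ("Operation not supported"), and startup proceeds with the iptables proxier ("Using iptables Proxier"). A standalone sketch of an equivalent capability probe (shelling out to the nft binary is an assumption for illustration; kube-proxy drives nftables through a library, not the CLI):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // nftablesUsable reports whether the kernel accepts a minimal nftables
    // transaction, the same kind of operation whose failure is logged above.
    // Requires the nft binary on PATH; purely illustrative.
    func nftablesUsable() error {
        cmd := exec.Command("nft", "-f", "-")
        cmd.Stdin = strings.NewReader("add table ip probe\ndelete table ip probe\n")
        if out, err := cmd.CombinedOutput(); err != nil {
            return fmt.Errorf("nftables unusable: %v: %s", err, out)
        }
        return nil
    }

    func main() {
        if err := nftablesUsable(); err != nil {
            fmt.Println(err, "- falling back to iptables")
            return
        }
        fmt.Println("nftables supported")
    }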
	I0210 12:23:06.486869    5644 logs.go:123] Gathering logs for kindnet [4439940fa5f4] ...
	I0210 12:23:06.486901    5644 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4439940fa5f4"
	I0210 12:23:06.526784    5644 command_runner.go:130] ! I0210 12:08:30.445716       1 main.go:301] handling current node
	I0210 12:23:06.526784    5644 command_runner.go:130] ! I0210 12:08:30.445736       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.526881    5644 command_runner.go:130] ! I0210 12:08:30.445743       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.526881    5644 command_runner.go:130] ! I0210 12:08:30.446276       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:06.526881    5644 command_runner.go:130] ! I0210 12:08:30.446402       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:06.526881    5644 command_runner.go:130] ! I0210 12:08:40.446484       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:06.526881    5644 command_runner.go:130] ! I0210 12:08:40.446649       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:06.526881    5644 command_runner.go:130] ! I0210 12:08:40.447051       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.526881    5644 command_runner.go:130] ! I0210 12:08:40.447089       1 main.go:301] handling current node
	I0210 12:23:06.526971    5644 command_runner.go:130] ! I0210 12:08:40.447173       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.526971    5644 command_runner.go:130] ! I0210 12:08:40.447202       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.526971    5644 command_runner.go:130] ! I0210 12:08:50.445921       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.527022    5644 command_runner.go:130] ! I0210 12:08:50.445988       1 main.go:301] handling current node
	I0210 12:23:06.527022    5644 command_runner.go:130] ! I0210 12:08:50.446008       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.527022    5644 command_runner.go:130] ! I0210 12:08:50.446015       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.527022    5644 command_runner.go:130] ! I0210 12:08:50.446206       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:06.527022    5644 command_runner.go:130] ! I0210 12:08:50.446217       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:06.527022    5644 command_runner.go:130] ! I0210 12:09:00.446480       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.527022    5644 command_runner.go:130] ! I0210 12:09:00.446617       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.527022    5644 command_runner.go:130] ! I0210 12:09:00.446931       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:06.527022    5644 command_runner.go:130] ! I0210 12:09:00.446947       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:06.527022    5644 command_runner.go:130] ! I0210 12:09:00.447078       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.527022    5644 command_runner.go:130] ! I0210 12:09:00.447087       1 main.go:301] handling current node
	I0210 12:23:06.527022    5644 command_runner.go:130] ! I0210 12:09:10.445597       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.527022    5644 command_runner.go:130] ! I0210 12:09:10.445645       1 main.go:301] handling current node
	I0210 12:23:06.527022    5644 command_runner.go:130] ! I0210 12:09:10.445665       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.527022    5644 command_runner.go:130] ! I0210 12:09:10.445671       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.527022    5644 command_runner.go:130] ! I0210 12:09:10.446612       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:06.527022    5644 command_runner.go:130] ! I0210 12:09:10.447083       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:06.527022    5644 command_runner.go:130] ! I0210 12:09:20.451891       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.527022    5644 command_runner.go:130] ! I0210 12:09:20.451928       1 main.go:301] handling current node
	I0210 12:23:06.527022    5644 command_runner.go:130] ! I0210 12:09:20.452043       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.527022    5644 command_runner.go:130] ! I0210 12:09:20.452054       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.527022    5644 command_runner.go:130] ! I0210 12:09:20.452219       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:06.527022    5644 command_runner.go:130] ! I0210 12:09:20.452226       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:06.527022    5644 command_runner.go:130] ! I0210 12:09:30.445685       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.527022    5644 command_runner.go:130] ! I0210 12:09:30.445780       1 main.go:301] handling current node
	I0210 12:23:06.527022    5644 command_runner.go:130] ! I0210 12:09:30.445924       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.527022    5644 command_runner.go:130] ! I0210 12:09:30.445945       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.527022    5644 command_runner.go:130] ! I0210 12:09:30.446110       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:06.527022    5644 command_runner.go:130] ! I0210 12:09:30.446136       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:06.527022    5644 command_runner.go:130] ! I0210 12:09:40.446044       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.527022    5644 command_runner.go:130] ! I0210 12:09:40.446146       1 main.go:301] handling current node
	I0210 12:23:06.527022    5644 command_runner.go:130] ! I0210 12:09:40.446259       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.527022    5644 command_runner.go:130] ! I0210 12:09:40.446288       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.527546    5644 command_runner.go:130] ! I0210 12:09:40.446677       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:06.527546    5644 command_runner.go:130] ! I0210 12:09:40.446692       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:06.527617    5644 command_runner.go:130] ! I0210 12:09:50.449867       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.527617    5644 command_runner.go:130] ! I0210 12:09:50.449979       1 main.go:301] handling current node
	I0210 12:23:06.527656    5644 command_runner.go:130] ! I0210 12:09:50.450078       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.527656    5644 command_runner.go:130] ! I0210 12:09:50.450121       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.527656    5644 command_runner.go:130] ! I0210 12:09:50.450322       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:06.527656    5644 command_runner.go:130] ! I0210 12:09:50.450372       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:06.527656    5644 command_runner.go:130] ! I0210 12:10:00.446642       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:06.527656    5644 command_runner.go:130] ! I0210 12:10:00.446769       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:06.527739    5644 command_runner.go:130] ! I0210 12:10:00.447234       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.527739    5644 command_runner.go:130] ! I0210 12:10:00.447254       1 main.go:301] handling current node
	I0210 12:23:06.527739    5644 command_runner.go:130] ! I0210 12:10:00.447269       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.527739    5644 command_runner.go:130] ! I0210 12:10:00.447275       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.527739    5644 command_runner.go:130] ! I0210 12:10:10.445515       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.527828    5644 command_runner.go:130] ! I0210 12:10:10.445682       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.527828    5644 command_runner.go:130] ! I0210 12:10:10.446184       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:06.527828    5644 command_runner.go:130] ! I0210 12:10:10.446223       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:06.527828    5644 command_runner.go:130] ! I0210 12:10:10.446709       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.527828    5644 command_runner.go:130] ! I0210 12:10:10.447034       1 main.go:301] handling current node
	I0210 12:23:06.527916    5644 command_runner.go:130] ! I0210 12:10:20.446409       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.527916    5644 command_runner.go:130] ! I0210 12:10:20.446529       1 main.go:301] handling current node
	I0210 12:23:06.527916    5644 command_runner.go:130] ! I0210 12:10:20.446553       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.527916    5644 command_runner.go:130] ! I0210 12:10:20.446563       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.527993    5644 command_runner.go:130] ! I0210 12:10:20.446763       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:06.527993    5644 command_runner.go:130] ! I0210 12:10:20.446790       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:06.527993    5644 command_runner.go:130] ! I0210 12:10:30.446373       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.528070    5644 command_runner.go:130] ! I0210 12:10:30.446482       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.528070    5644 command_runner.go:130] ! I0210 12:10:30.446672       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:06.528070    5644 command_runner.go:130] ! I0210 12:10:30.446700       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:06.528070    5644 command_runner.go:130] ! I0210 12:10:30.446792       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.528148    5644 command_runner.go:130] ! I0210 12:10:30.447014       1 main.go:301] handling current node
	I0210 12:23:06.528148    5644 command_runner.go:130] ! I0210 12:10:40.454509       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.528148    5644 command_runner.go:130] ! I0210 12:10:40.454636       1 main.go:301] handling current node
	I0210 12:23:06.528148    5644 command_runner.go:130] ! I0210 12:10:40.454674       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.528148    5644 command_runner.go:130] ! I0210 12:10:40.454863       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.528225    5644 command_runner.go:130] ! I0210 12:10:40.455160       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:06.528225    5644 command_runner.go:130] ! I0210 12:10:40.455261       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:06.528225    5644 command_runner.go:130] ! I0210 12:10:50.449160       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.528225    5644 command_runner.go:130] ! I0210 12:10:50.449355       1 main.go:301] handling current node
	I0210 12:23:06.528302    5644 command_runner.go:130] ! I0210 12:10:50.449395       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.528302    5644 command_runner.go:130] ! I0210 12:10:50.449538       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.528302    5644 command_runner.go:130] ! I0210 12:10:50.450354       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:06.528302    5644 command_runner.go:130] ! I0210 12:10:50.450448       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:06.528379    5644 command_runner.go:130] ! I0210 12:11:00.445904       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:06.528379    5644 command_runner.go:130] ! I0210 12:11:00.446062       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:06.528379    5644 command_runner.go:130] ! I0210 12:11:00.446602       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.528379    5644 command_runner.go:130] ! I0210 12:11:00.446700       1 main.go:301] handling current node
	I0210 12:23:06.528379    5644 command_runner.go:130] ! I0210 12:11:00.446821       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.528456    5644 command_runner.go:130] ! I0210 12:11:00.446837       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.528532    5644 command_runner.go:130] ! I0210 12:11:10.453595       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.528532    5644 command_runner.go:130] ! I0210 12:11:10.453634       1 main.go:301] handling current node
	I0210 12:23:06.528532    5644 command_runner.go:130] ! I0210 12:11:10.453652       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.528532    5644 command_runner.go:130] ! I0210 12:11:10.453660       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.528609    5644 command_runner.go:130] ! I0210 12:11:10.454135       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:06.528609    5644 command_runner.go:130] ! I0210 12:11:10.454241       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:06.528609    5644 command_runner.go:130] ! I0210 12:11:20.446533       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:06.528686    5644 command_runner.go:130] ! I0210 12:11:20.446903       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:06.528686    5644 command_runner.go:130] ! I0210 12:11:20.447462       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.528686    5644 command_runner.go:130] ! I0210 12:11:20.447548       1 main.go:301] handling current node
	I0210 12:23:06.528686    5644 command_runner.go:130] ! I0210 12:11:20.447565       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.528686    5644 command_runner.go:130] ! I0210 12:11:20.447572       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.528762    5644 command_runner.go:130] ! I0210 12:11:30.445620       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.528762    5644 command_runner.go:130] ! I0210 12:11:30.445748       1 main.go:301] handling current node
	I0210 12:23:06.528762    5644 command_runner.go:130] ! I0210 12:11:30.445870       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.528762    5644 command_runner.go:130] ! I0210 12:11:30.445907       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.528839    5644 command_runner.go:130] ! I0210 12:11:30.446320       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:06.528839    5644 command_runner.go:130] ! I0210 12:11:30.446414       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:06.528839    5644 command_runner.go:130] ! I0210 12:11:40.446346       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.528839    5644 command_runner.go:130] ! I0210 12:11:40.446417       1 main.go:301] handling current node
	I0210 12:23:06.528916    5644 command_runner.go:130] ! I0210 12:11:40.446436       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.528916    5644 command_runner.go:130] ! I0210 12:11:40.446443       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.528916    5644 command_runner.go:130] ! I0210 12:11:40.446780       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:06.528916    5644 command_runner.go:130] ! I0210 12:11:40.446846       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:06.528992    5644 command_runner.go:130] ! I0210 12:11:50.447155       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:06.528992    5644 command_runner.go:130] ! I0210 12:11:50.447207       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:06.528992    5644 command_runner.go:130] ! I0210 12:11:50.447630       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.528992    5644 command_runner.go:130] ! I0210 12:11:50.447699       1 main.go:301] handling current node
	I0210 12:23:06.528992    5644 command_runner.go:130] ! I0210 12:11:50.447842       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.529069    5644 command_runner.go:130] ! I0210 12:11:50.447929       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.529069    5644 command_runner.go:130] ! I0210 12:12:00.449885       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.529069    5644 command_runner.go:130] ! I0210 12:12:00.450002       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.529069    5644 command_runner.go:130] ! I0210 12:12:00.450294       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:06.529145    5644 command_runner.go:130] ! I0210 12:12:00.450490       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:06.529145    5644 command_runner.go:130] ! I0210 12:12:00.450618       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.529145    5644 command_runner.go:130] ! I0210 12:12:00.450627       1 main.go:301] handling current node
	I0210 12:23:06.529145    5644 command_runner.go:130] ! I0210 12:12:10.449160       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.529223    5644 command_runner.go:130] ! I0210 12:12:10.449228       1 main.go:301] handling current node
	I0210 12:23:06.529223    5644 command_runner.go:130] ! I0210 12:12:10.449260       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.529223    5644 command_runner.go:130] ! I0210 12:12:10.449282       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.529299    5644 command_runner.go:130] ! I0210 12:12:10.449463       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:06.529299    5644 command_runner.go:130] ! I0210 12:12:10.449474       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:06.529299    5644 command_runner.go:130] ! I0210 12:12:20.447518       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.529299    5644 command_runner.go:130] ! I0210 12:12:20.447655       1 main.go:301] handling current node
	I0210 12:23:06.529299    5644 command_runner.go:130] ! I0210 12:12:20.447676       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.529377    5644 command_runner.go:130] ! I0210 12:12:20.447684       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.529377    5644 command_runner.go:130] ! I0210 12:12:20.448046       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:06.529377    5644 command_runner.go:130] ! I0210 12:12:20.448157       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:06.529377    5644 command_runner.go:130] ! I0210 12:12:30.446585       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.529453    5644 command_runner.go:130] ! I0210 12:12:30.446758       1 main.go:301] handling current node
	I0210 12:23:06.529453    5644 command_runner.go:130] ! I0210 12:12:30.446779       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.529453    5644 command_runner.go:130] ! I0210 12:12:30.446786       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.529530    5644 command_runner.go:130] ! I0210 12:12:30.447218       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:06.529530    5644 command_runner.go:130] ! I0210 12:12:30.447298       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:06.529530    5644 command_runner.go:130] ! I0210 12:12:40.445769       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.529530    5644 command_runner.go:130] ! I0210 12:12:40.445848       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.529607    5644 command_runner.go:130] ! I0210 12:12:40.446043       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:06.529607    5644 command_runner.go:130] ! I0210 12:12:40.446125       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:06.529607    5644 command_runner.go:130] ! I0210 12:12:40.446266       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.529607    5644 command_runner.go:130] ! I0210 12:12:40.446279       1 main.go:301] handling current node
	I0210 12:23:06.529607    5644 command_runner.go:130] ! I0210 12:12:50.446416       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.529684    5644 command_runner.go:130] ! I0210 12:12:50.446515       1 main.go:301] handling current node
	I0210 12:23:06.529684    5644 command_runner.go:130] ! I0210 12:12:50.446540       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.529684    5644 command_runner.go:130] ! I0210 12:12:50.446549       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.529684    5644 command_runner.go:130] ! I0210 12:12:50.447110       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:06.529761    5644 command_runner.go:130] ! I0210 12:12:50.447222       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:06.529761    5644 command_runner.go:130] ! I0210 12:13:00.445595       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.529761    5644 command_runner.go:130] ! I0210 12:13:00.445741       1 main.go:301] handling current node
	I0210 12:23:06.529761    5644 command_runner.go:130] ! I0210 12:13:00.445762       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.529761    5644 command_runner.go:130] ! I0210 12:13:00.445770       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.529846    5644 command_runner.go:130] ! I0210 12:13:00.446069       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:06.529846    5644 command_runner.go:130] ! I0210 12:13:00.446101       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:06.529846    5644 command_runner.go:130] ! I0210 12:13:10.454457       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.529846    5644 command_runner.go:130] ! I0210 12:13:10.454577       1 main.go:301] handling current node
	I0210 12:23:06.529925    5644 command_runner.go:130] ! I0210 12:13:10.454598       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.529925    5644 command_runner.go:130] ! I0210 12:13:10.454605       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.529925    5644 command_runner.go:130] ! I0210 12:13:10.455246       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:06.529925    5644 command_runner.go:130] ! I0210 12:13:10.455360       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:06.529925    5644 command_runner.go:130] ! I0210 12:13:20.446944       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.530010    5644 command_runner.go:130] ! I0210 12:13:20.447287       1 main.go:301] handling current node
	I0210 12:23:06.530010    5644 command_runner.go:130] ! I0210 12:13:20.447395       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.530010    5644 command_runner.go:130] ! I0210 12:13:20.447410       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.530010    5644 command_runner.go:130] ! I0210 12:13:20.447940       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:06.530088    5644 command_runner.go:130] ! I0210 12:13:20.448031       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:06.530088    5644 command_runner.go:130] ! I0210 12:13:30.446279       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.530088    5644 command_runner.go:130] ! I0210 12:13:30.446594       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.530088    5644 command_runner.go:130] ! I0210 12:13:30.446926       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:06.530088    5644 command_runner.go:130] ! I0210 12:13:30.447035       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:06.530088    5644 command_runner.go:130] ! I0210 12:13:30.447189       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.530166    5644 command_runner.go:130] ! I0210 12:13:30.447310       1 main.go:301] handling current node
	I0210 12:23:06.530166    5644 command_runner.go:130] ! I0210 12:13:40.446967       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.530166    5644 command_runner.go:130] ! I0210 12:13:40.447352       1 main.go:301] handling current node
	I0210 12:23:06.530166    5644 command_runner.go:130] ! I0210 12:13:40.447404       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.530250    5644 command_runner.go:130] ! I0210 12:13:40.447743       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.530250    5644 command_runner.go:130] ! I0210 12:13:40.448142       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:06.530338    5644 command_runner.go:130] ! I0210 12:13:40.448255       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:06.530338    5644 command_runner.go:130] ! I0210 12:13:50.446777       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.530338    5644 command_runner.go:130] ! I0210 12:13:50.446915       1 main.go:301] handling current node
	I0210 12:23:06.530338    5644 command_runner.go:130] ! I0210 12:13:50.446936       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.530338    5644 command_runner.go:130] ! I0210 12:13:50.447424       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.530338    5644 command_runner.go:130] ! I0210 12:13:50.447787       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:06.530476    5644 command_runner.go:130] ! I0210 12:13:50.447846       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:06.530476    5644 command_runner.go:130] ! I0210 12:14:00.446345       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.530476    5644 command_runner.go:130] ! I0210 12:14:00.446447       1 main.go:301] handling current node
	I0210 12:23:06.530541    5644 command_runner.go:130] ! I0210 12:14:00.446468       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.530541    5644 command_runner.go:130] ! I0210 12:14:00.446475       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.530541    5644 command_runner.go:130] ! I0210 12:14:00.447158       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:06.530612    5644 command_runner.go:130] ! I0210 12:14:00.447251       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:06.530612    5644 command_runner.go:130] ! I0210 12:14:10.454046       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.530612    5644 command_runner.go:130] ! I0210 12:14:10.454150       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.530612    5644 command_runner.go:130] ! I0210 12:14:10.454908       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:06.530612    5644 command_runner.go:130] ! I0210 12:14:10.454981       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:06.530612    5644 command_runner.go:130] ! I0210 12:14:10.455630       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.530612    5644 command_runner.go:130] ! I0210 12:14:10.455665       1 main.go:301] handling current node
	I0210 12:23:06.530702    5644 command_runner.go:130] ! I0210 12:14:20.447582       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.530702    5644 command_runner.go:130] ! I0210 12:14:20.447632       1 main.go:301] handling current node
	I0210 12:23:06.530702    5644 command_runner.go:130] ! I0210 12:14:20.447652       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.530702    5644 command_runner.go:130] ! I0210 12:14:20.447660       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.530702    5644 command_runner.go:130] ! I0210 12:14:20.447892       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:06.530702    5644 command_runner.go:130] ! I0210 12:14:20.447961       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:06.530797    5644 command_runner.go:130] ! I0210 12:14:30.445562       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.530797    5644 command_runner.go:130] ! I0210 12:14:30.445636       1 main.go:301] handling current node
	I0210 12:23:06.530820    5644 command_runner.go:130] ! I0210 12:14:30.445655       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.530820    5644 command_runner.go:130] ! I0210 12:14:30.445665       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.530879    5644 command_runner.go:130] ! I0210 12:14:30.446340       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:06.530879    5644 command_runner.go:130] ! I0210 12:14:30.446436       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:06.530879    5644 command_runner.go:130] ! I0210 12:14:40.445894       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.530879    5644 command_runner.go:130] ! I0210 12:14:40.445963       1 main.go:301] handling current node
	I0210 12:23:06.530879    5644 command_runner.go:130] ! I0210 12:14:40.446050       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.530951    5644 command_runner.go:130] ! I0210 12:14:40.446062       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.530951    5644 command_runner.go:130] ! I0210 12:14:40.446274       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:06.530951    5644 command_runner.go:130] ! I0210 12:14:40.446298       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:06.530951    5644 command_runner.go:130] ! I0210 12:14:50.446519       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.531045    5644 command_runner.go:130] ! I0210 12:14:50.446627       1 main.go:301] handling current node
	I0210 12:23:06.531045    5644 command_runner.go:130] ! I0210 12:14:50.446648       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.531045    5644 command_runner.go:130] ! I0210 12:14:50.446655       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.531045    5644 command_runner.go:130] ! I0210 12:14:50.447165       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:06.531135    5644 command_runner.go:130] ! I0210 12:14:50.447285       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:06.531135    5644 command_runner.go:130] ! I0210 12:15:00.452587       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.531163    5644 command_runner.go:130] ! I0210 12:15:00.452709       1 main.go:301] handling current node
	I0210 12:23:06.531163    5644 command_runner.go:130] ! I0210 12:15:00.452728       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.531163    5644 command_runner.go:130] ! I0210 12:15:00.452735       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.531163    5644 command_runner.go:130] ! I0210 12:15:00.452961       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:06.531163    5644 command_runner.go:130] ! I0210 12:15:00.452989       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:06.531163    5644 command_runner.go:130] ! I0210 12:15:10.453753       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.531163    5644 command_runner.go:130] ! I0210 12:15:10.453980       1 main.go:301] handling current node
	I0210 12:23:06.531163    5644 command_runner.go:130] ! I0210 12:15:10.455477       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.531163    5644 command_runner.go:130] ! I0210 12:15:10.455590       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.531163    5644 command_runner.go:130] ! I0210 12:15:10.456459       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:06.531163    5644 command_runner.go:130] ! I0210 12:15:10.456484       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:06.531163    5644 command_runner.go:130] ! I0210 12:15:20.445894       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.531163    5644 command_runner.go:130] ! I0210 12:15:20.446019       1 main.go:301] handling current node
	I0210 12:23:06.531163    5644 command_runner.go:130] ! I0210 12:15:20.446055       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.531163    5644 command_runner.go:130] ! I0210 12:15:20.446076       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.531163    5644 command_runner.go:130] ! I0210 12:15:20.446274       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:06.531163    5644 command_runner.go:130] ! I0210 12:15:20.446363       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:06.531163    5644 command_runner.go:130] ! I0210 12:15:30.446394       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.531163    5644 command_runner.go:130] ! I0210 12:15:30.446444       1 main.go:301] handling current node
	I0210 12:23:06.531163    5644 command_runner.go:130] ! I0210 12:15:30.446463       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.531163    5644 command_runner.go:130] ! I0210 12:15:30.446470       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.531163    5644 command_runner.go:130] ! I0210 12:15:30.446861       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:06.531163    5644 command_runner.go:130] ! I0210 12:15:30.446930       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:06.531163    5644 command_runner.go:130] ! I0210 12:15:40.453869       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.531163    5644 command_runner.go:130] ! I0210 12:15:40.454189       1 main.go:301] handling current node
	I0210 12:23:06.531163    5644 command_runner.go:130] ! I0210 12:15:40.454382       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.531163    5644 command_runner.go:130] ! I0210 12:15:40.454457       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.531163    5644 command_runner.go:130] ! I0210 12:15:40.454869       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:06.531163    5644 command_runner.go:130] ! I0210 12:15:40.454895       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:06.531163    5644 command_runner.go:130] ! I0210 12:15:50.446531       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.531163    5644 command_runner.go:130] ! I0210 12:15:50.446662       1 main.go:301] handling current node
	I0210 12:23:06.531163    5644 command_runner.go:130] ! I0210 12:15:50.446685       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.531163    5644 command_runner.go:130] ! I0210 12:15:50.446693       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.531163    5644 command_runner.go:130] ! I0210 12:15:50.447023       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:06.531163    5644 command_runner.go:130] ! I0210 12:15:50.447095       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:06.531163    5644 command_runner.go:130] ! I0210 12:16:00.446838       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.531163    5644 command_runner.go:130] ! I0210 12:16:00.447006       1 main.go:301] handling current node
	I0210 12:23:06.531163    5644 command_runner.go:130] ! I0210 12:16:00.447108       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.531163    5644 command_runner.go:130] ! I0210 12:16:00.447566       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.531163    5644 command_runner.go:130] ! I0210 12:16:00.448114       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:06.531163    5644 command_runner.go:130] ! I0210 12:16:00.448216       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:06.531163    5644 command_runner.go:130] ! I0210 12:16:10.445857       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.531163    5644 command_runner.go:130] ! I0210 12:16:10.445967       1 main.go:301] handling current node
	I0210 12:23:06.531689    5644 command_runner.go:130] ! I0210 12:16:10.445988       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.531689    5644 command_runner.go:130] ! I0210 12:16:10.445996       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.531689    5644 command_runner.go:130] ! I0210 12:16:10.446184       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:06.531689    5644 command_runner.go:130] ! I0210 12:16:10.446207       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:06.531689    5644 command_runner.go:130] ! I0210 12:16:20.453730       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.531689    5644 command_runner.go:130] ! I0210 12:16:20.453928       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.531772    5644 command_runner.go:130] ! I0210 12:16:20.454430       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:06.531772    5644 command_runner.go:130] ! I0210 12:16:20.454520       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:06.531772    5644 command_runner.go:130] ! I0210 12:16:20.454929       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.531772    5644 command_runner.go:130] ! I0210 12:16:20.454975       1 main.go:301] handling current node
	I0210 12:23:06.531772    5644 command_runner.go:130] ! I0210 12:16:30.445927       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.531772    5644 command_runner.go:130] ! I0210 12:16:30.446036       1 main.go:301] handling current node
	I0210 12:23:06.531851    5644 command_runner.go:130] ! I0210 12:16:30.446057       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.531851    5644 command_runner.go:130] ! I0210 12:16:30.446065       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.531851    5644 command_runner.go:130] ! I0210 12:16:30.446315       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:06.531851    5644 command_runner.go:130] ! I0210 12:16:30.446373       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:06.531851    5644 command_runner.go:130] ! I0210 12:16:40.446863       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:06.531851    5644 command_runner.go:130] ! I0210 12:16:40.446966       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:06.531928    5644 command_runner.go:130] ! I0210 12:16:40.447288       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.531961    5644 command_runner.go:130] ! I0210 12:16:40.447365       1 main.go:301] handling current node
	I0210 12:23:06.531961    5644 command_runner.go:130] ! I0210 12:16:40.447383       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.531991    5644 command_runner.go:130] ! I0210 12:16:40.447389       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.531991    5644 command_runner.go:130] ! I0210 12:16:50.447339       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.531991    5644 command_runner.go:130] ! I0210 12:16:50.447453       1 main.go:301] handling current node
	I0210 12:23:06.531991    5644 command_runner.go:130] ! I0210 12:16:50.447476       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.531991    5644 command_runner.go:130] ! I0210 12:16:50.447484       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.531991    5644 command_runner.go:130] ! I0210 12:16:50.448045       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:06.531991    5644 command_runner.go:130] ! I0210 12:16:50.448138       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:06.531991    5644 command_runner.go:130] ! I0210 12:17:00.447665       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.531991    5644 command_runner.go:130] ! I0210 12:17:00.447898       1 main.go:301] handling current node
	I0210 12:23:06.531991    5644 command_runner.go:130] ! I0210 12:17:00.447937       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.531991    5644 command_runner.go:130] ! I0210 12:17:00.448013       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.531991    5644 command_runner.go:130] ! I0210 12:17:00.448741       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:06.531991    5644 command_runner.go:130] ! I0210 12:17:00.448921       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:06.531991    5644 command_runner.go:130] ! I0210 12:17:10.453664       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.531991    5644 command_runner.go:130] ! I0210 12:17:10.453771       1 main.go:301] handling current node
	I0210 12:23:06.531991    5644 command_runner.go:130] ! I0210 12:17:10.453792       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.531991    5644 command_runner.go:130] ! I0210 12:17:10.453831       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.531991    5644 command_runner.go:130] ! I0210 12:17:10.454596       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:06.531991    5644 command_runner.go:130] ! I0210 12:17:10.454619       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:06.531991    5644 command_runner.go:130] ! I0210 12:17:20.453960       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.531991    5644 command_runner.go:130] ! I0210 12:17:20.454001       1 main.go:301] handling current node
	I0210 12:23:06.531991    5644 command_runner.go:130] ! I0210 12:17:20.454018       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.531991    5644 command_runner.go:130] ! I0210 12:17:20.454024       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.531991    5644 command_runner.go:130] ! I0210 12:17:20.454198       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:06.531991    5644 command_runner.go:130] ! I0210 12:17:20.454208       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:06.531991    5644 command_runner.go:130] ! I0210 12:17:30.445717       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.531991    5644 command_runner.go:130] ! I0210 12:17:30.445917       1 main.go:301] handling current node
	I0210 12:23:06.531991    5644 command_runner.go:130] ! I0210 12:17:30.445940       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.531991    5644 command_runner.go:130] ! I0210 12:17:30.445949       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.531991    5644 command_runner.go:130] ! I0210 12:17:40.452548       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.531991    5644 command_runner.go:130] ! I0210 12:17:40.452740       1 main.go:301] handling current node
	I0210 12:23:06.531991    5644 command_runner.go:130] ! I0210 12:17:40.452774       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.531991    5644 command_runner.go:130] ! I0210 12:17:40.452843       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.531991    5644 command_runner.go:130] ! I0210 12:17:40.453042       1 main.go:297] Handling node with IPs: map[172.29.129.10:{}]
	I0210 12:23:06.531991    5644 command_runner.go:130] ! I0210 12:17:40.453135       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.4.0/24] 
	I0210 12:23:06.531991    5644 command_runner.go:130] ! I0210 12:17:40.453247       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.4.0/24 Src: <nil> Gw: 172.29.129.10 Flags: [] Table: 0 Realm: 0} 
	I0210 12:23:06.531991    5644 command_runner.go:130] ! I0210 12:17:50.446275       1 main.go:297] Handling node with IPs: map[172.29.129.10:{}]
	I0210 12:23:06.531991    5644 command_runner.go:130] ! I0210 12:17:50.446319       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.4.0/24] 
	I0210 12:23:06.531991    5644 command_runner.go:130] ! I0210 12:17:50.447189       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.531991    5644 command_runner.go:130] ! I0210 12:17:50.447219       1 main.go:301] handling current node
	I0210 12:23:06.531991    5644 command_runner.go:130] ! I0210 12:17:50.447234       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.531991    5644 command_runner.go:130] ! I0210 12:17:50.447365       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.532517    5644 command_runner.go:130] ! I0210 12:18:00.449743       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.532517    5644 command_runner.go:130] ! I0210 12:18:00.449961       1 main.go:301] handling current node
	I0210 12:23:06.532517    5644 command_runner.go:130] ! I0210 12:18:00.449983       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.532517    5644 command_runner.go:130] ! I0210 12:18:00.449993       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.532517    5644 command_runner.go:130] ! I0210 12:18:00.450437       1 main.go:297] Handling node with IPs: map[172.29.129.10:{}]
	I0210 12:23:06.532598    5644 command_runner.go:130] ! I0210 12:18:00.450512       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.4.0/24] 
	I0210 12:23:06.532633    5644 command_runner.go:130] ! I0210 12:18:10.454513       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.532633    5644 command_runner.go:130] ! I0210 12:18:10.455074       1 main.go:301] handling current node
	I0210 12:23:06.532665    5644 command_runner.go:130] ! I0210 12:18:10.455189       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.532665    5644 command_runner.go:130] ! I0210 12:18:10.455203       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.532665    5644 command_runner.go:130] ! I0210 12:18:10.455514       1 main.go:297] Handling node with IPs: map[172.29.129.10:{}]
	I0210 12:23:06.532665    5644 command_runner.go:130] ! I0210 12:18:10.455628       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.4.0/24] 
	I0210 12:23:06.532665    5644 command_runner.go:130] ! I0210 12:18:20.446904       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.532665    5644 command_runner.go:130] ! I0210 12:18:20.446944       1 main.go:301] handling current node
	I0210 12:23:06.532665    5644 command_runner.go:130] ! I0210 12:18:20.446964       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.532665    5644 command_runner.go:130] ! I0210 12:18:20.446971       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.532665    5644 command_runner.go:130] ! I0210 12:18:20.447447       1 main.go:297] Handling node with IPs: map[172.29.129.10:{}]
	I0210 12:23:06.532665    5644 command_runner.go:130] ! I0210 12:18:20.447539       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.4.0/24] 
	I0210 12:23:06.532665    5644 command_runner.go:130] ! I0210 12:18:30.445669       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.532665    5644 command_runner.go:130] ! I0210 12:18:30.445724       1 main.go:301] handling current node
	I0210 12:23:06.532665    5644 command_runner.go:130] ! I0210 12:18:30.445744       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.532665    5644 command_runner.go:130] ! I0210 12:18:30.445752       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.532665    5644 command_runner.go:130] ! I0210 12:18:30.446236       1 main.go:297] Handling node with IPs: map[172.29.129.10:{}]
	I0210 12:23:06.532665    5644 command_runner.go:130] ! I0210 12:18:30.446332       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.4.0/24] 
	I0210 12:23:06.532665    5644 command_runner.go:130] ! I0210 12:18:40.449074       1 main.go:297] Handling node with IPs: map[172.29.129.10:{}]
	I0210 12:23:06.532665    5644 command_runner.go:130] ! I0210 12:18:40.449128       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.4.0/24] 
	I0210 12:23:06.532665    5644 command_runner.go:130] ! I0210 12:18:40.449535       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.532665    5644 command_runner.go:130] ! I0210 12:18:40.449551       1 main.go:301] handling current node
	I0210 12:23:06.532665    5644 command_runner.go:130] ! I0210 12:18:40.449565       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.532665    5644 command_runner.go:130] ! I0210 12:18:40.449570       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.532665    5644 command_runner.go:130] ! I0210 12:18:50.446047       1 main.go:297] Handling node with IPs: map[172.29.129.10:{}]
	I0210 12:23:06.532665    5644 command_runner.go:130] ! I0210 12:18:50.446175       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.4.0/24] 
	I0210 12:23:06.532665    5644 command_runner.go:130] ! I0210 12:18:50.446614       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.532665    5644 command_runner.go:130] ! I0210 12:18:50.446823       1 main.go:301] handling current node
	I0210 12:23:06.532665    5644 command_runner.go:130] ! I0210 12:18:50.446915       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.532665    5644 command_runner.go:130] ! I0210 12:18:50.447008       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.532665    5644 command_runner.go:130] ! I0210 12:19:00.448621       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.532665    5644 command_runner.go:130] ! I0210 12:19:00.448734       1 main.go:301] handling current node
	I0210 12:23:06.532665    5644 command_runner.go:130] ! I0210 12:19:00.448755       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.532665    5644 command_runner.go:130] ! I0210 12:19:00.448763       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.532665    5644 command_runner.go:130] ! I0210 12:19:00.449245       1 main.go:297] Handling node with IPs: map[172.29.129.10:{}]
	I0210 12:23:06.532665    5644 command_runner.go:130] ! I0210 12:19:00.449348       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.4.0/24] 
	I0210 12:23:06.533191    5644 command_runner.go:130] ! I0210 12:19:10.454264       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.533191    5644 command_runner.go:130] ! I0210 12:19:10.454382       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.533191    5644 command_runner.go:130] ! I0210 12:19:10.454633       1 main.go:297] Handling node with IPs: map[172.29.129.10:{}]
	I0210 12:23:06.533191    5644 command_runner.go:130] ! I0210 12:19:10.454659       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.4.0/24] 
	I0210 12:23:06.533191    5644 command_runner.go:130] ! I0210 12:19:10.454747       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.533191    5644 command_runner.go:130] ! I0210 12:19:10.454768       1 main.go:301] handling current node
	I0210 12:23:06.533268    5644 command_runner.go:130] ! I0210 12:19:20.454254       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.533268    5644 command_runner.go:130] ! I0210 12:19:20.454380       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.533268    5644 command_runner.go:130] ! I0210 12:19:20.454728       1 main.go:297] Handling node with IPs: map[172.29.129.10:{}]
	I0210 12:23:06.533301    5644 command_runner.go:130] ! I0210 12:19:20.454888       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.4.0/24] 
	I0210 12:23:06.533301    5644 command_runner.go:130] ! I0210 12:19:20.455123       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.533301    5644 command_runner.go:130] ! I0210 12:19:20.455136       1 main.go:301] handling current node
	I0210 12:23:06.533301    5644 command_runner.go:130] ! I0210 12:19:30.445655       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.533301    5644 command_runner.go:130] ! I0210 12:19:30.445708       1 main.go:301] handling current node
	I0210 12:23:06.533301    5644 command_runner.go:130] ! I0210 12:19:30.445727       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.533301    5644 command_runner.go:130] ! I0210 12:19:30.445734       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.533301    5644 command_runner.go:130] ! I0210 12:19:30.446565       1 main.go:297] Handling node with IPs: map[172.29.129.10:{}]
	I0210 12:23:06.533301    5644 command_runner.go:130] ! I0210 12:19:30.446658       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.4.0/24] 
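The kindnet lines above are its steady-state reconcile loop: roughly every ten seconds it walks the node list, handles the local node, and confirms the pod-CIDR route for each peer. At 12:17:40 the entry for multinode-032400-m03 changes from 172.29.138.52 / 10.244.1.0-style addressing to 172.29.129.10 with CIDR 10.244.4.0/24, and kindnet programs a matching route (routes.go:62). A minimal sketch of the equivalent manual route change on the node, assuming the same gateway address from the log (kindnet does this via netlink, not by shelling out):

    # hypothetical manual equivalent of the route kindnet added at 12:17:40
    ip route replace 10.244.4.0/24 via 172.29.129.10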
	I0210 12:23:06.550014    5644 logs.go:123] Gathering logs for Docker ...
	I0210 12:23:06.550014    5644 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0210 12:23:06.576472    5644 command_runner.go:130] > Feb 10 12:20:33 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0210 12:23:06.576472    5644 command_runner.go:130] > Feb 10 12:20:33 minikube cri-dockerd[223]: time="2025-02-10T12:20:33Z" level=info msg="Starting cri-dockerd 0.3.15 (c1c566e)"
	I0210 12:23:06.576472    5644 command_runner.go:130] > Feb 10 12:20:33 minikube cri-dockerd[223]: time="2025-02-10T12:20:33Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0210 12:23:06.576472    5644 command_runner.go:130] > Feb 10 12:20:33 minikube cri-dockerd[223]: time="2025-02-10T12:20:33Z" level=info msg="Start docker client with request timeout 0s"
	I0210 12:23:06.576472    5644 command_runner.go:130] > Feb 10 12:20:33 minikube cri-dockerd[223]: time="2025-02-10T12:20:33Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0210 12:23:06.576472    5644 command_runner.go:130] > Feb 10 12:20:33 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0210 12:23:06.576472    5644 command_runner.go:130] > Feb 10 12:20:33 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0210 12:23:06.576472    5644 command_runner.go:130] > Feb 10 12:20:33 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0210 12:23:06.576699    5644 command_runner.go:130] > Feb 10 12:20:36 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 1.
	I0210 12:23:06.576699    5644 command_runner.go:130] > Feb 10 12:20:36 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0210 12:23:06.576733    5644 command_runner.go:130] > Feb 10 12:20:36 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0210 12:23:06.576733    5644 command_runner.go:130] > Feb 10 12:20:36 minikube cri-dockerd[417]: time="2025-02-10T12:20:36Z" level=info msg="Starting cri-dockerd 0.3.15 (c1c566e)"
	I0210 12:23:06.576778    5644 command_runner.go:130] > Feb 10 12:20:36 minikube cri-dockerd[417]: time="2025-02-10T12:20:36Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0210 12:23:06.576811    5644 command_runner.go:130] > Feb 10 12:20:36 minikube cri-dockerd[417]: time="2025-02-10T12:20:36Z" level=info msg="Start docker client with request timeout 0s"
	I0210 12:23:06.576811    5644 command_runner.go:130] > Feb 10 12:20:36 minikube cri-dockerd[417]: time="2025-02-10T12:20:36Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0210 12:23:06.576811    5644 command_runner.go:130] > Feb 10 12:20:36 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0210 12:23:06.576811    5644 command_runner.go:130] > Feb 10 12:20:36 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0210 12:23:06.576811    5644 command_runner.go:130] > Feb 10 12:20:36 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0210 12:23:06.576811    5644 command_runner.go:130] > Feb 10 12:20:38 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 2.
	I0210 12:23:06.576811    5644 command_runner.go:130] > Feb 10 12:20:38 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0210 12:23:06.576811    5644 command_runner.go:130] > Feb 10 12:20:38 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0210 12:23:06.576811    5644 command_runner.go:130] > Feb 10 12:20:38 minikube cri-dockerd[425]: time="2025-02-10T12:20:38Z" level=info msg="Starting cri-dockerd 0.3.15 (c1c566e)"
	I0210 12:23:06.576811    5644 command_runner.go:130] > Feb 10 12:20:38 minikube cri-dockerd[425]: time="2025-02-10T12:20:38Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0210 12:23:06.576811    5644 command_runner.go:130] > Feb 10 12:20:38 minikube cri-dockerd[425]: time="2025-02-10T12:20:38Z" level=info msg="Start docker client with request timeout 0s"
	I0210 12:23:06.576811    5644 command_runner.go:130] > Feb 10 12:20:38 minikube cri-dockerd[425]: time="2025-02-10T12:20:38Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0210 12:23:06.576811    5644 command_runner.go:130] > Feb 10 12:20:38 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0210 12:23:06.576811    5644 command_runner.go:130] > Feb 10 12:20:38 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0210 12:23:06.576811    5644 command_runner.go:130] > Feb 10 12:20:38 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0210 12:23:06.576811    5644 command_runner.go:130] > Feb 10 12:20:40 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 3.
	I0210 12:23:06.576811    5644 command_runner.go:130] > Feb 10 12:20:40 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0210 12:23:06.576811    5644 command_runner.go:130] > Feb 10 12:20:40 minikube systemd[1]: cri-docker.service: Start request repeated too quickly.
	I0210 12:23:06.576811    5644 command_runner.go:130] > Feb 10 12:20:40 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0210 12:23:06.576811    5644 command_runner.go:130] > Feb 10 12:20:40 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
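The cri-docker unit fails three times in a row here because dockerd is not yet listening on /var/run/docker.sock, and after the third restart systemd's start rate limiting trips ("Start request repeated too quickly"), leaving the unit in the failed state until it is started again later in the boot. If one needed to clear that state by hand, a sketch using standard systemd commands:

    # inspect the failure and the restart history
    systemctl status cri-docker
    journalctl -u cri-docker -n 50
    # clear the start-limit hit, then start the unit again
    systemctl reset-failed cri-docker
    systemctl start cri-docker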
	I0210 12:23:06.576811    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 systemd[1]: Starting Docker Application Container Engine...
	I0210 12:23:06.576811    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[652]: time="2025-02-10T12:21:19.226981799Z" level=info msg="Starting up"
	I0210 12:23:06.576811    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[652]: time="2025-02-10T12:21:19.228905904Z" level=info msg="containerd not running, starting managed containerd"
	I0210 12:23:06.576811    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[652]: time="2025-02-10T12:21:19.229983406Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=658
	I0210 12:23:06.576811    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.261668386Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	I0210 12:23:06.576811    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.289760856Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0210 12:23:06.576811    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.289873057Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0210 12:23:06.576811    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.289938357Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0210 12:23:06.577344    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.289955257Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0210 12:23:06.577344    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.290688059Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0210 12:23:06.577344    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.290855359Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0210 12:23:06.577344    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.291046360Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0210 12:23:06.577452    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.291150260Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0210 12:23:06.577452    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.291171360Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0210 12:23:06.577452    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.291182560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0210 12:23:06.577452    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.291676861Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0210 12:23:06.577554    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.292369263Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0210 12:23:06.577554    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.300517383Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0210 12:23:06.577554    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.300550484Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0210 12:23:06.577645    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.300790784Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0210 12:23:06.577645    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.300846284Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0210 12:23:06.577645    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.301486786Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0210 12:23:06.577645    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.301530786Z" level=info msg="metadata content store policy set" policy=shared
	I0210 12:23:06.577645    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.306800699Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0210 12:23:06.577782    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.306938800Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0210 12:23:06.577782    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.306962400Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0210 12:23:06.577782    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.306982400Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0210 12:23:06.577782    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.306998000Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0210 12:23:06.577872    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.307070900Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0210 12:23:06.577872    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.307354201Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0210 12:23:06.577872    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.307779102Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0210 12:23:06.577964    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.307803302Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0210 12:23:06.577964    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.307819902Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0210 12:23:06.577964    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.307835502Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0210 12:23:06.577964    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.307854902Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0210 12:23:06.577964    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.307868302Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0210 12:23:06.578053    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.307886902Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0210 12:23:06.578053    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.307903802Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0210 12:23:06.578053    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.307918302Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0210 12:23:06.578053    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.307933302Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0210 12:23:06.578053    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.307946902Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0210 12:23:06.578164    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.307973202Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0210 12:23:06.578164    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.307988502Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0210 12:23:06.578164    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308002102Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0210 12:23:06.578164    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308018302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0210 12:23:06.578164    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308031702Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0210 12:23:06.578294    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308046102Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0210 12:23:06.578294    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308058902Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0210 12:23:06.578294    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308073102Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0210 12:23:06.578380    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308088402Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0210 12:23:06.578380    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308111803Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0210 12:23:06.578380    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308139203Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0210 12:23:06.578380    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308154703Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0210 12:23:06.578465    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308168203Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0210 12:23:06.578465    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308185103Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0210 12:23:06.578465    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308206703Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0210 12:23:06.578543    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308220903Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0210 12:23:06.578543    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308233503Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0210 12:23:06.578543    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308287903Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0210 12:23:06.578620    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308326803Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0210 12:23:06.578620    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308340203Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0210 12:23:06.578697    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308354603Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0210 12:23:06.581922    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308366403Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0210 12:23:06.582455    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308381203Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0210 12:23:06.582506    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308392603Z" level=info msg="NRI interface is disabled by configuration."
	I0210 12:23:06.582548    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308672504Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0210 12:23:06.582583    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308811104Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0210 12:23:06.582583    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308872804Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0210 12:23:06.582583    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308911105Z" level=info msg="containerd successfully booted in 0.050730s"
	I0210 12:23:06.582583    5644 command_runner.go:130] > Feb 10 12:21:20 multinode-032400 dockerd[652]: time="2025-02-10T12:21:20.282476810Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0210 12:23:06.582583    5644 command_runner.go:130] > Feb 10 12:21:20 multinode-032400 dockerd[652]: time="2025-02-10T12:21:20.530993194Z" level=info msg="Loading containers: start."
	I0210 12:23:06.582583    5644 command_runner.go:130] > Feb 10 12:21:20 multinode-032400 dockerd[652]: time="2025-02-10T12:21:20.796529619Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0210 12:23:06.582583    5644 command_runner.go:130] > Feb 10 12:21:20 multinode-032400 dockerd[652]: time="2025-02-10T12:21:20.946848197Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0210 12:23:06.582583    5644 command_runner.go:130] > Feb 10 12:21:21 multinode-032400 dockerd[652]: time="2025-02-10T12:21:21.063713732Z" level=info msg="Loading containers: done."
	I0210 12:23:06.582583    5644 command_runner.go:130] > Feb 10 12:21:21 multinode-032400 dockerd[652]: time="2025-02-10T12:21:21.090121636Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	I0210 12:23:06.582583    5644 command_runner.go:130] > Feb 10 12:21:21 multinode-032400 dockerd[652]: time="2025-02-10T12:21:21.090236272Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	I0210 12:23:06.582583    5644 command_runner.go:130] > Feb 10 12:21:21 multinode-032400 dockerd[652]: time="2025-02-10T12:21:21.090266381Z" level=info msg="Docker daemon" commit=92a8393 containerd-snapshotter=false storage-driver=overlay2 version=27.4.0
	I0210 12:23:06.582583    5644 command_runner.go:130] > Feb 10 12:21:21 multinode-032400 dockerd[652]: time="2025-02-10T12:21:21.090811448Z" level=info msg="Daemon has completed initialization"
	I0210 12:23:06.582583    5644 command_runner.go:130] > Feb 10 12:21:21 multinode-032400 dockerd[652]: time="2025-02-10T12:21:21.131876651Z" level=info msg="API listen on /var/run/docker.sock"
	I0210 12:23:06.582583    5644 command_runner.go:130] > Feb 10 12:21:21 multinode-032400 dockerd[652]: time="2025-02-10T12:21:21.132103020Z" level=info msg="API listen on [::]:2376"
	I0210 12:23:06.582583    5644 command_runner.go:130] > Feb 10 12:21:21 multinode-032400 systemd[1]: Started Docker Application Container Engine.
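Docker comes up cleanly, but two warnings in the startup sequence above matter on a Kubernetes host: the ip6tables nat table is unavailable (so no IPv6 NAT chains are created), and bridge-nf-call-iptables / bridge-nf-call-ip6tables are disabled, meaning bridged pod traffic bypasses iptables until those sysctls are set. A minimal sketch of the usual fix, assuming the br_netfilter module exists in the guest kernel (kubeadm's preflight checks normally enforce the first sysctl):

    # hypothetical manual fix for the bridge-nf-call warnings
    modprobe br_netfilter
    sysctl -w net.bridge.bridge-nf-call-iptables=1
    sysctl -w net.bridge.bridge-nf-call-ip6tables=1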
	I0210 12:23:06.582583    5644 command_runner.go:130] > Feb 10 12:21:45 multinode-032400 dockerd[652]: time="2025-02-10T12:21:45.024556788Z" level=info msg="Processing signal 'terminated'"
	I0210 12:23:06.582583    5644 command_runner.go:130] > Feb 10 12:21:45 multinode-032400 dockerd[652]: time="2025-02-10T12:21:45.027219616Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0210 12:23:06.582583    5644 command_runner.go:130] > Feb 10 12:21:45 multinode-032400 systemd[1]: Stopping Docker Application Container Engine...
	I0210 12:23:06.582583    5644 command_runner.go:130] > Feb 10 12:21:45 multinode-032400 dockerd[652]: time="2025-02-10T12:21:45.028493777Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0210 12:23:06.582583    5644 command_runner.go:130] > Feb 10 12:21:45 multinode-032400 dockerd[652]: time="2025-02-10T12:21:45.028923098Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0210 12:23:06.582583    5644 command_runner.go:130] > Feb 10 12:21:45 multinode-032400 dockerd[652]: time="2025-02-10T12:21:45.029499825Z" level=info msg="Daemon shutdown complete"
	I0210 12:23:06.582583    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 systemd[1]: docker.service: Deactivated successfully.
	I0210 12:23:06.582583    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 systemd[1]: Stopped Docker Application Container Engine.
	I0210 12:23:06.582583    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 systemd[1]: Starting Docker Application Container Engine...
	I0210 12:23:06.582583    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1101]: time="2025-02-10T12:21:46.084081094Z" level=info msg="Starting up"
	I0210 12:23:06.582583    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1101]: time="2025-02-10T12:21:46.084976538Z" level=info msg="containerd not running, starting managed containerd"
	I0210 12:23:06.582583    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1101]: time="2025-02-10T12:21:46.085890382Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1108
	I0210 12:23:06.582583    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.115367801Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	I0210 12:23:06.583125    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.141577962Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0210 12:23:06.583175    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.141694568Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0210 12:23:06.583216    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.141841575Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0210 12:23:06.583216    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.141861576Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0210 12:23:06.583251    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.141895578Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0210 12:23:06.583251    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.141908978Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0210 12:23:06.583251    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.142072686Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0210 12:23:06.583251    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.142222293Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0210 12:23:06.583251    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.142244195Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0210 12:23:06.583251    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.142261595Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0210 12:23:06.583251    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.142290097Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0210 12:23:06.583251    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.142407302Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0210 12:23:06.583251    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.145701161Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0210 12:23:06.583251    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.145822967Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0210 12:23:06.583251    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.145984775Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0210 12:23:06.583251    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.146081579Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0210 12:23:06.583251    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.146115481Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0210 12:23:06.583251    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.146134282Z" level=info msg="metadata content store policy set" policy=shared
	I0210 12:23:06.583251    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.146552002Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0210 12:23:06.583251    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.146601004Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0210 12:23:06.583251    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.146617705Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0210 12:23:06.583251    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.146633006Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0210 12:23:06.583251    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.146647807Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0210 12:23:06.583251    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.146697109Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0210 12:23:06.583251    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147110429Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0210 12:23:06.583251    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147324539Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0210 12:23:06.583802    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147423444Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0210 12:23:06.583802    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147441845Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0210 12:23:06.583845    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147456345Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0210 12:23:06.583912    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147470646Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0210 12:23:06.583912    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147499048Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0210 12:23:06.583912    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147516448Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0210 12:23:06.583912    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147532049Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0210 12:23:06.583912    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147546750Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0210 12:23:06.583912    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147559350Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0210 12:23:06.583912    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147573151Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0210 12:23:06.583912    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147593252Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0210 12:23:06.583912    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147608153Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0210 12:23:06.583912    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147634954Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0210 12:23:06.583912    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147654755Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0210 12:23:06.583912    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147668856Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0210 12:23:06.583912    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147683556Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0210 12:23:06.583912    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147697257Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0210 12:23:06.583912    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147710658Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0210 12:23:06.583912    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147724858Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0210 12:23:06.583912    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147802262Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0210 12:23:06.583912    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147821763Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0210 12:23:06.583912    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147834964Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0210 12:23:06.583912    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147859465Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0210 12:23:06.583912    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147878466Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0210 12:23:06.583912    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147900267Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0210 12:23:06.583912    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147914067Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0210 12:23:06.583912    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147927668Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0210 12:23:06.583912    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.148050374Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0210 12:23:06.583912    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.148087376Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0210 12:23:06.583912    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.148100476Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0210 12:23:06.583912    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.148113477Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0210 12:23:06.584447    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.148124578Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0210 12:23:06.584447    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.148138778Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0210 12:23:06.584447    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.148151679Z" level=info msg="NRI interface is disabled by configuration."
	I0210 12:23:06.584499    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.148991719Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0210 12:23:06.584531    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.149071923Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0210 12:23:06.584531    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.149146027Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0210 12:23:06.584531    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.149657651Z" level=info msg="containerd successfully booted in 0.035320s"
	I0210 12:23:06.584531    5644 command_runner.go:130] > Feb 10 12:21:47 multinode-032400 dockerd[1101]: time="2025-02-10T12:21:47.124814897Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0210 12:23:06.584604    5644 command_runner.go:130] > Feb 10 12:21:47 multinode-032400 dockerd[1101]: time="2025-02-10T12:21:47.155572178Z" level=info msg="Loading containers: start."
	I0210 12:23:06.584604    5644 command_runner.go:130] > Feb 10 12:21:47 multinode-032400 dockerd[1101]: time="2025-02-10T12:21:47.380096187Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0210 12:23:06.584604    5644 command_runner.go:130] > Feb 10 12:21:47 multinode-032400 dockerd[1101]: time="2025-02-10T12:21:47.494116276Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0210 12:23:06.584604    5644 command_runner.go:130] > Feb 10 12:21:47 multinode-032400 dockerd[1101]: time="2025-02-10T12:21:47.609502830Z" level=info msg="Loading containers: done."
	I0210 12:23:06.584604    5644 command_runner.go:130] > Feb 10 12:21:47 multinode-032400 dockerd[1101]: time="2025-02-10T12:21:47.634336526Z" level=info msg="Docker daemon" commit=92a8393 containerd-snapshotter=false storage-driver=overlay2 version=27.4.0
	I0210 12:23:06.584604    5644 command_runner.go:130] > Feb 10 12:21:47 multinode-032400 dockerd[1101]: time="2025-02-10T12:21:47.634493434Z" level=info msg="Daemon has completed initialization"
	I0210 12:23:06.584604    5644 command_runner.go:130] > Feb 10 12:21:47 multinode-032400 dockerd[1101]: time="2025-02-10T12:21:47.668508371Z" level=info msg="API listen on /var/run/docker.sock"
	I0210 12:23:06.584604    5644 command_runner.go:130] > Feb 10 12:21:47 multinode-032400 dockerd[1101]: time="2025-02-10T12:21:47.668715581Z" level=info msg="API listen on [::]:2376"
	I0210 12:23:06.584604    5644 command_runner.go:130] > Feb 10 12:21:47 multinode-032400 systemd[1]: Started Docker Application Container Engine.
	I0210 12:23:06.584604    5644 command_runner.go:130] > Feb 10 12:21:48 multinode-032400 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0210 12:23:06.584604    5644 command_runner.go:130] > Feb 10 12:21:48 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:21:48Z" level=info msg="Starting cri-dockerd 0.3.15 (c1c566e)"
	I0210 12:23:06.584604    5644 command_runner.go:130] > Feb 10 12:21:48 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:21:48Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0210 12:23:06.584604    5644 command_runner.go:130] > Feb 10 12:21:48 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:21:48Z" level=info msg="Start docker client with request timeout 0s"
	I0210 12:23:06.584604    5644 command_runner.go:130] > Feb 10 12:21:48 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:21:48Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I0210 12:23:06.584604    5644 command_runner.go:130] > Feb 10 12:21:48 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:21:48Z" level=info msg="Loaded network plugin cni"
	I0210 12:23:06.584604    5644 command_runner.go:130] > Feb 10 12:21:48 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:21:48Z" level=info msg="Docker cri networking managed by network plugin cni"
	I0210 12:23:06.584604    5644 command_runner.go:130] > Feb 10 12:21:48 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:21:48Z" level=info msg="Setting cgroupDriver cgroupfs"
	I0210 12:23:06.584604    5644 command_runner.go:130] > Feb 10 12:21:48 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:21:48Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I0210 12:23:06.584604    5644 command_runner.go:130] > Feb 10 12:21:48 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:21:48Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I0210 12:23:06.584604    5644 command_runner.go:130] > Feb 10 12:21:48 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:21:48Z" level=info msg="Start cri-dockerd grpc backend"
	I0210 12:23:06.584604    5644 command_runner.go:130] > Feb 10 12:21:48 multinode-032400 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I0210 12:23:06.584604    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:21:53Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-58667487b6-8shfg_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"ed034f1578c961d966543fd6901ac487893b2d9c55235293f852b6eba2ffef59\""
	I0210 12:23:06.584604    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:21:53Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-668d6bf9bc-w8rr9_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"794995bca6b5b46f3dd1b76ae2e6fa45046bd172772508f61b870824d72e297b\""
	I0210 12:23:06.584604    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:53.688319673Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0210 12:23:06.584604    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:53.688604987Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0210 12:23:06.584604    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:53.688649189Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:06.584604    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:53.689336722Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:06.585249    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:53.785048930Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0210 12:23:06.585292    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:53.785211338Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0210 12:23:06.585292    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:53.785249040Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:06.585333    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:53.787201934Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:06.585362    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:21:53Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8059b20f65945591b4ecc2d3aa8b6e119909c5a5c01922ce471ced5e88f22c37/resolv.conf as [nameserver 172.29.128.1]"
	I0210 12:23:06.585412    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:53.859964137Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0210 12:23:06.585412    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:53.860819978Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0210 12:23:06.585412    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:53.861045089Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:06.585412    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:53.861827326Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:06.585483    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:53.866236838Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0210 12:23:06.585483    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:53.866716362Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0210 12:23:06.585548    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:53.867048178Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:06.585548    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:53.870617949Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:06.585614    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:21:53Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/016ad4d720680495a67c18e1390ee8683611cb3b95ee6ded4cb744a3ca3655d5/resolv.conf as [nameserver 172.29.128.1]"
	I0210 12:23:06.585614    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:21:54Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ae5696c38864ac99a03d829d566b6a832f69523032ff0af02300ad95789380ce/resolv.conf as [nameserver 172.29.128.1]"
	I0210 12:23:06.585675    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:21:54Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/9c3e574a334980f77de3f0fd8bd1af8a3597c32a3c5f9d94fec925b6f3c76d4e/resolv.conf as [nameserver 172.29.128.1]"
	I0210 12:23:06.585675    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:54.054858919Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0210 12:23:06.585675    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:54.055041728Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0210 12:23:06.585675    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:54.055266639Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:06.585675    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:54.055571653Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:06.585780    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:54.351555902Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0210 12:23:06.585780    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:54.351618605Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0210 12:23:06.585780    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:54.351631706Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:06.585847    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:54.351796314Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:06.585912    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:54.356626447Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0210 12:23:06.585912    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:54.356728951Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0210 12:23:06.585912    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:54.356756153Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:06.585912    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:54.357270278Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:06.585995    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:54.400696468Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0210 12:23:06.585995    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:54.400993282Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0210 12:23:06.585995    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:54.401148890Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:06.586063    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:54.401585911Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:06.586063    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:21:58Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	I0210 12:23:06.586063    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:59.586724531Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0210 12:23:06.586063    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:59.586851637Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0210 12:23:06.586063    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:59.586897839Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:06.586063    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:59.587096549Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:06.586063    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:59.622779367Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0210 12:23:06.586063    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:59.622857870Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0210 12:23:06.586063    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:59.622884072Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:06.586063    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:59.623098482Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:06.586063    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:59.638867841Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0210 12:23:06.586063    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:59.639329463Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0210 12:23:06.586063    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:59.639489271Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:06.586063    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:59.639867989Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:06.586063    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:21:59Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b9afdceca416df5c16e84b3e0c78f25ca1fa77413c28fe48e1fe1aceabb91c44/resolv.conf as [nameserver 172.29.128.1]"
	I0210 12:23:06.586063    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:21:59Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/bd998e6ebeb39ea743489a0e4d48c282d6e8da289cd8341c4c5099d9836e6f73/resolv.conf as [nameserver 172.29.128.1]"
	I0210 12:23:06.586063    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:59.937150501Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0210 12:23:06.586063    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:59.937256006Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0210 12:23:06.586063    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:59.937275107Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:06.586063    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:59.937381912Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:06.586063    5644 command_runner.go:130] > Feb 10 12:22:00 multinode-032400 dockerd[1108]: time="2025-02-10T12:22:00.025525655Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0210 12:23:06.586063    5644 command_runner.go:130] > Feb 10 12:22:00 multinode-032400 dockerd[1108]: time="2025-02-10T12:22:00.025702264Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0210 12:23:06.586063    5644 command_runner.go:130] > Feb 10 12:22:00 multinode-032400 dockerd[1108]: time="2025-02-10T12:22:00.025767267Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:06.586063    5644 command_runner.go:130] > Feb 10 12:22:00 multinode-032400 dockerd[1108]: time="2025-02-10T12:22:00.026050381Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:06.586063    5644 command_runner.go:130] > Feb 10 12:22:00 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:22:00Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e5b54589cf0f1effd7254987c6ce12359f402596b5494fa5c8f0bf296c219b89/resolv.conf as [nameserver 172.29.128.1]"
	I0210 12:23:06.586063    5644 command_runner.go:130] > Feb 10 12:22:00 multinode-032400 dockerd[1108]: time="2025-02-10T12:22:00.385763898Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0210 12:23:06.586063    5644 command_runner.go:130] > Feb 10 12:22:00 multinode-032400 dockerd[1108]: time="2025-02-10T12:22:00.385836401Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0210 12:23:06.586063    5644 command_runner.go:130] > Feb 10 12:22:00 multinode-032400 dockerd[1108]: time="2025-02-10T12:22:00.385859502Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:06.586063    5644 command_runner.go:130] > Feb 10 12:22:00 multinode-032400 dockerd[1108]: time="2025-02-10T12:22:00.385961307Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:06.586063    5644 command_runner.go:130] > Feb 10 12:22:30 multinode-032400 dockerd[1101]: time="2025-02-10T12:22:30.686630853Z" level=info msg="ignoring event" container=e57ea4c7f300b864e801bda292b196e3a270d6bd3eb9cfac6bf5f66a858f9f7c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0210 12:23:06.586063    5644 command_runner.go:130] > Feb 10 12:22:30 multinode-032400 dockerd[1108]: time="2025-02-10T12:22:30.687686798Z" level=info msg="shim disconnected" id=e57ea4c7f300b864e801bda292b196e3a270d6bd3eb9cfac6bf5f66a858f9f7c namespace=moby
	I0210 12:23:06.586063    5644 command_runner.go:130] > Feb 10 12:22:30 multinode-032400 dockerd[1108]: time="2025-02-10T12:22:30.687806803Z" level=warning msg="cleaning up after shim disconnected" id=e57ea4c7f300b864e801bda292b196e3a270d6bd3eb9cfac6bf5f66a858f9f7c namespace=moby
	I0210 12:23:06.586063    5644 command_runner.go:130] > Feb 10 12:22:30 multinode-032400 dockerd[1108]: time="2025-02-10T12:22:30.687819404Z" level=info msg="cleaning up dead shim" namespace=moby
	I0210 12:23:06.586063    5644 command_runner.go:130] > Feb 10 12:22:44 multinode-032400 dockerd[1108]: time="2025-02-10T12:22:44.147917374Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0210 12:23:06.586063    5644 command_runner.go:130] > Feb 10 12:22:44 multinode-032400 dockerd[1108]: time="2025-02-10T12:22:44.148327693Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0210 12:23:06.586063    5644 command_runner.go:130] > Feb 10 12:22:44 multinode-032400 dockerd[1108]: time="2025-02-10T12:22:44.148398496Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:06.586063    5644 command_runner.go:130] > Feb 10 12:22:44 multinode-032400 dockerd[1108]: time="2025-02-10T12:22:44.150441890Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:06.586063    5644 command_runner.go:130] > Feb 10 12:23:03 multinode-032400 dockerd[1108]: time="2025-02-10T12:23:03.123856000Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0210 12:23:06.586063    5644 command_runner.go:130] > Feb 10 12:23:03 multinode-032400 dockerd[1108]: time="2025-02-10T12:23:03.127018550Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0210 12:23:06.586063    5644 command_runner.go:130] > Feb 10 12:23:03 multinode-032400 dockerd[1108]: time="2025-02-10T12:23:03.127103354Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:06.586828    5644 command_runner.go:130] > Feb 10 12:23:03 multinode-032400 dockerd[1108]: time="2025-02-10T12:23:03.127439770Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:06.586828    5644 command_runner.go:130] > Feb 10 12:23:03 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:23:03Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e58006549b603113870cd02660dd94559d10ecb0913a1bdd2c81910c3d60a706/resolv.conf as [nameserver 172.29.128.1]"
	I0210 12:23:06.586828    5644 command_runner.go:130] > Feb 10 12:23:03 multinode-032400 dockerd[1108]: time="2025-02-10T12:23:03.402609649Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0210 12:23:06.586828    5644 command_runner.go:130] > Feb 10 12:23:03 multinode-032400 dockerd[1108]: time="2025-02-10T12:23:03.402766755Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0210 12:23:06.586828    5644 command_runner.go:130] > Feb 10 12:23:03 multinode-032400 dockerd[1108]: time="2025-02-10T12:23:03.402783356Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:06.586828    5644 command_runner.go:130] > Feb 10 12:23:03 multinode-032400 dockerd[1108]: time="2025-02-10T12:23:03.402879760Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:06.586960    5644 command_runner.go:130] > Feb 10 12:23:03 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:23:03Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0f8bbdd2fed29af0f92968c554c574d411a6dcf8a8d801926012379ff2a258af/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	I0210 12:23:06.586960    5644 command_runner.go:130] > Feb 10 12:23:03 multinode-032400 dockerd[1108]: time="2025-02-10T12:23:03.617663803Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0210 12:23:06.587022    5644 command_runner.go:130] > Feb 10 12:23:03 multinode-032400 dockerd[1108]: time="2025-02-10T12:23:03.618733746Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0210 12:23:06.587022    5644 command_runner.go:130] > Feb 10 12:23:03 multinode-032400 dockerd[1108]: time="2025-02-10T12:23:03.619050158Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:06.587022    5644 command_runner.go:130] > Feb 10 12:23:03 multinode-032400 dockerd[1108]: time="2025-02-10T12:23:03.619308269Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:06.587094    5644 command_runner.go:130] > Feb 10 12:23:03 multinode-032400 dockerd[1108]: time="2025-02-10T12:23:03.840758177Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0210 12:23:06.587094    5644 command_runner.go:130] > Feb 10 12:23:03 multinode-032400 dockerd[1108]: time="2025-02-10T12:23:03.840943685Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0210 12:23:06.587094    5644 command_runner.go:130] > Feb 10 12:23:03 multinode-032400 dockerd[1108]: time="2025-02-10T12:23:03.840973286Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:06.587165    5644 command_runner.go:130] > Feb 10 12:23:03 multinode-032400 dockerd[1108]: time="2025-02-10T12:23:03.848920902Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:06.615728    5644 logs.go:123] Gathering logs for kube-apiserver [f368bd876774] ...
	I0210 12:23:06.615728    5644 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f368bd876774"
	I0210 12:23:06.646088    5644 command_runner.go:130] ! W0210 12:21:55.142359       1 registry.go:256] calling componentGlobalsRegistry.AddFlags more than once, the registry will be set by the latest flags
	I0210 12:23:06.646911    5644 command_runner.go:130] ! I0210 12:21:55.145301       1 options.go:238] external host was not specified, using 172.29.129.181
	I0210 12:23:06.646911    5644 command_runner.go:130] ! I0210 12:21:55.152669       1 server.go:143] Version: v1.32.1
	I0210 12:23:06.646958    5644 command_runner.go:130] ! I0210 12:21:55.155205       1 server.go:145] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0210 12:23:06.646958    5644 command_runner.go:130] ! I0210 12:21:56.105409       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0210 12:23:06.647003    5644 command_runner.go:130] ! I0210 12:21:56.132590       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0210 12:23:06.647036    5644 command_runner.go:130] ! I0210 12:21:56.143671       1 plugins.go:157] Loaded 13 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I0210 12:23:06.647036    5644 command_runner.go:130] ! I0210 12:21:56.143842       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0210 12:23:06.647089    5644 command_runner.go:130] ! I0210 12:21:56.149478       1 instance.go:233] Using reconciler: lease
	I0210 12:23:06.647294    5644 command_runner.go:130] ! I0210 12:21:56.242968       1 handler.go:286] Adding GroupVersion apiextensions.k8s.io v1 to ResourceManager
	I0210 12:23:06.647294    5644 command_runner.go:130] ! W0210 12:21:56.243233       1 genericapiserver.go:767] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
	I0210 12:23:06.647294    5644 command_runner.go:130] ! I0210 12:21:56.576352       1 handler.go:286] Adding GroupVersion  v1 to ResourceManager
	I0210 12:23:06.647294    5644 command_runner.go:130] ! I0210 12:21:56.576865       1 apis.go:106] API group "internal.apiserver.k8s.io" is not enabled, skipping.
	I0210 12:23:06.647294    5644 command_runner.go:130] ! I0210 12:21:56.980973       1 apis.go:106] API group "storagemigration.k8s.io" is not enabled, skipping.
	I0210 12:23:06.647294    5644 command_runner.go:130] ! I0210 12:21:57.288861       1 apis.go:106] API group "resource.k8s.io" is not enabled, skipping.
	I0210 12:23:06.647294    5644 command_runner.go:130] ! I0210 12:21:57.344145       1 handler.go:286] Adding GroupVersion authentication.k8s.io v1 to ResourceManager
	I0210 12:23:06.647294    5644 command_runner.go:130] ! W0210 12:21:57.344213       1 genericapiserver.go:767] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
	I0210 12:23:06.647294    5644 command_runner.go:130] ! W0210 12:21:57.344222       1 genericapiserver.go:767] Skipping API authentication.k8s.io/v1alpha1 because it has no resources.
	I0210 12:23:06.647294    5644 command_runner.go:130] ! I0210 12:21:57.345004       1 handler.go:286] Adding GroupVersion authorization.k8s.io v1 to ResourceManager
	I0210 12:23:06.647294    5644 command_runner.go:130] ! W0210 12:21:57.345107       1 genericapiserver.go:767] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
	I0210 12:23:06.647294    5644 command_runner.go:130] ! I0210 12:21:57.346842       1 handler.go:286] Adding GroupVersion autoscaling v2 to ResourceManager
	I0210 12:23:06.647294    5644 command_runner.go:130] ! I0210 12:21:57.348477       1 handler.go:286] Adding GroupVersion autoscaling v1 to ResourceManager
	I0210 12:23:06.647294    5644 command_runner.go:130] ! W0210 12:21:57.349989       1 genericapiserver.go:767] Skipping API autoscaling/v2beta1 because it has no resources.
	I0210 12:23:06.647294    5644 command_runner.go:130] ! W0210 12:21:57.349999       1 genericapiserver.go:767] Skipping API autoscaling/v2beta2 because it has no resources.
	I0210 12:23:06.647294    5644 command_runner.go:130] ! I0210 12:21:57.351719       1 handler.go:286] Adding GroupVersion batch v1 to ResourceManager
	I0210 12:23:06.647294    5644 command_runner.go:130] ! W0210 12:21:57.351750       1 genericapiserver.go:767] Skipping API batch/v1beta1 because it has no resources.
	I0210 12:23:06.647294    5644 command_runner.go:130] ! I0210 12:21:57.352799       1 handler.go:286] Adding GroupVersion certificates.k8s.io v1 to ResourceManager
	I0210 12:23:06.647294    5644 command_runner.go:130] ! W0210 12:21:57.352837       1 genericapiserver.go:767] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
	I0210 12:23:06.647294    5644 command_runner.go:130] ! W0210 12:21:57.352843       1 genericapiserver.go:767] Skipping API certificates.k8s.io/v1alpha1 because it has no resources.
	I0210 12:23:06.647294    5644 command_runner.go:130] ! I0210 12:21:57.353578       1 handler.go:286] Adding GroupVersion coordination.k8s.io v1 to ResourceManager
	I0210 12:23:06.647294    5644 command_runner.go:130] ! W0210 12:21:57.353613       1 genericapiserver.go:767] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
	I0210 12:23:06.647294    5644 command_runner.go:130] ! W0210 12:21:57.353620       1 genericapiserver.go:767] Skipping API coordination.k8s.io/v1alpha2 because it has no resources.
	I0210 12:23:06.647294    5644 command_runner.go:130] ! I0210 12:21:57.354314       1 handler.go:286] Adding GroupVersion discovery.k8s.io v1 to ResourceManager
	I0210 12:23:06.647294    5644 command_runner.go:130] ! W0210 12:21:57.354346       1 genericapiserver.go:767] Skipping API discovery.k8s.io/v1beta1 because it has no resources.
	I0210 12:23:06.647294    5644 command_runner.go:130] ! I0210 12:21:57.356000       1 handler.go:286] Adding GroupVersion networking.k8s.io v1 to ResourceManager
	I0210 12:23:06.647294    5644 command_runner.go:130] ! W0210 12:21:57.356105       1 genericapiserver.go:767] Skipping API networking.k8s.io/v1beta1 because it has no resources.
	I0210 12:23:06.647294    5644 command_runner.go:130] ! W0210 12:21:57.356115       1 genericapiserver.go:767] Skipping API networking.k8s.io/v1alpha1 because it has no resources.
	I0210 12:23:06.647992    5644 command_runner.go:130] ! I0210 12:21:57.356604       1 handler.go:286] Adding GroupVersion node.k8s.io v1 to ResourceManager
	I0210 12:23:06.648026    5644 command_runner.go:130] ! W0210 12:21:57.356637       1 genericapiserver.go:767] Skipping API node.k8s.io/v1beta1 because it has no resources.
	I0210 12:23:06.648150    5644 command_runner.go:130] ! W0210 12:21:57.356644       1 genericapiserver.go:767] Skipping API node.k8s.io/v1alpha1 because it has no resources.
	I0210 12:23:06.648150    5644 command_runner.go:130] ! I0210 12:21:57.357607       1 handler.go:286] Adding GroupVersion policy v1 to ResourceManager
	I0210 12:23:06.648150    5644 command_runner.go:130] ! W0210 12:21:57.357643       1 genericapiserver.go:767] Skipping API policy/v1beta1 because it has no resources.
	I0210 12:23:06.648150    5644 command_runner.go:130] ! I0210 12:21:57.359912       1 handler.go:286] Adding GroupVersion rbac.authorization.k8s.io v1 to ResourceManager
	I0210 12:23:06.648150    5644 command_runner.go:130] ! W0210 12:21:57.359944       1 genericapiserver.go:767] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
	I0210 12:23:06.648150    5644 command_runner.go:130] ! W0210 12:21:57.359952       1 genericapiserver.go:767] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
	I0210 12:23:06.648150    5644 command_runner.go:130] ! I0210 12:21:57.360554       1 handler.go:286] Adding GroupVersion scheduling.k8s.io v1 to ResourceManager
	I0210 12:23:06.648150    5644 command_runner.go:130] ! W0210 12:21:57.360628       1 genericapiserver.go:767] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
	I0210 12:23:06.648150    5644 command_runner.go:130] ! W0210 12:21:57.360635       1 genericapiserver.go:767] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
	I0210 12:23:06.648150    5644 command_runner.go:130] ! I0210 12:21:57.363612       1 handler.go:286] Adding GroupVersion storage.k8s.io v1 to ResourceManager
	I0210 12:23:06.648150    5644 command_runner.go:130] ! W0210 12:21:57.363646       1 genericapiserver.go:767] Skipping API storage.k8s.io/v1beta1 because it has no resources.
	I0210 12:23:06.648150    5644 command_runner.go:130] ! W0210 12:21:57.363653       1 genericapiserver.go:767] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
	I0210 12:23:06.648150    5644 command_runner.go:130] ! I0210 12:21:57.365567       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1 to ResourceManager
	I0210 12:23:06.648150    5644 command_runner.go:130] ! W0210 12:21:57.365626       1 genericapiserver.go:767] Skipping API flowcontrol.apiserver.k8s.io/v1beta3 because it has no resources.
	I0210 12:23:06.648150    5644 command_runner.go:130] ! W0210 12:21:57.365637       1 genericapiserver.go:767] Skipping API flowcontrol.apiserver.k8s.io/v1beta2 because it has no resources.
	I0210 12:23:06.648150    5644 command_runner.go:130] ! W0210 12:21:57.365642       1 genericapiserver.go:767] Skipping API flowcontrol.apiserver.k8s.io/v1beta1 because it has no resources.
	I0210 12:23:06.648150    5644 command_runner.go:130] ! I0210 12:21:57.371693       1 handler.go:286] Adding GroupVersion apps v1 to ResourceManager
	I0210 12:23:06.648150    5644 command_runner.go:130] ! W0210 12:21:57.371726       1 genericapiserver.go:767] Skipping API apps/v1beta2 because it has no resources.
	I0210 12:23:06.648150    5644 command_runner.go:130] ! W0210 12:21:57.371732       1 genericapiserver.go:767] Skipping API apps/v1beta1 because it has no resources.
	I0210 12:23:06.648150    5644 command_runner.go:130] ! I0210 12:21:57.374238       1 handler.go:286] Adding GroupVersion admissionregistration.k8s.io v1 to ResourceManager
	I0210 12:23:06.648150    5644 command_runner.go:130] ! W0210 12:21:57.374275       1 genericapiserver.go:767] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
	I0210 12:23:06.648150    5644 command_runner.go:130] ! W0210 12:21:57.374303       1 genericapiserver.go:767] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
	I0210 12:23:06.648150    5644 command_runner.go:130] ! I0210 12:21:57.375143       1 handler.go:286] Adding GroupVersion events.k8s.io v1 to ResourceManager
	I0210 12:23:06.648150    5644 command_runner.go:130] ! W0210 12:21:57.375210       1 genericapiserver.go:767] Skipping API events.k8s.io/v1beta1 because it has no resources.
	I0210 12:23:06.648150    5644 command_runner.go:130] ! I0210 12:21:57.389235       1 handler.go:286] Adding GroupVersion apiregistration.k8s.io v1 to ResourceManager
	I0210 12:23:06.648150    5644 command_runner.go:130] ! W0210 12:21:57.389296       1 genericapiserver.go:767] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
	I0210 12:23:06.648827    5644 command_runner.go:130] ! I0210 12:21:58.039635       1 secure_serving.go:213] Serving securely on [::]:8443
	I0210 12:23:06.648827    5644 command_runner.go:130] ! I0210 12:21:58.039773       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0210 12:23:06.648827    5644 command_runner.go:130] ! I0210 12:21:58.040121       1 dynamic_serving_content.go:135] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0210 12:23:06.648827    5644 command_runner.go:130] ! I0210 12:21:58.040710       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0210 12:23:06.648827    5644 command_runner.go:130] ! I0210 12:21:58.048362       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0210 12:23:06.648827    5644 command_runner.go:130] ! I0210 12:21:58.048918       1 customresource_discovery_controller.go:292] Starting DiscoveryController
	I0210 12:23:06.648827    5644 command_runner.go:130] ! I0210 12:21:58.049825       1 cluster_authentication_trust_controller.go:462] Starting cluster_authentication_trust_controller controller
	I0210 12:23:06.648827    5644 command_runner.go:130] ! I0210 12:21:58.049971       1 shared_informer.go:313] Waiting for caches to sync for cluster_authentication_trust_controller
	I0210 12:23:06.648827    5644 command_runner.go:130] ! I0210 12:21:58.052014       1 local_available_controller.go:156] Starting LocalAvailability controller
	I0210 12:23:06.648827    5644 command_runner.go:130] ! I0210 12:21:58.052237       1 cache.go:32] Waiting for caches to sync for LocalAvailability controller
	I0210 12:23:06.648827    5644 command_runner.go:130] ! I0210 12:21:58.052355       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0210 12:23:06.648827    5644 command_runner.go:130] ! I0210 12:21:58.052595       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0210 12:23:06.648827    5644 command_runner.go:130] ! I0210 12:21:58.052911       1 controller.go:78] Starting OpenAPI AggregationController
	I0210 12:23:06.648827    5644 command_runner.go:130] ! I0210 12:21:58.053131       1 controller.go:119] Starting legacy_token_tracking_controller
	I0210 12:23:06.648827    5644 command_runner.go:130] ! I0210 12:21:58.053221       1 shared_informer.go:313] Waiting for caches to sync for configmaps
	I0210 12:23:06.648827    5644 command_runner.go:130] ! I0210 12:21:58.053335       1 apf_controller.go:377] Starting API Priority and Fairness config controller
	I0210 12:23:06.648827    5644 command_runner.go:130] ! I0210 12:21:58.053483       1 remote_available_controller.go:411] Starting RemoteAvailability controller
	I0210 12:23:06.648827    5644 command_runner.go:130] ! I0210 12:21:58.053515       1 cache.go:32] Waiting for caches to sync for RemoteAvailability controller
	I0210 12:23:06.648827    5644 command_runner.go:130] ! I0210 12:21:58.053696       1 dynamic_serving_content.go:135] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0210 12:23:06.648827    5644 command_runner.go:130] ! I0210 12:21:58.054087       1 system_namespaces_controller.go:66] Starting system namespaces controller
	I0210 12:23:06.648827    5644 command_runner.go:130] ! I0210 12:21:58.054528       1 apiservice_controller.go:100] Starting APIServiceRegistrationController
	I0210 12:23:06.648827    5644 command_runner.go:130] ! I0210 12:21:58.054570       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0210 12:23:06.648827    5644 command_runner.go:130] ! I0210 12:21:58.054742       1 aggregator.go:169] waiting for initial CRD sync...
	I0210 12:23:06.648827    5644 command_runner.go:130] ! I0210 12:21:58.055217       1 controller.go:142] Starting OpenAPI controller
	I0210 12:23:06.648827    5644 command_runner.go:130] ! I0210 12:21:58.055546       1 controller.go:90] Starting OpenAPI V3 controller
	I0210 12:23:06.648827    5644 command_runner.go:130] ! I0210 12:21:58.055757       1 naming_controller.go:294] Starting NamingConditionController
	I0210 12:23:06.648827    5644 command_runner.go:130] ! I0210 12:21:58.056074       1 establishing_controller.go:81] Starting EstablishingController
	I0210 12:23:06.648827    5644 command_runner.go:130] ! I0210 12:21:58.056264       1 nonstructuralschema_controller.go:195] Starting NonStructuralSchemaConditionController
	I0210 12:23:06.648827    5644 command_runner.go:130] ! I0210 12:21:58.056315       1 apiapproval_controller.go:189] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0210 12:23:06.648827    5644 command_runner.go:130] ! I0210 12:21:58.056330       1 crd_finalizer.go:269] Starting CRDFinalizer
	I0210 12:23:06.648827    5644 command_runner.go:130] ! I0210 12:21:58.056364       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0210 12:23:06.648827    5644 command_runner.go:130] ! I0210 12:21:58.056531       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0210 12:23:06.648827    5644 command_runner.go:130] ! I0210 12:21:58.082011       1 crdregistration_controller.go:114] Starting crd-autoregister controller
	I0210 12:23:06.648827    5644 command_runner.go:130] ! I0210 12:21:58.082050       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0210 12:23:06.648827    5644 command_runner.go:130] ! I0210 12:21:58.191638       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0210 12:23:06.648827    5644 command_runner.go:130] ! I0210 12:21:58.191858       1 policy_source.go:240] refreshing policies
	I0210 12:23:06.649606    5644 command_runner.go:130] ! I0210 12:21:58.208845       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0210 12:23:06.649606    5644 command_runner.go:130] ! I0210 12:21:58.215564       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0210 12:23:06.649606    5644 command_runner.go:130] ! I0210 12:21:58.251691       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0210 12:23:06.649606    5644 command_runner.go:130] ! I0210 12:21:58.252469       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0210 12:23:06.649606    5644 command_runner.go:130] ! I0210 12:21:58.253733       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0210 12:23:06.649606    5644 command_runner.go:130] ! I0210 12:21:58.254978       1 shared_informer.go:320] Caches are synced for configmaps
	I0210 12:23:06.649606    5644 command_runner.go:130] ! I0210 12:21:58.255116       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0210 12:23:06.649606    5644 command_runner.go:130] ! I0210 12:21:58.255193       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0210 12:23:06.649606    5644 command_runner.go:130] ! I0210 12:21:58.258943       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0210 12:23:06.649606    5644 command_runner.go:130] ! I0210 12:21:58.265934       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0210 12:23:06.649606    5644 command_runner.go:130] ! I0210 12:21:58.282090       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0210 12:23:06.649606    5644 command_runner.go:130] ! I0210 12:21:58.282373       1 aggregator.go:171] initial CRD sync complete...
	I0210 12:23:06.649606    5644 command_runner.go:130] ! I0210 12:21:58.282566       1 autoregister_controller.go:144] Starting autoregister controller
	I0210 12:23:06.649606    5644 command_runner.go:130] ! I0210 12:21:58.282687       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0210 12:23:06.649606    5644 command_runner.go:130] ! I0210 12:21:58.282810       1 cache.go:39] Caches are synced for autoregister controller
	I0210 12:23:06.649606    5644 command_runner.go:130] ! I0210 12:21:59.040333       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0210 12:23:06.649606    5644 command_runner.go:130] ! I0210 12:21:59.110768       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0210 12:23:06.649606    5644 command_runner.go:130] ! W0210 12:21:59.681925       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.29.129.181]
	I0210 12:23:06.649606    5644 command_runner.go:130] ! I0210 12:21:59.683607       1 controller.go:615] quota admission added evaluator for: endpoints
	I0210 12:23:06.649606    5644 command_runner.go:130] ! I0210 12:21:59.696506       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0210 12:23:06.649606    5644 command_runner.go:130] ! I0210 12:22:01.024466       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0210 12:23:06.649606    5644 command_runner.go:130] ! I0210 12:22:01.275329       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0210 12:23:06.649606    5644 command_runner.go:130] ! I0210 12:22:01.581954       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0210 12:23:06.649606    5644 command_runner.go:130] ! I0210 12:22:01.624349       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0210 12:23:06.649606    5644 command_runner.go:130] ! I0210 12:22:01.647929       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0210 12:23:06.658348    5644 logs.go:123] Gathering logs for etcd [2c0b97381825] ...
	I0210 12:23:06.658420    5644 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c0b97381825"
	I0210 12:23:06.686206    5644 command_runner.go:130] ! {"level":"warn","ts":"2025-02-10T12:21:54.704341Z","caller":"embed/config.go:689","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0210 12:23:06.686508    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.704447Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://172.29.129.181:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://172.29.129.181:2380","--initial-cluster=multinode-032400=https://172.29.129.181:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://172.29.129.181:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://172.29.129.181:2380","--name=multinode-032400","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	I0210 12:23:06.686545    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.704520Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I0210 12:23:06.686598    5644 command_runner.go:130] ! {"level":"warn","ts":"2025-02-10T12:21:54.704892Z","caller":"embed/config.go:689","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0210 12:23:06.686659    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.704933Z","caller":"embed/etcd.go:128","msg":"configuring peer listeners","listen-peer-urls":["https://172.29.129.181:2380"]}
	I0210 12:23:06.686659    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.704972Z","caller":"embed/etcd.go:497","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0210 12:23:06.686659    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.708617Z","caller":"embed/etcd.go:136","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://172.29.129.181:2379"]}
	I0210 12:23:06.686659    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.709796Z","caller":"embed/etcd.go:311","msg":"starting an etcd server","etcd-version":"3.5.16","git-sha":"f20bbad","go-version":"go1.22.7","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"multinode-032400","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://172.29.129.181:2380"],"listen-peer-urls":["https://172.29.129.181:2380"],"advertise-client-urls":["https://172.29.129.181:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.29.129.181:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	I0210 12:23:06.686659    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.729354Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"16.974017ms"}
	I0210 12:23:06.686659    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.755049Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	I0210 12:23:06.686659    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.785036Z","caller":"etcdserver/raft.go:540","msg":"restarting local member","cluster-id":"939f48ba2d0ec869","local-member-id":"9ecc865dcee1fe8f","commit-index":2031}
	I0210 12:23:06.686659    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.786308Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9ecc865dcee1fe8f switched to configuration voters=()"}
	I0210 12:23:06.686659    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.786538Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9ecc865dcee1fe8f became follower at term 2"}
	I0210 12:23:06.686659    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.786684Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 9ecc865dcee1fe8f [peers: [], term: 2, commit: 2031, applied: 0, lastindex: 2031, lastterm: 2]"}
	I0210 12:23:06.686659    5644 command_runner.go:130] ! {"level":"warn","ts":"2025-02-10T12:21:54.799505Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	I0210 12:23:06.686659    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.805220Z","caller":"mvcc/kvstore.go:346","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":1385}
	I0210 12:23:06.686659    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.819723Z","caller":"mvcc/kvstore.go:423","msg":"kvstore restored","current-rev":1757}
	I0210 12:23:06.686659    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.831867Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I0210 12:23:06.686659    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.839898Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"9ecc865dcee1fe8f","timeout":"7s"}
	I0210 12:23:06.686659    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.841678Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"9ecc865dcee1fe8f"}
	I0210 12:23:06.686659    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.841933Z","caller":"etcdserver/server.go:873","msg":"starting etcd server","local-member-id":"9ecc865dcee1fe8f","local-server-version":"3.5.16","cluster-version":"to_be_decided"}
	I0210 12:23:06.686659    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.842749Z","caller":"etcdserver/server.go:773","msg":"starting initial election tick advance","election-ticks":10}
	I0210 12:23:06.687205    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.844230Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	I0210 12:23:06.687246    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.846545Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0210 12:23:06.687297    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.847568Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"9ecc865dcee1fe8f","initial-advertise-peer-urls":["https://172.29.129.181:2380"],"listen-peer-urls":["https://172.29.129.181:2380"],"advertise-client-urls":["https://172.29.129.181:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.29.129.181:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I0210 12:23:06.687329    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.848293Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I0210 12:23:06.687358    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.848857Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I0210 12:23:06.687358    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.848872Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I0210 12:23:06.687358    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.849451Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I0210 12:23:06.687358    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.848623Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9ecc865dcee1fe8f switched to configuration voters=(11442668490702585487)"}
	I0210 12:23:06.687358    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.849568Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"939f48ba2d0ec869","local-member-id":"9ecc865dcee1fe8f","added-peer-id":"9ecc865dcee1fe8f","added-peer-peer-urls":["https://172.29.136.201:2380"]}
	I0210 12:23:06.687358    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.849668Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"939f48ba2d0ec869","local-member-id":"9ecc865dcee1fe8f","cluster-version":"3.5"}
	I0210 12:23:06.687358    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.849697Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	I0210 12:23:06.687358    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.848659Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"172.29.129.181:2380"}
	I0210 12:23:06.687358    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.850838Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"172.29.129.181:2380"}
	I0210 12:23:06.687358    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:56.088224Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9ecc865dcee1fe8f is starting a new election at term 2"}
	I0210 12:23:06.687358    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:56.088502Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9ecc865dcee1fe8f became pre-candidate at term 2"}
	I0210 12:23:06.687358    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:56.088614Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9ecc865dcee1fe8f received MsgPreVoteResp from 9ecc865dcee1fe8f at term 2"}
	I0210 12:23:06.687358    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:56.088706Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9ecc865dcee1fe8f became candidate at term 3"}
	I0210 12:23:06.687358    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:56.088790Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9ecc865dcee1fe8f received MsgVoteResp from 9ecc865dcee1fe8f at term 3"}
	I0210 12:23:06.687358    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:56.088924Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9ecc865dcee1fe8f became leader at term 3"}
	I0210 12:23:06.687358    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:56.089002Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9ecc865dcee1fe8f elected leader 9ecc865dcee1fe8f at term 3"}
	I0210 12:23:06.687358    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:56.095452Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"9ecc865dcee1fe8f","local-member-attributes":"{Name:multinode-032400 ClientURLs:[https://172.29.129.181:2379]}","request-path":"/0/members/9ecc865dcee1fe8f/attributes","cluster-id":"939f48ba2d0ec869","publish-timeout":"7s"}
	I0210 12:23:06.687358    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:56.095630Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0210 12:23:06.687358    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:56.098215Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I0210 12:23:06.687358    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:56.098453Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I0210 12:23:06.687358    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:56.095689Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0210 12:23:06.687911    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:56.109624Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	I0210 12:23:06.687911    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:56.115942Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	I0210 12:23:06.687970    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:56.120127Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	I0210 12:23:06.687970    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:56.126374Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.29.129.181:2379"}
	I0210 12:23:06.695037    5644 logs.go:123] Gathering logs for kube-controller-manager [9408ce83d7d3] ...
	I0210 12:23:06.695037    5644 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9408ce83d7d3"
	I0210 12:23:06.735927    5644 command_runner.go:130] ! I0210 11:58:59.087911       1 serving.go:386] Generated self-signed cert in-memory
	I0210 12:23:06.736011    5644 command_runner.go:130] ! I0210 11:59:00.079684       1 controllermanager.go:185] "Starting" version="v1.32.1"
	I0210 12:23:06.736011    5644 command_runner.go:130] ! I0210 11:59:00.079828       1 controllermanager.go:187] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0210 12:23:06.736011    5644 command_runner.go:130] ! I0210 11:59:00.082257       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0210 12:23:06.736011    5644 command_runner.go:130] ! I0210 11:59:00.082445       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0210 12:23:06.736011    5644 command_runner.go:130] ! I0210 11:59:00.082714       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0210 12:23:06.736011    5644 command_runner.go:130] ! I0210 11:59:00.083168       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0210 12:23:06.736011    5644 command_runner.go:130] ! I0210 11:59:07.525093       1 controllermanager.go:765] "Started controller" controller="serviceaccount-token-controller"
	I0210 12:23:06.736011    5644 command_runner.go:130] ! I0210 11:59:07.525455       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0210 12:23:06.736011    5644 command_runner.go:130] ! I0210 11:59:07.550577       1 controllermanager.go:765] "Started controller" controller="ttl-controller"
	I0210 12:23:06.736011    5644 command_runner.go:130] ! I0210 11:59:07.550894       1 ttl_controller.go:127] "Starting TTL controller" logger="ttl-controller"
	I0210 12:23:06.736011    5644 command_runner.go:130] ! I0210 11:59:07.550923       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0210 12:23:06.736011    5644 command_runner.go:130] ! I0210 11:59:07.575286       1 controllermanager.go:765] "Started controller" controller="token-cleaner-controller"
	I0210 12:23:06.736011    5644 command_runner.go:130] ! I0210 11:59:07.575386       1 tokencleaner.go:117] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0210 12:23:06.736011    5644 command_runner.go:130] ! I0210 11:59:07.575519       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0210 12:23:06.736011    5644 command_runner.go:130] ! I0210 11:59:07.575529       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0210 12:23:06.736011    5644 command_runner.go:130] ! I0210 11:59:07.608411       1 controllermanager.go:765] "Started controller" controller="ephemeral-volume-controller"
	I0210 12:23:06.736011    5644 command_runner.go:130] ! I0210 11:59:07.608435       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0210 12:23:06.736011    5644 command_runner.go:130] ! I0210 11:59:07.608574       1 controller.go:173] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0210 12:23:06.736011    5644 command_runner.go:130] ! I0210 11:59:07.608594       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0210 12:23:06.736011    5644 command_runner.go:130] ! I0210 11:59:07.626624       1 shared_informer.go:320] Caches are synced for tokens
	I0210 12:23:06.736011    5644 command_runner.go:130] ! I0210 11:59:07.632106       1 controllermanager.go:765] "Started controller" controller="job-controller"
	I0210 12:23:06.736011    5644 command_runner.go:130] ! I0210 11:59:07.632319       1 job_controller.go:243] "Starting job controller" logger="job-controller"
	I0210 12:23:06.736011    5644 command_runner.go:130] ! I0210 11:59:07.632332       1 shared_informer.go:313] Waiting for caches to sync for job
	I0210 12:23:06.736011    5644 command_runner.go:130] ! I0210 11:59:07.694202       1 controllermanager.go:765] "Started controller" controller="cronjob-controller"
	I0210 12:23:06.736011    5644 command_runner.go:130] ! I0210 11:59:07.694994       1 cronjob_controllerv2.go:145] "Starting cronjob controller v2" logger="cronjob-controller"
	I0210 12:23:06.736011    5644 command_runner.go:130] ! I0210 11:59:07.697650       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0210 12:23:06.736011    5644 command_runner.go:130] ! I0210 11:59:07.765406       1 controllermanager.go:765] "Started controller" controller="deployment-controller"
	I0210 12:23:06.736011    5644 command_runner.go:130] ! I0210 11:59:07.765979       1 deployment_controller.go:173] "Starting controller" logger="deployment-controller" controller="deployment"
	I0210 12:23:06.736011    5644 command_runner.go:130] ! I0210 11:59:07.765997       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0210 12:23:06.736011    5644 command_runner.go:130] ! I0210 11:59:07.782342       1 controllermanager.go:765] "Started controller" controller="replicaset-controller"
	I0210 12:23:06.736011    5644 command_runner.go:130] ! I0210 11:59:07.782670       1 replica_set.go:217] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0210 12:23:06.736011    5644 command_runner.go:130] ! I0210 11:59:07.782685       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0210 12:23:06.736011    5644 command_runner.go:130] ! I0210 11:59:07.850466       1 controllermanager.go:765] "Started controller" controller="disruption-controller"
	I0210 12:23:06.736011    5644 command_runner.go:130] ! I0210 11:59:07.850651       1 controllermanager.go:723] "Skipping a cloud provider controller" controller="node-route-controller"
	I0210 12:23:06.736011    5644 command_runner.go:130] ! I0210 11:59:07.850629       1 disruption.go:452] "Sending events to api server." logger="disruption-controller"
	I0210 12:23:06.736011    5644 command_runner.go:130] ! I0210 11:59:07.850833       1 disruption.go:463] "Starting disruption controller" logger="disruption-controller"
	I0210 12:23:06.736011    5644 command_runner.go:130] ! I0210 11:59:07.850844       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0210 12:23:06.736011    5644 command_runner.go:130] ! I0210 11:59:07.880892       1 controllermanager.go:765] "Started controller" controller="persistentvolume-binder-controller"
	I0210 12:23:06.736011    5644 command_runner.go:130] ! I0210 11:59:07.881116       1 pv_controller_base.go:308] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0210 12:23:06.736011    5644 command_runner.go:130] ! I0210 11:59:07.881129       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0210 12:23:06.736577    5644 command_runner.go:130] ! I0210 11:59:07.930262       1 controllermanager.go:765] "Started controller" controller="clusterrole-aggregation-controller"
	I0210 12:23:06.736577    5644 command_runner.go:130] ! I0210 11:59:07.930372       1 clusterroleaggregation_controller.go:194] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0210 12:23:06.736577    5644 command_runner.go:130] ! I0210 11:59:07.930897       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0210 12:23:06.736577    5644 command_runner.go:130] ! I0210 11:59:07.945659       1 controllermanager.go:765] "Started controller" controller="pod-garbage-collector-controller"
	I0210 12:23:06.736667    5644 command_runner.go:130] ! I0210 11:59:07.946579       1 gc_controller.go:99] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0210 12:23:06.736667    5644 command_runner.go:130] ! I0210 11:59:07.946751       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0210 12:23:06.736667    5644 command_runner.go:130] ! I0210 11:59:07.997690       1 controllermanager.go:765] "Started controller" controller="namespace-controller"
	I0210 12:23:06.736667    5644 command_runner.go:130] ! I0210 11:59:07.998189       1 controllermanager.go:723] "Skipping a cloud provider controller" controller="cloud-node-lifecycle-controller"
	I0210 12:23:06.736667    5644 command_runner.go:130] ! I0210 11:59:07.997759       1 namespace_controller.go:202] "Starting namespace controller" logger="namespace-controller"
	I0210 12:23:06.736667    5644 command_runner.go:130] ! I0210 11:59:07.998323       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0210 12:23:06.736745    5644 command_runner.go:130] ! I0210 11:59:08.135040       1 controllermanager.go:765] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0210 12:23:06.736745    5644 command_runner.go:130] ! I0210 11:59:08.135118       1 pvc_protection_controller.go:168] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0210 12:23:06.736745    5644 command_runner.go:130] ! I0210 11:59:08.135130       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0210 12:23:06.736745    5644 command_runner.go:130] ! I0210 11:59:08.290937       1 controllermanager.go:765] "Started controller" controller="persistentvolume-protection-controller"
	I0210 12:23:06.736822    5644 command_runner.go:130] ! I0210 11:59:08.291080       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="kube-apiserver-serving-clustertrustbundle-publisher-controller" requiredFeatureGates=["ClusterTrustBundle"]
	I0210 12:23:06.736822    5644 command_runner.go:130] ! I0210 11:59:08.293569       1 pv_protection_controller.go:81] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0210 12:23:06.736822    5644 command_runner.go:130] ! I0210 11:59:08.293594       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0210 12:23:06.736899    5644 command_runner.go:130] ! I0210 11:59:08.435030       1 controllermanager.go:765] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0210 12:23:06.736899    5644 command_runner.go:130] ! I0210 11:59:08.435146       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0210 12:23:06.736899    5644 command_runner.go:130] ! I0210 11:59:08.435984       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0210 12:23:06.736963    5644 command_runner.go:130] ! I0210 11:59:08.742172       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0210 12:23:06.736963    5644 command_runner.go:130] ! I0210 11:59:08.742257       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0210 12:23:06.736963    5644 command_runner.go:130] ! I0210 11:59:08.742274       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0210 12:23:06.737028    5644 command_runner.go:130] ! I0210 11:59:08.742293       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0210 12:23:06.737028    5644 command_runner.go:130] ! I0210 11:59:08.742308       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0210 12:23:06.737028    5644 command_runner.go:130] ! I0210 11:59:08.742326       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0210 12:23:06.737095    5644 command_runner.go:130] ! I0210 11:59:08.742346       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0210 12:23:06.737095    5644 command_runner.go:130] ! I0210 11:59:08.742463       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0210 12:23:06.737095    5644 command_runner.go:130] ! I0210 11:59:08.742481       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0210 12:23:06.737163    5644 command_runner.go:130] ! I0210 11:59:08.742527       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0210 12:23:06.737163    5644 command_runner.go:130] ! I0210 11:59:08.742551       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0210 12:23:06.737163    5644 command_runner.go:130] ! I0210 11:59:08.742568       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0210 12:23:06.737233    5644 command_runner.go:130] ! I0210 11:59:08.742584       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0210 12:23:06.737233    5644 command_runner.go:130] ! W0210 11:59:08.742597       1 shared_informer.go:597] resyncPeriod 20h8m15.80202588s is smaller than resyncCheckPeriod 22h17m54.981661418s and the informer has already started. Changing it to 22h17m54.981661418s
	I0210 12:23:06.737233    5644 command_runner.go:130] ! I0210 11:59:08.742631       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0210 12:23:06.737296    5644 command_runner.go:130] ! I0210 11:59:08.742652       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0210 12:23:06.737296    5644 command_runner.go:130] ! I0210 11:59:08.742674       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0210 12:23:06.737358    5644 command_runner.go:130] ! W0210 11:59:08.742683       1 shared_informer.go:597] resyncPeriod 18h34m58.865598394s is smaller than resyncCheckPeriod 22h17m54.981661418s and the informer has already started. Changing it to 22h17m54.981661418s
	I0210 12:23:06.737387    5644 command_runner.go:130] ! I0210 11:59:08.742710       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0210 12:23:06.737387    5644 command_runner.go:130] ! I0210 11:59:08.742733       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0210 12:23:06.737424    5644 command_runner.go:130] ! I0210 11:59:08.742757       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0210 12:23:06.737463    5644 command_runner.go:130] ! I0210 11:59:08.742786       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0210 12:23:06.737497    5644 command_runner.go:130] ! I0210 11:59:08.742950       1 controllermanager.go:765] "Started controller" controller="resourcequota-controller"
	I0210 12:23:06.737497    5644 command_runner.go:130] ! I0210 11:59:08.743011       1 resource_quota_controller.go:300] "Starting resource quota controller" logger="resourcequota-controller"
	I0210 12:23:06.737536    5644 command_runner.go:130] ! I0210 11:59:08.743022       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0210 12:23:06.737536    5644 command_runner.go:130] ! I0210 11:59:08.743050       1 resource_quota_monitor.go:308] "QuotaMonitor running" logger="resourcequota-controller"
	I0210 12:23:06.737570    5644 command_runner.go:130] ! I0210 11:59:08.897782       1 controllermanager.go:765] "Started controller" controller="daemonset-controller"
	I0210 12:23:06.737570    5644 command_runner.go:130] ! I0210 11:59:08.898567       1 daemon_controller.go:294] "Starting daemon sets controller" logger="daemonset-controller"
	I0210 12:23:06.737608    5644 command_runner.go:130] ! I0210 11:59:08.898750       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0210 12:23:06.737608    5644 command_runner.go:130] ! W0210 11:59:09.538965       1 type.go:183] The watchlist request for nodes ended with an error, falling back to the standard LIST semantics, err = nodes is forbidden: User "system:serviceaccount:kube-system:node-controller" cannot watch resource "nodes" in API group "" at the cluster scope
	I0210 12:23:06.737656    5644 command_runner.go:130] ! I0210 11:59:09.557948       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0210 12:23:06.737656    5644 command_runner.go:130] ! I0210 11:59:09.558013       1 controllermanager.go:765] "Started controller" controller="node-ipam-controller"
	I0210 12:23:06.737656    5644 command_runner.go:130] ! I0210 11:59:09.558024       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0210 12:23:06.737719    5644 command_runner.go:130] ! I0210 11:59:09.558263       1 node_ipam_controller.go:141] "Starting ipam controller" logger="node-ipam-controller"
	I0210 12:23:06.737719    5644 command_runner.go:130] ! I0210 11:59:09.558274       1 shared_informer.go:313] Waiting for caches to sync for node
	I0210 12:23:06.737719    5644 command_runner.go:130] ! I0210 11:59:09.587543       1 controllermanager.go:765] "Started controller" controller="serviceaccount-controller"
	I0210 12:23:06.737719    5644 command_runner.go:130] ! I0210 11:59:09.587843       1 serviceaccounts_controller.go:114] "Starting service account controller" logger="serviceaccount-controller"
	I0210 12:23:06.737792    5644 command_runner.go:130] ! I0210 11:59:09.587861       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0210 12:23:06.737792    5644 command_runner.go:130] ! I0210 11:59:09.635254       1 garbagecollector.go:144] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0210 12:23:06.737792    5644 command_runner.go:130] ! I0210 11:59:09.635299       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0210 12:23:06.737792    5644 command_runner.go:130] ! I0210 11:59:09.635329       1 graph_builder.go:351] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0210 12:23:06.737855    5644 command_runner.go:130] ! I0210 11:59:09.636160       1 controllermanager.go:765] "Started controller" controller="garbage-collector-controller"
	I0210 12:23:06.737855    5644 command_runner.go:130] ! I0210 11:59:09.814593       1 controllermanager.go:765] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0210 12:23:06.737855    5644 command_runner.go:130] ! I0210 11:59:09.814752       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0210 12:23:06.737855    5644 command_runner.go:130] ! I0210 11:59:09.814770       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0210 12:23:06.737926    5644 command_runner.go:130] ! I0210 11:59:09.817088       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0210 12:23:06.737926    5644 command_runner.go:130] ! I0210 11:59:09.817114       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0210 12:23:06.737926    5644 command_runner.go:130] ! I0210 11:59:09.817159       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0210 12:23:06.737985    5644 command_runner.go:130] ! I0210 11:59:09.817166       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0210 12:23:06.737985    5644 command_runner.go:130] ! I0210 11:59:09.817276       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0210 12:23:06.737985    5644 command_runner.go:130] ! I0210 11:59:09.817288       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0210 12:23:06.738048    5644 command_runner.go:130] ! I0210 11:59:09.817325       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0210 12:23:06.738048    5644 command_runner.go:130] ! I0210 11:59:09.817457       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0210 12:23:06.738048    5644 command_runner.go:130] ! I0210 11:59:09.817598       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0210 12:23:06.738115    5644 command_runner.go:130] ! I0210 11:59:09.817777       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0210 12:23:06.738115    5644 command_runner.go:130] ! I0210 11:59:09.873976       1 controllermanager.go:765] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0210 12:23:06.738115    5644 command_runner.go:130] ! I0210 11:59:09.874097       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0210 12:23:06.738115    5644 command_runner.go:130] ! I0210 11:59:09.874114       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0210 12:23:06.738182    5644 command_runner.go:130] ! I0210 11:59:10.010350       1 controllermanager.go:765] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0210 12:23:06.738182    5644 command_runner.go:130] ! I0210 11:59:10.010713       1 controllermanager.go:743] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0210 12:23:06.738246    5644 command_runner.go:130] ! I0210 11:59:10.010555       1 attach_detach_controller.go:338] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0210 12:23:06.738246    5644 command_runner.go:130] ! I0210 11:59:10.010999       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0210 12:23:06.738246    5644 command_runner.go:130] ! I0210 11:59:10.148245       1 controllermanager.go:765] "Started controller" controller="endpoints-controller"
	I0210 12:23:06.738312    5644 command_runner.go:130] ! I0210 11:59:10.148336       1 endpoints_controller.go:182] "Starting endpoint controller" logger="endpoints-controller"
	I0210 12:23:06.738312    5644 command_runner.go:130] ! I0210 11:59:10.148619       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0210 12:23:06.738312    5644 command_runner.go:130] ! I0210 11:59:10.294135       1 controllermanager.go:765] "Started controller" controller="statefulset-controller"
	I0210 12:23:06.738312    5644 command_runner.go:130] ! I0210 11:59:10.294378       1 stateful_set.go:166] "Starting stateful set controller" logger="statefulset-controller"
	I0210 12:23:06.738312    5644 command_runner.go:130] ! I0210 11:59:10.294395       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0210 12:23:06.738382    5644 command_runner.go:130] ! I0210 11:59:10.455757       1 controllermanager.go:765] "Started controller" controller="persistentvolume-expander-controller"
	I0210 12:23:06.738382    5644 command_runner.go:130] ! I0210 11:59:10.456357       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0210 12:23:06.738382    5644 command_runner.go:130] ! I0210 11:59:10.456388       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0210 12:23:06.738382    5644 command_runner.go:130] ! I0210 11:59:10.617918       1 controllermanager.go:765] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0210 12:23:06.738446    5644 command_runner.go:130] ! I0210 11:59:10.618004       1 publisher.go:107] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0210 12:23:06.738446    5644 command_runner.go:130] ! I0210 11:59:10.618017       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0210 12:23:06.738446    5644 command_runner.go:130] ! I0210 11:59:10.630001       1 controllermanager.go:765] "Started controller" controller="taint-eviction-controller"
	I0210 12:23:06.738512    5644 command_runner.go:130] ! I0210 11:59:10.630344       1 taint_eviction.go:281] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0210 12:23:06.738512    5644 command_runner.go:130] ! I0210 11:59:10.630739       1 taint_eviction.go:287] "Sending events to api server" logger="taint-eviction-controller"
	I0210 12:23:06.738512    5644 command_runner.go:130] ! I0210 11:59:10.630915       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0210 12:23:06.738512    5644 command_runner.go:130] ! I0210 11:59:10.683156       1 node_lifecycle_controller.go:432] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0210 12:23:06.738577    5644 command_runner.go:130] ! I0210 11:59:10.683264       1 controllermanager.go:765] "Started controller" controller="node-lifecycle-controller"
	I0210 12:23:06.738604    5644 command_runner.go:130] ! I0210 11:59:10.683357       1 node_lifecycle_controller.go:466] "Sending events to api server" logger="node-lifecycle-controller"
	I0210 12:23:06.738604    5644 command_runner.go:130] ! I0210 11:59:10.683709       1 node_lifecycle_controller.go:477] "Starting node controller" logger="node-lifecycle-controller"
	I0210 12:23:06.738604    5644 command_runner.go:130] ! I0210 11:59:10.683833       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0210 12:23:06.738604    5644 command_runner.go:130] ! I0210 11:59:10.764503       1 controllermanager.go:765] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0210 12:23:06.738673    5644 command_runner.go:130] ! I0210 11:59:10.764626       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0210 12:23:06.738673    5644 command_runner.go:130] ! I0210 11:59:10.893425       1 controllermanager.go:765] "Started controller" controller="bootstrap-signer-controller"
	I0210 12:23:06.738673    5644 command_runner.go:130] ! I0210 11:59:10.893535       1 controllermanager.go:723] "Skipping a cloud provider controller" controller="service-lb-controller"
	I0210 12:23:06.738673    5644 command_runner.go:130] ! I0210 11:59:10.893547       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="volumeattributesclass-protection-controller" requiredFeatureGates=["VolumeAttributesClass"]
	I0210 12:23:06.738742    5644 command_runner.go:130] ! I0210 11:59:10.893637       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0210 12:23:06.738742    5644 command_runner.go:130] ! I0210 11:59:11.207689       1 controllermanager.go:765] "Started controller" controller="ttl-after-finished-controller"
	I0210 12:23:06.738742    5644 command_runner.go:130] ! I0210 11:59:11.207720       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0210 12:23:06.738807    5644 command_runner.go:130] ! I0210 11:59:11.208285       1 ttlafterfinished_controller.go:112] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0210 12:23:06.738807    5644 command_runner.go:130] ! I0210 11:59:11.208325       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0210 12:23:06.738807    5644 command_runner.go:130] ! I0210 11:59:11.268236       1 controllermanager.go:765] "Started controller" controller="endpointslice-mirroring-controller"
	I0210 12:23:06.738874    5644 command_runner.go:130] ! I0210 11:59:11.268441       1 endpointslicemirroring_controller.go:227] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0210 12:23:06.738874    5644 command_runner.go:130] ! I0210 11:59:11.268458       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0210 12:23:06.738874    5644 command_runner.go:130] ! I0210 11:59:11.834451       1 controllermanager.go:765] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0210 12:23:06.738874    5644 command_runner.go:130] ! I0210 11:59:11.839072       1 horizontal.go:201] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0210 12:23:06.738874    5644 command_runner.go:130] ! I0210 11:59:11.839109       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0210 12:23:06.738941    5644 command_runner.go:130] ! I0210 11:59:11.954065       1 controllermanager.go:765] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0210 12:23:06.738941    5644 command_runner.go:130] ! I0210 11:59:11.954564       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="selinux-warning-controller" requiredFeatureGates=["SELinuxChangePolicy"]
	I0210 12:23:06.738941    5644 command_runner.go:130] ! I0210 11:59:11.954191       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0210 12:23:06.739009    5644 command_runner.go:130] ! I0210 11:59:11.971728       1 controllermanager.go:765] "Started controller" controller="endpointslice-controller"
	I0210 12:23:06.739009    5644 command_runner.go:130] ! I0210 11:59:11.972266       1 endpointslice_controller.go:281] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0210 12:23:06.739009    5644 command_runner.go:130] ! I0210 11:59:11.972442       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0210 12:23:06.739009    5644 command_runner.go:130] ! I0210 11:59:11.988553       1 controllermanager.go:765] "Started controller" controller="replicationcontroller-controller"
	I0210 12:23:06.739075    5644 command_runner.go:130] ! I0210 11:59:11.989935       1 replica_set.go:217] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0210 12:23:06.739075    5644 command_runner.go:130] ! I0210 11:59:11.990037       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0210 12:23:06.739075    5644 command_runner.go:130] ! I0210 11:59:12.002658       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0210 12:23:06.739141    5644 command_runner.go:130] ! I0210 11:59:12.026212       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0210 12:23:06.739141    5644 command_runner.go:130] ! I0210 11:59:12.053411       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-032400\" does not exist"
	I0210 12:23:06.739141    5644 command_runner.go:130] ! I0210 11:59:12.059575       1 shared_informer.go:320] Caches are synced for node
	I0210 12:23:06.739206    5644 command_runner.go:130] ! I0210 11:59:12.059677       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0210 12:23:06.739206    5644 command_runner.go:130] ! I0210 11:59:12.060669       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0210 12:23:06.739206    5644 command_runner.go:130] ! I0210 11:59:12.060694       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0210 12:23:06.739206    5644 command_runner.go:130] ! I0210 11:59:12.060736       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0210 12:23:06.739206    5644 command_runner.go:130] ! I0210 11:59:12.075788       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0210 12:23:06.739280    5644 command_runner.go:130] ! I0210 11:59:12.090277       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0210 12:23:06.739280    5644 command_runner.go:130] ! I0210 11:59:12.093866       1 shared_informer.go:320] Caches are synced for PV protection
	I0210 12:23:06.739280    5644 command_runner.go:130] ! I0210 11:59:12.094251       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-032400" podCIDRs=["10.244.0.0/24"]
	I0210 12:23:06.739280    5644 command_runner.go:130] ! I0210 11:59:12.094298       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400"
	I0210 12:23:06.739348    5644 command_runner.go:130] ! I0210 11:59:12.094445       1 shared_informer.go:320] Caches are synced for stateful set
	I0210 12:23:06.739348    5644 command_runner.go:130] ! I0210 11:59:12.094647       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400"
	I0210 12:23:06.739348    5644 command_runner.go:130] ! I0210 11:59:12.094787       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0210 12:23:06.739348    5644 command_runner.go:130] ! I0210 11:59:12.098777       1 shared_informer.go:320] Caches are synced for cronjob
	I0210 12:23:06.739423    5644 command_runner.go:130] ! I0210 11:59:12.099001       1 shared_informer.go:320] Caches are synced for namespace
	I0210 12:23:06.739423    5644 command_runner.go:130] ! I0210 11:59:12.099016       1 shared_informer.go:320] Caches are synced for daemon sets
	I0210 12:23:06.739423    5644 command_runner.go:130] ! I0210 11:59:12.103407       1 shared_informer.go:320] Caches are synced for resource quota
	I0210 12:23:06.739423    5644 command_runner.go:130] ! I0210 11:59:12.108852       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0210 12:23:06.739423    5644 command_runner.go:130] ! I0210 11:59:12.108917       1 shared_informer.go:320] Caches are synced for ephemeral
	I0210 12:23:06.739495    5644 command_runner.go:130] ! I0210 11:59:12.111199       1 shared_informer.go:320] Caches are synced for attach detach
	I0210 12:23:06.739495    5644 command_runner.go:130] ! I0210 11:59:12.115876       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0210 12:23:06.739495    5644 command_runner.go:130] ! I0210 11:59:12.117732       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0210 12:23:06.739495    5644 command_runner.go:130] ! I0210 11:59:12.117858       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0210 12:23:06.739569    5644 command_runner.go:130] ! I0210 11:59:12.117925       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0210 12:23:06.739602    5644 command_runner.go:130] ! I0210 11:59:12.118059       1 shared_informer.go:320] Caches are synced for crt configmap
	I0210 12:23:06.739602    5644 command_runner.go:130] ! I0210 11:59:12.127026       1 shared_informer.go:320] Caches are synced for garbage collector
	I0210 12:23:06.739639    5644 command_runner.go:130] ! I0210 11:59:12.132202       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0210 12:23:06.739639    5644 command_runner.go:130] ! I0210 11:59:12.132293       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0210 12:23:06.739670    5644 command_runner.go:130] ! I0210 11:59:12.132357       1 shared_informer.go:320] Caches are synced for job
	I0210 12:23:06.739670    5644 command_runner.go:130] ! I0210 11:59:12.136457       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0210 12:23:06.739698    5644 command_runner.go:130] ! I0210 11:59:12.136477       1 shared_informer.go:320] Caches are synced for PVC protection
	I0210 12:23:06.739733    5644 command_runner.go:130] ! I0210 11:59:12.136864       1 shared_informer.go:320] Caches are synced for garbage collector
	I0210 12:23:06.739786    5644 command_runner.go:130] ! I0210 11:59:12.137022       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0210 12:23:06.739786    5644 command_runner.go:130] ! I0210 11:59:12.137034       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0210 12:23:06.739822    5644 command_runner.go:130] ! I0210 11:59:12.140123       1 shared_informer.go:320] Caches are synced for HPA
	I0210 12:23:06.739861    5644 command_runner.go:130] ! I0210 11:59:12.143611       1 shared_informer.go:320] Caches are synced for resource quota
	I0210 12:23:06.739861    5644 command_runner.go:130] ! I0210 11:59:12.146959       1 shared_informer.go:320] Caches are synced for GC
	I0210 12:23:06.739896    5644 command_runner.go:130] ! I0210 11:59:12.149917       1 shared_informer.go:320] Caches are synced for endpoint
	I0210 12:23:06.739896    5644 command_runner.go:130] ! I0210 11:59:12.151583       1 shared_informer.go:320] Caches are synced for TTL
	I0210 12:23:06.739935    5644 command_runner.go:130] ! I0210 11:59:12.151756       1 shared_informer.go:320] Caches are synced for disruption
	I0210 12:23:06.739935    5644 command_runner.go:130] ! I0210 11:59:12.155408       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0210 12:23:06.739935    5644 command_runner.go:130] ! I0210 11:59:12.156838       1 shared_informer.go:320] Caches are synced for expand
	I0210 12:23:06.739969    5644 command_runner.go:130] ! I0210 11:59:12.166263       1 shared_informer.go:320] Caches are synced for deployment
	I0210 12:23:06.739969    5644 command_runner.go:130] ! I0210 11:59:12.169607       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0210 12:23:06.739969    5644 command_runner.go:130] ! I0210 11:59:12.173266       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0210 12:23:06.740009    5644 command_runner.go:130] ! I0210 11:59:12.183228       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0210 12:23:06.740044    5644 command_runner.go:130] ! I0210 11:59:12.183461       1 shared_informer.go:320] Caches are synced for persistent volume
	I0210 12:23:06.740083    5644 command_runner.go:130] ! I0210 11:59:12.184165       1 shared_informer.go:320] Caches are synced for taint
	I0210 12:23:06.740083    5644 command_runner.go:130] ! I0210 11:59:12.184514       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0210 12:23:06.740118    5644 command_runner.go:130] ! I0210 11:59:12.185265       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-032400"
	I0210 12:23:06.740118    5644 command_runner.go:130] ! I0210 11:59:12.186883       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I0210 12:23:06.740118    5644 command_runner.go:130] ! I0210 11:59:12.189882       1 shared_informer.go:320] Caches are synced for service account
	I0210 12:23:06.740118    5644 command_runner.go:130] ! I0210 11:59:12.964659       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400"
	I0210 12:23:06.740118    5644 command_runner.go:130] ! I0210 11:59:14.306836       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="1.342470129s"
	I0210 12:23:06.740118    5644 command_runner.go:130] ! I0210 11:59:14.421918       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="114.771421ms"
	I0210 12:23:06.740118    5644 command_runner.go:130] ! I0210 11:59:14.422243       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="109.5µs"
	I0210 12:23:06.740118    5644 command_runner.go:130] ! I0210 11:59:14.423300       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="45.7µs"
	I0210 12:23:06.740118    5644 command_runner.go:130] ! I0210 11:59:15.150166       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="328.244339ms"
	I0210 12:23:06.740118    5644 command_runner.go:130] ! I0210 11:59:15.175057       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="24.827249ms"
	I0210 12:23:06.740118    5644 command_runner.go:130] ! I0210 11:59:15.175285       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="52.7µs"
	I0210 12:23:06.740118    5644 command_runner.go:130] ! I0210 11:59:38.469109       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400"
	I0210 12:23:06.740118    5644 command_runner.go:130] ! I0210 11:59:41.029106       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400"
	I0210 12:23:06.740118    5644 command_runner.go:130] ! I0210 11:59:41.056002       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400"
	I0210 12:23:06.740118    5644 command_runner.go:130] ! I0210 11:59:41.223446       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="83.5µs"
	I0210 12:23:06.740118    5644 command_runner.go:130] ! I0210 11:59:42.192695       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0210 12:23:06.740118    5644 command_runner.go:130] ! I0210 11:59:43.176439       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="220.4µs"
	I0210 12:23:06.740118    5644 command_runner.go:130] ! I0210 11:59:45.142362       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="156.401µs"
	I0210 12:23:06.740118    5644 command_runner.go:130] ! I0210 11:59:46.978311       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="109.784549ms"
	I0210 12:23:06.740118    5644 command_runner.go:130] ! I0210 11:59:46.978923       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="160.001µs"
	I0210 12:23:06.740118    5644 command_runner.go:130] ! I0210 12:02:24.351602       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-032400-m02\" does not exist"
	I0210 12:23:06.740118    5644 command_runner.go:130] ! I0210 12:02:24.372872       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-032400-m02" podCIDRs=["10.244.1.0/24"]
	I0210 12:23:06.740118    5644 command_runner.go:130] ! I0210 12:02:24.372982       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:23:06.740118    5644 command_runner.go:130] ! I0210 12:02:24.373016       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:23:06.740118    5644 command_runner.go:130] ! I0210 12:02:24.386686       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:23:06.740118    5644 command_runner.go:130] ! I0210 12:02:24.500791       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:23:06.740118    5644 command_runner.go:130] ! I0210 12:02:25.042334       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:23:06.740118    5644 command_runner.go:130] ! I0210 12:02:27.223123       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-032400-m02"
	I0210 12:23:06.740118    5644 command_runner.go:130] ! I0210 12:02:27.269202       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:23:06.740118    5644 command_runner.go:130] ! I0210 12:02:34.686149       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:23:06.740648    5644 command_runner.go:130] ! I0210 12:02:55.584256       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:23:06.740648    5644 command_runner.go:130] ! I0210 12:02:58.900478       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-032400-m02"
	I0210 12:23:06.740648    5644 command_runner.go:130] ! I0210 12:02:58.901463       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:23:06.740648    5644 command_runner.go:130] ! I0210 12:02:58.923096       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:23:06.740648    5644 command_runner.go:130] ! I0210 12:03:02.254436       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:23:06.740742    5644 command_runner.go:130] ! I0210 12:03:23.965178       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="118.072754ms"
	I0210 12:23:06.740742    5644 command_runner.go:130] ! I0210 12:03:23.989777       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="24.546974ms"
	I0210 12:23:06.740742    5644 command_runner.go:130] ! I0210 12:03:23.990705       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="86.6µs"
	I0210 12:23:06.740742    5644 command_runner.go:130] ! I0210 12:03:23.998308       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="47.1µs"
	I0210 12:23:06.740830    5644 command_runner.go:130] ! I0210 12:03:26.707538       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="12.348343ms"
	I0210 12:23:06.740830    5644 command_runner.go:130] ! I0210 12:03:26.708146       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="79.501µs"
	I0210 12:23:06.740830    5644 command_runner.go:130] ! I0210 12:03:27.077137       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="9.878814ms"
	I0210 12:23:06.740830    5644 command_runner.go:130] ! I0210 12:03:27.077509       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="53.8µs"
	I0210 12:23:06.740830    5644 command_runner.go:130] ! I0210 12:03:43.387780       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400"
	I0210 12:23:06.740830    5644 command_runner.go:130] ! I0210 12:03:56.589176       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:23:06.740830    5644 command_runner.go:130] ! I0210 12:07:05.733007       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-032400-m03\" does not exist"
	I0210 12:23:06.740830    5644 command_runner.go:130] ! I0210 12:07:05.733621       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-032400-m02"
	I0210 12:23:06.740830    5644 command_runner.go:130] ! I0210 12:07:05.776872       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-032400-m03" podCIDRs=["10.244.2.0/24"]
	I0210 12:23:06.740830    5644 command_runner.go:130] ! I0210 12:07:05.777009       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:06.740830    5644 command_runner.go:130] ! E0210 12:07:05.833973       1 range_allocator.go:433] "Failed to update node PodCIDR after multiple attempts" err="failed to patch node CIDR: Node \"multinode-032400-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.3.0/24\", \"10.244.2.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="multinode-032400-m03" podCIDRs=["10.244.3.0/24"]
	I0210 12:23:06.740830    5644 command_runner.go:130] ! E0210 12:07:05.834115       1 range_allocator.go:439] "CIDR assignment for node failed. Releasing allocated CIDR" err="failed to patch node CIDR: Node \"multinode-032400-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.3.0/24\", \"10.244.2.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="multinode-032400-m03"
	I0210 12:23:06.740830    5644 command_runner.go:130] ! E0210 12:07:05.834184       1 range_allocator.go:252] "Unhandled Error" err="error syncing 'multinode-032400-m03': failed to patch node CIDR: Node \"multinode-032400-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.3.0/24\", \"10.244.2.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid], requeuing" logger="UnhandledError"
	I0210 12:23:06.740830    5644 command_runner.go:130] ! I0210 12:07:05.834211       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:06.740830    5644 command_runner.go:130] ! I0210 12:07:05.839673       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:06.740830    5644 command_runner.go:130] ! I0210 12:07:06.048438       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:06.740830    5644 command_runner.go:130] ! I0210 12:07:06.603626       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:06.740830    5644 command_runner.go:130] ! I0210 12:07:07.285160       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-032400-m03"
	I0210 12:23:06.740830    5644 command_runner.go:130] ! I0210 12:07:07.401415       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:06.740830    5644 command_runner.go:130] ! I0210 12:07:15.795765       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:06.740830    5644 command_runner.go:130] ! I0210 12:07:34.465645       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:06.740830    5644 command_runner.go:130] ! I0210 12:07:34.466343       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-032400-m02"
	I0210 12:23:06.740830    5644 command_runner.go:130] ! I0210 12:07:34.484609       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:06.740830    5644 command_runner.go:130] ! I0210 12:07:36.177851       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:06.740830    5644 command_runner.go:130] ! I0210 12:07:37.325936       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:06.740830    5644 command_runner.go:130] ! I0210 12:08:11.294432       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:23:06.740830    5644 command_runner.go:130] ! I0210 12:09:09.390735       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400"
	I0210 12:23:06.741364    5644 command_runner.go:130] ! I0210 12:10:40.526492       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:06.741364    5644 command_runner.go:130] ! I0210 12:13:17.755688       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:23:06.741406    5644 command_runner.go:130] ! I0210 12:14:15.383603       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400"
	I0210 12:23:06.741406    5644 command_runner.go:130] ! I0210 12:15:17.429501       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:06.741440    5644 command_runner.go:130] ! I0210 12:15:17.429973       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-032400-m02"
	I0210 12:23:06.741470    5644 command_runner.go:130] ! I0210 12:15:17.456736       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:06.741470    5644 command_runner.go:130] ! I0210 12:15:22.504479       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:06.741470    5644 command_runner.go:130] ! I0210 12:17:20.800246       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:06.741470    5644 command_runner.go:130] ! I0210 12:17:20.822743       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:06.741470    5644 command_runner.go:130] ! I0210 12:17:25.699359       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-032400-m02"
	I0210 12:23:06.741470    5644 command_runner.go:130] ! I0210 12:17:31.139874       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-032400-m03\" does not exist"
	I0210 12:23:06.741470    5644 command_runner.go:130] ! I0210 12:17:31.140321       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-032400-m02"
	I0210 12:23:06.741470    5644 command_runner.go:130] ! I0210 12:17:31.172716       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-032400-m03" podCIDRs=["10.244.4.0/24"]
	I0210 12:23:06.741470    5644 command_runner.go:130] ! I0210 12:17:31.172758       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:06.741470    5644 command_runner.go:130] ! I0210 12:17:31.172780       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:06.741470    5644 command_runner.go:130] ! I0210 12:17:31.210555       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:06.741470    5644 command_runner.go:130] ! I0210 12:17:31.721125       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:06.741470    5644 command_runner.go:130] ! I0210 12:17:32.574669       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:06.741470    5644 command_runner.go:130] ! I0210 12:17:41.246655       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:06.741470    5644 command_runner.go:130] ! I0210 12:17:46.096472       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-032400-m02"
	I0210 12:23:06.741470    5644 command_runner.go:130] ! I0210 12:17:46.097093       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:06.741470    5644 command_runner.go:130] ! I0210 12:17:46.112921       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:06.741470    5644 command_runner.go:130] ! I0210 12:17:47.511367       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:06.741470    5644 command_runner.go:130] ! I0210 12:18:23.730002       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:23:06.741470    5644 command_runner.go:130] ! I0210 12:19:22.675608       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400"
	I0210 12:23:06.741470    5644 command_runner.go:130] ! I0210 12:19:27.542457       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-032400-m02"
	I0210 12:23:06.741470    5644 command_runner.go:130] ! I0210 12:19:27.543090       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:06.741470    5644 command_runner.go:130] ! I0210 12:19:27.562043       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:06.741470    5644 command_runner.go:130] ! I0210 12:19:32.704197       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
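The E-level range_allocator entries at 12:07:05 in the controller-manager log above show the node-ipam-controller attempting to assign a second PodCIDR to multinode-032400-m03 after the node object was re-created: the API server only permits spec.podCIDR to change from empty to one valid CIDR per IP family, so the patch is rejected and the controller releases the allocated CIDR and requeues (the "CIDR assignment for node failed. Releasing allocated CIDR" line). A minimal client-go sketch that reproduces the same rejection against an already-assigned node; the node name and CIDRs are taken from the log, while the kubeconfig path and the simplified error handling are assumptions:

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/types"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load the default kubeconfig (~/.kube/config); path is an assumption.
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(config)

		// Try to overwrite an already-populated spec.podCIDR, much as the
		// node-ipam-controller effectively did for multinode-032400-m03.
		// Validation forbids changing podCIDR except from "" to a valid value,
		// and allows at most one CIDR per IP family.
		patch := []byte(`{"spec":{"podCIDR":"10.244.3.0/24","podCIDRs":["10.244.3.0/24"]}}`)
		_, err = client.CoreV1().Nodes().Patch(context.TODO(),
			"multinode-032400-m03", types.StrategicMergePatchType, patch, metav1.PatchOptions{})
		// Expected: Node "multinode-032400-m03" is invalid: spec.podCIDRs: Forbidden: ...
		fmt.Println(err)
	}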
	I0210 12:23:06.763529    5644 logs.go:123] Gathering logs for kube-scheduler [adf520f9b9d7] ...
	I0210 12:23:06.763529    5644 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adf520f9b9d7"
	I0210 12:23:06.796092    5644 command_runner.go:130] ! I0210 11:59:00.019140       1 serving.go:386] Generated self-signed cert in-memory
	I0210 12:23:06.797412    5644 command_runner.go:130] ! W0210 11:59:02.451878       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0210 12:23:06.797412    5644 command_runner.go:130] ! W0210 11:59:02.452178       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0210 12:23:06.797412    5644 command_runner.go:130] ! W0210 11:59:02.452350       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0210 12:23:06.797412    5644 command_runner.go:130] ! W0210 11:59:02.452478       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0210 12:23:06.797412    5644 command_runner.go:130] ! I0210 11:59:02.632458       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.1"
	I0210 12:23:06.797412    5644 command_runner.go:130] ! I0210 11:59:02.632517       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0210 12:23:06.797412    5644 command_runner.go:130] ! I0210 11:59:02.686485       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0210 12:23:06.797412    5644 command_runner.go:130] ! I0210 11:59:02.686744       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0210 12:23:06.797412    5644 command_runner.go:130] ! I0210 11:59:02.689142       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0210 12:23:06.797412    5644 command_runner.go:130] ! I0210 11:59:02.708240       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0210 12:23:06.797412    5644 command_runner.go:130] ! W0210 11:59:02.715958       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0210 12:23:06.797412    5644 command_runner.go:130] ! W0210 11:59:02.751571       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0210 12:23:06.797412    5644 command_runner.go:130] ! E0210 11:59:02.751658       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0210 12:23:06.797412    5644 command_runner.go:130] ! E0210 11:59:02.717894       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:06.797412    5644 command_runner.go:130] ! W0210 11:59:02.766153       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0210 12:23:06.798055    5644 command_runner.go:130] ! E0210 11:59:02.768039       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:06.798103    5644 command_runner.go:130] ! W0210 11:59:02.768257       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0210 12:23:06.798103    5644 command_runner.go:130] ! E0210 11:59:02.768346       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:06.798103    5644 command_runner.go:130] ! W0210 11:59:02.766789       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0210 12:23:06.798103    5644 command_runner.go:130] ! E0210 11:59:02.768584       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:06.798103    5644 command_runner.go:130] ! W0210 11:59:02.766885       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0210 12:23:06.798103    5644 command_runner.go:130] ! E0210 11:59:02.768838       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:06.798103    5644 command_runner.go:130] ! W0210 11:59:02.769507       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0210 12:23:06.798103    5644 command_runner.go:130] ! E0210 11:59:02.778960       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:06.798103    5644 command_runner.go:130] ! W0210 11:59:02.769773       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0210 12:23:06.798103    5644 command_runner.go:130] ! E0210 11:59:02.779013       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:06.798103    5644 command_runner.go:130] ! W0210 11:59:02.767082       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0210 12:23:06.798103    5644 command_runner.go:130] ! E0210 11:59:02.779037       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:06.798103    5644 command_runner.go:130] ! W0210 11:59:02.767143       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0210 12:23:06.798103    5644 command_runner.go:130] ! E0210 11:59:02.779057       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:06.798103    5644 command_runner.go:130] ! W0210 11:59:02.767174       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0210 12:23:06.798103    5644 command_runner.go:130] ! E0210 11:59:02.779079       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:06.798661    5644 command_runner.go:130] ! W0210 11:59:02.767205       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0210 12:23:06.798768    5644 command_runner.go:130] ! E0210 11:59:02.779095       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:06.798768    5644 command_runner.go:130] ! W0210 11:59:02.767318       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	I0210 12:23:06.798768    5644 command_runner.go:130] ! E0210 11:59:02.779525       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:06.798768    5644 command_runner.go:130] ! W0210 11:59:02.769947       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0210 12:23:06.798768    5644 command_runner.go:130] ! E0210 11:59:02.779843       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:06.798768    5644 command_runner.go:130] ! W0210 11:59:02.769992       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0210 12:23:06.798768    5644 command_runner.go:130] ! E0210 11:59:02.779885       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:06.798768    5644 command_runner.go:130] ! W0210 11:59:02.767047       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0210 12:23:06.798768    5644 command_runner.go:130] ! E0210 11:59:02.779962       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:06.798768    5644 command_runner.go:130] ! W0210 11:59:03.612263       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0210 12:23:06.798768    5644 command_runner.go:130] ! E0210 11:59:03.612405       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:06.798768    5644 command_runner.go:130] ! W0210 11:59:03.698062       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0210 12:23:06.798768    5644 command_runner.go:130] ! E0210 11:59:03.698491       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:06.798768    5644 command_runner.go:130] ! W0210 11:59:03.766764       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0210 12:23:06.798768    5644 command_runner.go:130] ! E0210 11:59:03.767296       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:06.798768    5644 command_runner.go:130] ! W0210 11:59:03.769299       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0210 12:23:06.798768    5644 command_runner.go:130] ! E0210 11:59:03.769340       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:06.798768    5644 command_runner.go:130] ! W0210 11:59:03.811212       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0210 12:23:06.798768    5644 command_runner.go:130] ! E0210 11:59:03.811686       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:06.799384    5644 command_runner.go:130] ! W0210 11:59:03.864096       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0210 12:23:06.799384    5644 command_runner.go:130] ! E0210 11:59:03.864216       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:06.799384    5644 command_runner.go:130] ! W0210 11:59:03.954246       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0210 12:23:06.799384    5644 command_runner.go:130] ! E0210 11:59:03.955266       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0210 12:23:06.799384    5644 command_runner.go:130] ! W0210 11:59:03.968978       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0210 12:23:06.799384    5644 command_runner.go:130] ! E0210 11:59:03.969083       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:06.799384    5644 command_runner.go:130] ! W0210 11:59:04.075142       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0210 12:23:06.799384    5644 command_runner.go:130] ! E0210 11:59:04.075319       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:06.799384    5644 command_runner.go:130] ! W0210 11:59:04.157608       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	I0210 12:23:06.799384    5644 command_runner.go:130] ! E0210 11:59:04.157748       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:06.799384    5644 command_runner.go:130] ! W0210 11:59:04.210028       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0210 12:23:06.799384    5644 command_runner.go:130] ! E0210 11:59:04.210075       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:06.799384    5644 command_runner.go:130] ! W0210 11:59:04.217612       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0210 12:23:06.799384    5644 command_runner.go:130] ! E0210 11:59:04.217750       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:06.799944    5644 command_runner.go:130] ! W0210 11:59:04.256700       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0210 12:23:06.800307    5644 command_runner.go:130] ! E0210 11:59:04.256904       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:06.800307    5644 command_runner.go:130] ! W0210 11:59:04.322264       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0210 12:23:06.800307    5644 command_runner.go:130] ! E0210 11:59:04.322526       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:06.800307    5644 command_runner.go:130] ! W0210 11:59:04.330202       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0210 12:23:06.800307    5644 command_runner.go:130] ! E0210 11:59:04.330584       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:06.800307    5644 command_runner.go:130] ! W0210 11:59:04.340076       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0210 12:23:06.800307    5644 command_runner.go:130] ! E0210 11:59:04.340159       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:06.800307    5644 command_runner.go:130] ! W0210 11:59:05.486363       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0210 12:23:06.800307    5644 command_runner.go:130] ! E0210 11:59:05.486509       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:06.800307    5644 command_runner.go:130] ! W0210 11:59:05.818248       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0210 12:23:06.800307    5644 command_runner.go:130] ! E0210 11:59:05.818617       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:06.800307    5644 command_runner.go:130] ! W0210 11:59:06.039223       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0210 12:23:06.800307    5644 command_runner.go:130] ! E0210 11:59:06.039458       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:06.800307    5644 command_runner.go:130] ! W0210 11:59:06.087664       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0210 12:23:06.800307    5644 command_runner.go:130] ! E0210 11:59:06.087837       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:06.800307    5644 command_runner.go:130] ! I0210 11:59:07.187201       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0210 12:23:06.800307    5644 command_runner.go:130] ! I0210 12:19:35.481531       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0210 12:23:06.800307    5644 command_runner.go:130] ! I0210 12:19:35.493085       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0210 12:23:06.800868    5644 command_runner.go:130] ! I0210 12:19:35.493424       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0210 12:23:06.800996    5644 command_runner.go:130] ! E0210 12:19:35.644426       1 run.go:72] "command failed" err="finished without leader elect"
	I0210 12:23:06.814618    5644 logs.go:123] Gathering logs for container status ...
	I0210 12:23:06.814618    5644 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 12:23:06.875571    5644 command_runner.go:130] > CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	I0210 12:23:06.875571    5644 command_runner.go:130] > ab1277406daa9       8c811b4aec35f                                                                                         3 seconds ago        Running             busybox                   1                   0f8bbdd2fed29       busybox-58667487b6-8shfg
	I0210 12:23:06.875571    5644 command_runner.go:130] > 9240ce80f94ce       c69fa2e9cbf5f                                                                                         3 seconds ago        Running             coredns                   1                   e58006549b603       coredns-668d6bf9bc-w8rr9
	I0210 12:23:06.875571    5644 command_runner.go:130] > 59ace13383a7f       6e38f40d628db                                                                                         23 seconds ago       Running             storage-provisioner       2                   bd998e6ebeb39       storage-provisioner
	I0210 12:23:06.875571    5644 command_runner.go:130] > efc2d4164d811       d300845f67aeb                                                                                         About a minute ago   Running             kindnet-cni               1                   e5b54589cf0f1       kindnet-c2mb8
	I0210 12:23:06.875571    5644 command_runner.go:130] > e57ea4c7f300b       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   bd998e6ebeb39       storage-provisioner
	I0210 12:23:06.875571    5644 command_runner.go:130] > 6640b4e3d696c       e29f9c7391fd9                                                                                         About a minute ago   Running             kube-proxy                1                   b9afdceca416d       kube-proxy-rrh82
	I0210 12:23:06.875571    5644 command_runner.go:130] > bd1666238ae65       019ee182b58e2                                                                                         About a minute ago   Running             kube-controller-manager   1                   9c3e574a33498       kube-controller-manager-multinode-032400
	I0210 12:23:06.875571    5644 command_runner.go:130] > f368bd8767741       95c0bda56fc4d                                                                                         About a minute ago   Running             kube-apiserver            0                   ae5696c38864a       kube-apiserver-multinode-032400
	I0210 12:23:06.875571    5644 command_runner.go:130] > 2c0b973818252       a9e7e6b294baf                                                                                         About a minute ago   Running             etcd                      0                   016ad4d720680       etcd-multinode-032400
	I0210 12:23:06.875571    5644 command_runner.go:130] > 440b6adf4512a       2b0d6572d062c                                                                                         About a minute ago   Running             kube-scheduler            1                   8059b20f65945       kube-scheduler-multinode-032400
	I0210 12:23:06.875571    5644 command_runner.go:130] > 8d0c6584f2b12       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   19 minutes ago       Exited              busybox                   0                   ed034f1578c96       busybox-58667487b6-8shfg
	I0210 12:23:06.875571    5644 command_runner.go:130] > c5b854dbb9121       c69fa2e9cbf5f                                                                                         23 minutes ago       Exited              coredns                   0                   794995bca6b5b       coredns-668d6bf9bc-w8rr9
	I0210 12:23:06.875571    5644 command_runner.go:130] > 4439940fa5f42       kindest/kindnetd@sha256:56ea59f77258052c4506076525318ffa66817500f68e94a50fdf7d600a280d26              23 minutes ago       Exited              kindnet-cni               0                   26d9e119a02c5       kindnet-c2mb8
	I0210 12:23:06.876102    5644 command_runner.go:130] > 148309413de8d       e29f9c7391fd9                                                                                         23 minutes ago       Exited              kube-proxy                0                   a70f430921ec2       kube-proxy-rrh82
	I0210 12:23:06.876102    5644 command_runner.go:130] > adf520f9b9d78       2b0d6572d062c                                                                                         24 minutes ago       Exited              kube-scheduler            0                   d33433fbce480       kube-scheduler-multinode-032400
	I0210 12:23:06.876102    5644 command_runner.go:130] > 9408ce83d7d38       019ee182b58e2                                                                                         24 minutes ago       Exited              kube-controller-manager   0                   ee16b295f58db       kube-controller-manager-multinode-032400
	I0210 12:23:06.881046    5644 logs.go:123] Gathering logs for kube-scheduler [440b6adf4512] ...
	I0210 12:23:06.881046    5644 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 440b6adf4512"
	I0210 12:23:06.909633    5644 command_runner.go:130] ! I0210 12:21:56.035084       1 serving.go:386] Generated self-signed cert in-memory
	I0210 12:23:06.910472    5644 command_runner.go:130] ! W0210 12:21:58.137751       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0210 12:23:06.910556    5644 command_runner.go:130] ! W0210 12:21:58.138031       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0210 12:23:06.910556    5644 command_runner.go:130] ! W0210 12:21:58.138229       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0210 12:23:06.910556    5644 command_runner.go:130] ! W0210 12:21:58.138350       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0210 12:23:06.910556    5644 command_runner.go:130] ! I0210 12:21:58.239766       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.1"
	I0210 12:23:06.910556    5644 command_runner.go:130] ! I0210 12:21:58.239885       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0210 12:23:06.910556    5644 command_runner.go:130] ! I0210 12:21:58.246917       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0210 12:23:06.910556    5644 command_runner.go:130] ! I0210 12:21:58.246962       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0210 12:23:06.910556    5644 command_runner.go:130] ! I0210 12:21:58.248143       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0210 12:23:06.910556    5644 command_runner.go:130] ! I0210 12:21:58.248624       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0210 12:23:06.910556    5644 command_runner.go:130] ! I0210 12:21:58.347443       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0210 12:23:06.913348    5644 logs.go:123] Gathering logs for kube-proxy [6640b4e3d696] ...
	I0210 12:23:06.913382    5644 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6640b4e3d696"
	I0210 12:23:06.946937    5644 command_runner.go:130] ! I0210 12:22:00.934266       1 server_linux.go:66] "Using iptables proxy"
	I0210 12:23:06.946937    5644 command_runner.go:130] ! E0210 12:22:01.015806       1 proxier.go:733] "Error cleaning up nftables rules" err=<
	I0210 12:23:06.946937    5644 command_runner.go:130] ! 	could not run nftables command: /dev/stdin:1:1-24: Error: Could not process rule: Operation not supported
	I0210 12:23:06.946937    5644 command_runner.go:130] ! 	add table ip kube-proxy
	I0210 12:23:06.946937    5644 command_runner.go:130] ! 	^^^^^^^^^^^^^^^^^^^^^^^^
	I0210 12:23:06.946937    5644 command_runner.go:130] !  >
	I0210 12:23:06.946937    5644 command_runner.go:130] ! E0210 12:22:01.068390       1 proxier.go:733] "Error cleaning up nftables rules" err=<
	I0210 12:23:06.947310    5644 command_runner.go:130] ! 	could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
	I0210 12:23:06.947310    5644 command_runner.go:130] ! 	add table ip6 kube-proxy
	I0210 12:23:06.947310    5644 command_runner.go:130] ! 	^^^^^^^^^^^^^^^^^^^^^^^^^
	I0210 12:23:06.947310    5644 command_runner.go:130] !  >
	I0210 12:23:06.947310    5644 command_runner.go:130] ! I0210 12:22:01.115566       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["172.29.129.181"]
	I0210 12:23:06.947359    5644 command_runner.go:130] ! E0210 12:22:01.115738       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0210 12:23:06.947359    5644 command_runner.go:130] ! I0210 12:22:01.217636       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0210 12:23:06.947403    5644 command_runner.go:130] ! I0210 12:22:01.217732       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0210 12:23:06.947403    5644 command_runner.go:130] ! I0210 12:22:01.217759       1 server_linux.go:170] "Using iptables Proxier"
	I0210 12:23:06.947527    5644 command_runner.go:130] ! I0210 12:22:01.222523       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0210 12:23:06.947603    5644 command_runner.go:130] ! I0210 12:22:01.224100       1 server.go:497] "Version info" version="v1.32.1"
	I0210 12:23:06.947626    5644 command_runner.go:130] ! I0210 12:22:01.224411       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0210 12:23:06.947626    5644 command_runner.go:130] ! I0210 12:22:01.230640       1 config.go:199] "Starting service config controller"
	I0210 12:23:06.947626    5644 command_runner.go:130] ! I0210 12:22:01.232948       1 config.go:105] "Starting endpoint slice config controller"
	I0210 12:23:06.947626    5644 command_runner.go:130] ! I0210 12:22:01.233483       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0210 12:23:06.947626    5644 command_runner.go:130] ! I0210 12:22:01.233538       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0210 12:23:06.947626    5644 command_runner.go:130] ! I0210 12:22:01.243389       1 config.go:329] "Starting node config controller"
	I0210 12:23:06.947626    5644 command_runner.go:130] ! I0210 12:22:01.243415       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0210 12:23:06.947626    5644 command_runner.go:130] ! I0210 12:22:01.335845       1 shared_informer.go:320] Caches are synced for service config
	I0210 12:23:06.947626    5644 command_runner.go:130] ! I0210 12:22:01.335895       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0210 12:23:06.947626    5644 command_runner.go:130] ! I0210 12:22:01.345506       1 shared_informer.go:320] Caches are synced for node config
	I0210 12:23:06.950711    5644 logs.go:123] Gathering logs for kindnet [efc2d4164d81] ...
	I0210 12:23:06.950777    5644 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efc2d4164d81"
	I0210 12:23:06.981433    5644 command_runner.go:130] ! I0210 12:22:00.982083       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0210 12:23:06.982126    5644 command_runner.go:130] ! I0210 12:22:00.988632       1 main.go:139] hostIP = 172.29.129.181
	I0210 12:23:06.982126    5644 command_runner.go:130] ! podIP = 172.29.129.181
	I0210 12:23:06.982126    5644 command_runner.go:130] ! I0210 12:22:00.988765       1 main.go:148] setting mtu 1500 for CNI 
	I0210 12:23:06.982126    5644 command_runner.go:130] ! I0210 12:22:00.988782       1 main.go:178] kindnetd IP family: "ipv4"
	I0210 12:23:06.982126    5644 command_runner.go:130] ! I0210 12:22:00.988794       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I0210 12:23:06.982126    5644 command_runner.go:130] ! I0210 12:22:01.772362       1 main.go:239] Error creating network policy controller: could not run nftables command: /dev/stdin:1:1-40: Error: Could not process rule: Operation not supported
	I0210 12:23:06.982216    5644 command_runner.go:130] ! add table inet kindnet-network-policies
	I0210 12:23:06.982216    5644 command_runner.go:130] ! ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
	I0210 12:23:06.982216    5644 command_runner.go:130] ! , skipping network policies
	I0210 12:23:06.982216    5644 command_runner.go:130] ! W0210 12:22:31.784106       1 reflector.go:561] pkg/mod/k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0210 12:23:06.982216    5644 command_runner.go:130] ! E0210 12:22:31.784373       1 reflector.go:158] "Unhandled Error" err="pkg/mod/k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError"
	I0210 12:23:06.982302    5644 command_runner.go:130] ! I0210 12:22:41.780982       1 main.go:297] Handling node with IPs: map[172.29.129.181:{}]
	I0210 12:23:06.982302    5644 command_runner.go:130] ! I0210 12:22:41.781097       1 main.go:301] handling current node
	I0210 12:23:06.982302    5644 command_runner.go:130] ! I0210 12:22:41.782315       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.982302    5644 command_runner.go:130] ! I0210 12:22:41.782348       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.982302    5644 command_runner.go:130] ! I0210 12:22:41.782670       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 172.29.143.51 Flags: [] Table: 0 Realm: 0} 
	I0210 12:23:06.982389    5644 command_runner.go:130] ! I0210 12:22:41.783201       1 main.go:297] Handling node with IPs: map[172.29.129.10:{}]
	I0210 12:23:06.982389    5644 command_runner.go:130] ! I0210 12:22:41.783373       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.4.0/24] 
	I0210 12:23:06.982389    5644 command_runner.go:130] ! I0210 12:22:41.784331       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.4.0/24 Src: <nil> Gw: 172.29.129.10 Flags: [] Table: 0 Realm: 0} 
	I0210 12:23:06.982389    5644 command_runner.go:130] ! I0210 12:22:51.774234       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.982389    5644 command_runner.go:130] ! I0210 12:22:51.774354       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.982475    5644 command_runner.go:130] ! I0210 12:22:51.774813       1 main.go:297] Handling node with IPs: map[172.29.129.10:{}]
	I0210 12:23:06.982475    5644 command_runner.go:130] ! I0210 12:22:51.774839       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.4.0/24] 
	I0210 12:23:06.982475    5644 command_runner.go:130] ! I0210 12:22:51.775059       1 main.go:297] Handling node with IPs: map[172.29.129.181:{}]
	I0210 12:23:06.982475    5644 command_runner.go:130] ! I0210 12:22:51.775140       1 main.go:301] handling current node
	I0210 12:23:06.982475    5644 command_runner.go:130] ! I0210 12:23:01.774212       1 main.go:297] Handling node with IPs: map[172.29.129.181:{}]
	I0210 12:23:06.982475    5644 command_runner.go:130] ! I0210 12:23:01.774322       1 main.go:301] handling current node
	I0210 12:23:06.982555    5644 command_runner.go:130] ! I0210 12:23:01.774342       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.982555    5644 command_runner.go:130] ! I0210 12:23:01.774349       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.982555    5644 command_runner.go:130] ! I0210 12:23:01.774804       1 main.go:297] Handling node with IPs: map[172.29.129.10:{}]
	I0210 12:23:06.982555    5644 command_runner.go:130] ! I0210 12:23:01.774919       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.4.0/24] 
	I0210 12:23:06.986727    5644 logs.go:123] Gathering logs for kubelet ...
	I0210 12:23:06.986838    5644 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 12:23:07.018971    5644 command_runner.go:130] > Feb 10 12:21:49 multinode-032400 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0210 12:23:07.018971    5644 command_runner.go:130] > Feb 10 12:21:49 multinode-032400 kubelet[1505]: I0210 12:21:49.803865    1505 server.go:520] "Kubelet version" kubeletVersion="v1.32.1"
	I0210 12:23:07.019032    5644 command_runner.go:130] > Feb 10 12:21:49 multinode-032400 kubelet[1505]: I0210 12:21:49.804150    1505 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0210 12:23:07.019032    5644 command_runner.go:130] > Feb 10 12:21:49 multinode-032400 kubelet[1505]: I0210 12:21:49.806616    1505 server.go:954] "Client rotation is on, will bootstrap in background"
	I0210 12:23:07.019074    5644 command_runner.go:130] > Feb 10 12:21:49 multinode-032400 kubelet[1505]: E0210 12:21:49.806785    1505 run.go:72] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0210 12:23:07.019102    5644 command_runner.go:130] > Feb 10 12:21:49 multinode-032400 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0210 12:23:07.019102    5644 command_runner.go:130] > Feb 10 12:21:49 multinode-032400 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0210 12:23:07.019102    5644 command_runner.go:130] > Feb 10 12:21:50 multinode-032400 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	I0210 12:23:07.019102    5644 command_runner.go:130] > Feb 10 12:21:50 multinode-032400 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0210 12:23:07.019102    5644 command_runner.go:130] > Feb 10 12:21:50 multinode-032400 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0210 12:23:07.019102    5644 command_runner.go:130] > Feb 10 12:21:50 multinode-032400 kubelet[1561]: I0210 12:21:50.532407    1561 server.go:520] "Kubelet version" kubeletVersion="v1.32.1"
	I0210 12:23:07.019102    5644 command_runner.go:130] > Feb 10 12:21:50 multinode-032400 kubelet[1561]: I0210 12:21:50.532561    1561 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0210 12:23:07.019102    5644 command_runner.go:130] > Feb 10 12:21:50 multinode-032400 kubelet[1561]: I0210 12:21:50.532946    1561 server.go:954] "Client rotation is on, will bootstrap in background"
	I0210 12:23:07.019102    5644 command_runner.go:130] > Feb 10 12:21:50 multinode-032400 kubelet[1561]: E0210 12:21:50.533006    1561 run.go:72] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0210 12:23:07.019102    5644 command_runner.go:130] > Feb 10 12:21:50 multinode-032400 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0210 12:23:07.019102    5644 command_runner.go:130] > Feb 10 12:21:50 multinode-032400 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0210 12:23:07.019102    5644 command_runner.go:130] > Feb 10 12:21:50 multinode-032400 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0210 12:23:07.019102    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0210 12:23:07.019102    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.804000    1648 server.go:520] "Kubelet version" kubeletVersion="v1.32.1"
	I0210 12:23:07.019102    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.804091    1648 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0210 12:23:07.019102    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.807532    1648 server.go:954] "Client rotation is on, will bootstrap in background"
	I0210 12:23:07.019102    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.810518    1648 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	I0210 12:23:07.019102    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.831401    1648 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0210 12:23:07.019102    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: E0210 12:21:52.849603    1648 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
	I0210 12:23:07.019102    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.849766    1648 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
	I0210 12:23:07.019102    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.855712    1648 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
	I0210 12:23:07.019102    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.855847    1648 server.go:841] "NoSwap is set due to memorySwapBehavior not specified" memorySwapBehavior="" FailSwapOn=false
	I0210 12:23:07.019102    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.857145    1648 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	I0210 12:23:07.019630    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.857321    1648 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"multinode-032400","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
	I0210 12:23:07.019672    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.857850    1648 topology_manager.go:138] "Creating topology manager with none policy"
	I0210 12:23:07.019672    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.857944    1648 container_manager_linux.go:304] "Creating device plugin manager"
	I0210 12:23:07.019672    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.858196    1648 state_mem.go:36] "Initialized new in-memory state store"
	I0210 12:23:07.019709    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.860593    1648 kubelet.go:446] "Attempting to sync node with API server"
	I0210 12:23:07.019747    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.860751    1648 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
	I0210 12:23:07.019747    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.860860    1648 kubelet.go:352] "Adding apiserver pod source"
	I0210 12:23:07.019783    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.860954    1648 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	I0210 12:23:07.019783    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.866997    1648 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="docker" version="27.4.0" apiVersion="v1"
	I0210 12:23:07.019783    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: W0210 12:21:52.869638    1648 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-032400&limit=500&resourceVersion=0": dial tcp 172.29.129.181:8443: connect: connection refused
	I0210 12:23:07.019783    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: E0210 12:21:52.869825    1648 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-032400&limit=500&resourceVersion=0\": dial tcp 172.29.129.181:8443: connect: connection refused" logger="UnhandledError"
	I0210 12:23:07.019907    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.872904    1648 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
	I0210 12:23:07.019907    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: W0210 12:21:52.873510    1648 probe.go:272] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
	I0210 12:23:07.019971    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: W0210 12:21:52.885546    1648 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.29.129.181:8443: connect: connection refused
	I0210 12:23:07.019996    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: E0210 12:21:52.885641    1648 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.29.129.181:8443: connect: connection refused" logger="UnhandledError"
	I0210 12:23:07.020060    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.886839    1648 watchdog_linux.go:99] "Systemd watchdog is not enabled"
	I0210 12:23:07.020060    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.886957    1648 server.go:1287] "Started kubelet"
	I0210 12:23:07.020060    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.895251    1648 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
	I0210 12:23:07.020060    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.897245    1648 server.go:490] "Adding debug handlers to kubelet server"
	I0210 12:23:07.020136    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.899864    1648 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
	I0210 12:23:07.020136    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.900113    1648 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
	I0210 12:23:07.020136    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.900986    1648 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	I0210 12:23:07.020211    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.901519    1648 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
	I0210 12:23:07.020211    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: E0210 12:21:52.904529    1648 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 172.29.129.181:8443: connect: connection refused" event="&Event{ObjectMeta:{multinode-032400.1822d8316b7ef394  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:multinode-032400,UID:multinode-032400,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:multinode-032400,},FirstTimestamp:2025-02-10 12:21:52.886911892 +0000 UTC m=+0.168917533,LastTimestamp:2025-02-10 12:21:52.886911892 +0000 UTC m=+0.168917533,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:multinode-032400,}"
	I0210 12:23:07.020290    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.918528    1648 volume_manager.go:297] "Starting Kubelet Volume Manager"
	I0210 12:23:07.020290    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: E0210 12:21:52.918989    1648 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"multinode-032400\" not found"
	I0210 12:23:07.020290    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.920907    1648 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
	I0210 12:23:07.020290    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.932441    1648 reconciler.go:26] "Reconciler: start to sync state"
	I0210 12:23:07.020364    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: E0210 12:21:52.940004    1648 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-032400?timeout=10s\": dial tcp 172.29.129.181:8443: connect: connection refused" interval="200ms"
	I0210 12:23:07.020364    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.943065    1648 factory.go:221] Registration of the systemd container factory successfully
	I0210 12:23:07.020364    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.943251    1648 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
	I0210 12:23:07.020439    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.943289    1648 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
	I0210 12:23:07.020513    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: W0210 12:21:52.954939    1648 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.29.129.181:8443: connect: connection refused
	I0210 12:23:07.020513    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: E0210 12:21:52.956281    1648 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.29.129.181:8443: connect: connection refused" logger="UnhandledError"
	I0210 12:23:07.020513    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.962018    1648 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
	I0210 12:23:07.020587    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.981120    1648 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
	I0210 12:23:07.020587    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.981191    1648 status_manager.go:227] "Starting to sync pod status with apiserver"
	I0210 12:23:07.020587    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.981212    1648 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
	I0210 12:23:07.020662    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.981234    1648 kubelet.go:2388] "Starting kubelet main sync loop"
	I0210 12:23:07.020662    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: E0210 12:21:52.981274    1648 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
	I0210 12:23:07.020662    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: W0210 12:21:52.985240    1648 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.29.129.181:8443: connect: connection refused
	I0210 12:23:07.020738    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: E0210 12:21:52.985423    1648 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.29.129.181:8443: connect: connection refused" logger="UnhandledError"
	I0210 12:23:07.020738    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.986221    1648 cpu_manager.go:221] "Starting CPU manager" policy="none"
	I0210 12:23:07.020738    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.986328    1648 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
	I0210 12:23:07.020738    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.986418    1648 state_mem.go:36] "Initialized new in-memory state store"
	I0210 12:23:07.020812    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.988035    1648 state_mem.go:88] "Updated default CPUSet" cpuSet=""
	I0210 12:23:07.020812    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.988140    1648 state_mem.go:96] "Updated CPUSet assignments" assignments={}
	I0210 12:23:07.020812    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.988290    1648 policy_none.go:49] "None policy: Start"
	I0210 12:23:07.020812    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.988339    1648 memory_manager.go:186] "Starting memorymanager" policy="None"
	I0210 12:23:07.020887    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.988429    1648 state_mem.go:35] "Initializing new in-memory state store"
	I0210 12:23:07.020887    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.989333    1648 state_mem.go:75] "Updated machine memory state"
	I0210 12:23:07.020887    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.996399    1648 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
	I0210 12:23:07.020962    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.996729    1648 eviction_manager.go:189] "Eviction manager: starting control loop"
	I0210 12:23:07.020962    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.996761    1648 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
	I0210 12:23:07.021036    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.999441    1648 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
	I0210 12:23:07.021036    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: E0210 12:21:53.001480    1648 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
	I0210 12:23:07.021036    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: E0210 12:21:53.001594    1648 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"multinode-032400\" not found"
	I0210 12:23:07.021036    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: E0210 12:21:53.010100    1648 iptables.go:577] "Could not set up iptables canary" err=<
	I0210 12:23:07.021111    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0210 12:23:07.021111    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0210 12:23:07.021111    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0210 12:23:07.021111    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0210 12:23:07.021185    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.082130    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b2de8e426f22f9496390d2d8a09910a842da6580933349d6688cd4b1320ea550"
	I0210 12:23:07.021185    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.082209    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed034f1578c961d966543fd6901ac487893b2d9c55235293f852b6eba2ffef59"
	I0210 12:23:07.021185    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.082229    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26d9e119a02c5d37077ce2b8aaf0eaf39a16e310dfa75b55d4072355af0799f3"
	I0210 12:23:07.021260    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.085961    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a70f430921ec259ed18ded033aa4e0f2018d948e5ebeaaecbd04d96a1cf7a198"
	I0210 12:23:07.021260    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.092339    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d33433fbce4800c4588851f91b9c8bbf2f6cb1549a9a6e7003bd3ad9ab95e6c9"
	I0210 12:23:07.021260    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: E0210 12:21:53.095136    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-032400\" not found" node="multinode-032400"
	I0210 12:23:07.021333    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.097863    1648 kubelet_node_status.go:76] "Attempting to register node" node="multinode-032400"
	I0210 12:23:07.021333    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: E0210 12:21:53.099090    1648 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.29.129.181:8443: connect: connection refused" node="multinode-032400"
	I0210 12:23:07.021333    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.108335    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ccc0a4e7b5c734e34ed5ec3983417c1afdf297c58cd82400d7f9d24b8f82cac"
	I0210 12:23:07.021410    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.127358    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="794995bca6b5b46f3dd1b76ae2e6fa45046bd172772508f61b870824d72e297b"
	I0210 12:23:07.021410    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: E0210 12:21:53.141735    1648 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-032400?timeout=10s\": dial tcp 172.29.129.181:8443: connect: connection refused" interval="400ms"
	I0210 12:23:07.021410    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.142956    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c55184f16ccb79ec11ca696b1c88e9db9a9568bbeeccb401543d2aabab9daa4"
	I0210 12:23:07.021486    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.145714    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a8fa4178fdde9f0146fd2a294125bbe5-usr-share-ca-certificates\") pod \"kube-apiserver-multinode-032400\" (UID: \"a8fa4178fdde9f0146fd2a294125bbe5\") " pod="kube-system/kube-apiserver-multinode-032400"
	I0210 12:23:07.021486    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.145888    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0b073beca2a0e25ad8459b3107e863e7-flexvolume-dir\") pod \"kube-controller-manager-multinode-032400\" (UID: \"0b073beca2a0e25ad8459b3107e863e7\") " pod="kube-system/kube-controller-manager-multinode-032400"
	I0210 12:23:07.021561    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.145935    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b073beca2a0e25ad8459b3107e863e7-kubeconfig\") pod \"kube-controller-manager-multinode-032400\" (UID: \"0b073beca2a0e25ad8459b3107e863e7\") " pod="kube-system/kube-controller-manager-multinode-032400"
	I0210 12:23:07.021635    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.146017    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0b073beca2a0e25ad8459b3107e863e7-usr-share-ca-certificates\") pod \"kube-controller-manager-multinode-032400\" (UID: \"0b073beca2a0e25ad8459b3107e863e7\") " pod="kube-system/kube-controller-manager-multinode-032400"
	I0210 12:23:07.021635    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.146081    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/a56aca3861e4dd5038c600d32d99becd-etcd-certs\") pod \"etcd-multinode-032400\" (UID: \"a56aca3861e4dd5038c600d32d99becd\") " pod="kube-system/etcd-multinode-032400"
	I0210 12:23:07.021635    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.146213    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/a56aca3861e4dd5038c600d32d99becd-etcd-data\") pod \"etcd-multinode-032400\" (UID: \"a56aca3861e4dd5038c600d32d99becd\") " pod="kube-system/etcd-multinode-032400"
	I0210 12:23:07.021708    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.146299    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a8fa4178fdde9f0146fd2a294125bbe5-ca-certs\") pod \"kube-apiserver-multinode-032400\" (UID: \"a8fa4178fdde9f0146fd2a294125bbe5\") " pod="kube-system/kube-apiserver-multinode-032400"
	I0210 12:23:07.021708    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.146332    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a8fa4178fdde9f0146fd2a294125bbe5-k8s-certs\") pod \"kube-apiserver-multinode-032400\" (UID: \"a8fa4178fdde9f0146fd2a294125bbe5\") " pod="kube-system/kube-apiserver-multinode-032400"
	I0210 12:23:07.021782    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.146395    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e23fa9a4a53da4e595583d7b35b39311-kubeconfig\") pod \"kube-scheduler-multinode-032400\" (UID: \"e23fa9a4a53da4e595583d7b35b39311\") " pod="kube-system/kube-scheduler-multinode-032400"
	I0210 12:23:07.021856    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.146480    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0b073beca2a0e25ad8459b3107e863e7-ca-certs\") pod \"kube-controller-manager-multinode-032400\" (UID: \"0b073beca2a0e25ad8459b3107e863e7\") " pod="kube-system/kube-controller-manager-multinode-032400"
	I0210 12:23:07.021930    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.146687    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0b073beca2a0e25ad8459b3107e863e7-k8s-certs\") pod \"kube-controller-manager-multinode-032400\" (UID: \"0b073beca2a0e25ad8459b3107e863e7\") " pod="kube-system/kube-controller-manager-multinode-032400"
	I0210 12:23:07.021930    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.162937    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee16b295f58db486a506e81b42b011f8d6d50d2a52f1bea55481552cfb51c94e"
	I0210 12:23:07.022004    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: E0210 12:21:53.165529    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-032400\" not found" node="multinode-032400"
	I0210 12:23:07.022004    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: E0210 12:21:53.167432    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-032400\" not found" node="multinode-032400"
	I0210 12:23:07.022004    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: E0210 12:21:53.168502    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-032400\" not found" node="multinode-032400"
	I0210 12:23:07.022079    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.301329    1648 kubelet_node_status.go:76] "Attempting to register node" node="multinode-032400"
	I0210 12:23:07.022079    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: E0210 12:21:53.303037    1648 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.29.129.181:8443: connect: connection refused" node="multinode-032400"
	I0210 12:23:07.022079    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: E0210 12:21:53.544572    1648 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-032400?timeout=10s\": dial tcp 172.29.129.181:8443: connect: connection refused" interval="800ms"
	I0210 12:23:07.022079    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.704678    1648 kubelet_node_status.go:76] "Attempting to register node" node="multinode-032400"
	I0210 12:23:07.022079    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: E0210 12:21:53.705877    1648 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.29.129.181:8443: connect: connection refused" node="multinode-032400"
	I0210 12:23:07.022079    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: W0210 12:21:53.746812    1648 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.29.129.181:8443: connect: connection refused
	I0210 12:23:07.022241    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: E0210 12:21:53.747029    1648 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.29.129.181:8443: connect: connection refused" logger="UnhandledError"
	I0210 12:23:07.022241    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: W0210 12:21:53.867058    1648 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-032400&limit=500&resourceVersion=0": dial tcp 172.29.129.181:8443: connect: connection refused
	I0210 12:23:07.022320    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: E0210 12:21:53.867234    1648 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-032400&limit=500&resourceVersion=0\": dial tcp 172.29.129.181:8443: connect: connection refused" logger="UnhandledError"
	I0210 12:23:07.022320    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 kubelet[1648]: W0210 12:21:54.165583    1648 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.29.129.181:8443: connect: connection refused
	I0210 12:23:07.022396    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 kubelet[1648]: E0210 12:21:54.165709    1648 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.29.129.181:8443: connect: connection refused" logger="UnhandledError"
	I0210 12:23:07.022396    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 kubelet[1648]: E0210 12:21:54.346089    1648 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-032400?timeout=10s\": dial tcp 172.29.129.181:8443: connect: connection refused" interval="1.6s"
	I0210 12:23:07.022469    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 kubelet[1648]: I0210 12:21:54.507569    1648 kubelet_node_status.go:76] "Attempting to register node" node="multinode-032400"
	I0210 12:23:07.022469    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 kubelet[1648]: E0210 12:21:54.509216    1648 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.29.129.181:8443: connect: connection refused" node="multinode-032400"
	I0210 12:23:07.022469    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 kubelet[1648]: W0210 12:21:54.509373    1648 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.29.129.181:8443: connect: connection refused
	I0210 12:23:07.022542    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 kubelet[1648]: E0210 12:21:54.509471    1648 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.29.129.181:8443: connect: connection refused" logger="UnhandledError"
	I0210 12:23:07.022542    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 kubelet[1648]: E0210 12:21:54.618443    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-032400\" not found" node="multinode-032400"
	I0210 12:23:07.022616    5644 command_runner.go:130] > Feb 10 12:21:55 multinode-032400 kubelet[1648]: E0210 12:21:55.643834    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-032400\" not found" node="multinode-032400"
	I0210 12:23:07.022616    5644 command_runner.go:130] > Feb 10 12:21:55 multinode-032400 kubelet[1648]: E0210 12:21:55.653673    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-032400\" not found" node="multinode-032400"
	I0210 12:23:07.022690    5644 command_runner.go:130] > Feb 10 12:21:55 multinode-032400 kubelet[1648]: E0210 12:21:55.663228    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-032400\" not found" node="multinode-032400"
	I0210 12:23:07.022690    5644 command_runner.go:130] > Feb 10 12:21:55 multinode-032400 kubelet[1648]: E0210 12:21:55.676257    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-032400\" not found" node="multinode-032400"
	I0210 12:23:07.022764    5644 command_runner.go:130] > Feb 10 12:21:56 multinode-032400 kubelet[1648]: I0210 12:21:56.111234    1648 kubelet_node_status.go:76] "Attempting to register node" node="multinode-032400"
	I0210 12:23:07.022764    5644 command_runner.go:130] > Feb 10 12:21:56 multinode-032400 kubelet[1648]: E0210 12:21:56.686207    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-032400\" not found" node="multinode-032400"
	I0210 12:23:07.022838    5644 command_runner.go:130] > Feb 10 12:21:56 multinode-032400 kubelet[1648]: E0210 12:21:56.686620    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-032400\" not found" node="multinode-032400"
	I0210 12:23:07.022838    5644 command_runner.go:130] > Feb 10 12:21:56 multinode-032400 kubelet[1648]: E0210 12:21:56.689831    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-032400\" not found" node="multinode-032400"
	I0210 12:23:07.022838    5644 command_runner.go:130] > Feb 10 12:21:56 multinode-032400 kubelet[1648]: E0210 12:21:56.690227    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-032400\" not found" node="multinode-032400"
	I0210 12:23:07.022912    5644 command_runner.go:130] > Feb 10 12:21:57 multinode-032400 kubelet[1648]: E0210 12:21:57.703954    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-032400\" not found" node="multinode-032400"
	I0210 12:23:07.022912    5644 command_runner.go:130] > Feb 10 12:21:57 multinode-032400 kubelet[1648]: E0210 12:21:57.704934    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-032400\" not found" node="multinode-032400"
	I0210 12:23:07.022912    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: I0210 12:21:58.221288    1648 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-multinode-032400"
	I0210 12:23:07.022985    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: E0210 12:21:58.248691    1648 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-multinode-032400\" already exists" pod="kube-system/kube-scheduler-multinode-032400"
	I0210 12:23:07.022985    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: I0210 12:21:58.248734    1648 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-multinode-032400"
	I0210 12:23:07.022985    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: E0210 12:21:58.268853    1648 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"etcd-multinode-032400\" already exists" pod="kube-system/etcd-multinode-032400"
	I0210 12:23:07.023058    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: I0210 12:21:58.268905    1648 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-multinode-032400"
	I0210 12:23:07.023058    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: E0210 12:21:58.294680    1648 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-multinode-032400\" already exists" pod="kube-system/kube-apiserver-multinode-032400"
	I0210 12:23:07.023058    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: I0210 12:21:58.294713    1648 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-multinode-032400"
	I0210 12:23:07.023131    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: E0210 12:21:58.310526    1648 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-multinode-032400\" already exists" pod="kube-system/kube-controller-manager-multinode-032400"
	I0210 12:23:07.023131    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: I0210 12:21:58.310792    1648 kubelet_node_status.go:125] "Node was previously registered" node="multinode-032400"
	I0210 12:23:07.023131    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: I0210 12:21:58.310970    1648 kubelet_node_status.go:79] "Successfully registered node" node="multinode-032400"
	I0210 12:23:07.023131    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: I0210 12:21:58.311192    1648 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	I0210 12:23:07.023205    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: I0210 12:21:58.312560    1648 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	I0210 12:23:07.023205    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: I0210 12:21:58.314869    1648 setters.go:602] "Node became not ready" node="multinode-032400" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-02-10T12:21:58Z","lastTransitionTime":"2025-02-10T12:21:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"}
	I0210 12:23:07.023205    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: I0210 12:21:58.886082    1648 apiserver.go:52] "Watching apiserver"
	I0210 12:23:07.023282    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: I0210 12:21:58.891928    1648 kubelet.go:3183] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-032400" podUID="7a35472d-d7c0-4c7d-a5b1-e094370af1c2"
	I0210 12:23:07.023282    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: I0210 12:21:58.892432    1648 kubelet.go:3183] "Trying to delete pod" pod="kube-system/etcd-multinode-032400" podUID="34db146c-e09d-4959-8325-d4453dfcfd62"
	I0210 12:23:07.023355    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: E0210 12:21:58.894995    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:07.023387    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: E0210 12:21:58.896093    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:07.023409    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: I0210 12:21:58.922102    1648 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	I0210 12:23:07.023409    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: I0210 12:21:58.923504    1648 kubelet.go:3189] "Deleted mirror pod as it didn't match the static Pod" pod="kube-system/kube-apiserver-multinode-032400"
	I0210 12:23:07.023409    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: I0210 12:21:58.923547    1648 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-multinode-032400"
	I0210 12:23:07.023409    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: I0210 12:21:58.964092    1648 kubelet.go:3189] "Deleted mirror pod as it didn't match the static Pod" pod="kube-system/etcd-multinode-032400"
	I0210 12:23:07.023409    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: I0210 12:21:58.964319    1648 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-multinode-032400"
	I0210 12:23:07.023409    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: I0210 12:21:58.992108    1648 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d9460e1ac793566f90a359ec3476894" path="/var/lib/kubelet/pods/3d9460e1ac793566f90a359ec3476894/volumes"
	I0210 12:23:07.023409    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: I0210 12:21:58.994546    1648 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77dd7f51968a92a0d804d49c0a3127ad" path="/var/lib/kubelet/pods/77dd7f51968a92a0d804d49c0a3127ad/volumes"
	I0210 12:23:07.023409    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: I0210 12:21:59.015977    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/c5a7f602-d41a-4d9c-8fb3-5cf1bb41aca0-tmp\") pod \"storage-provisioner\" (UID: \"c5a7f602-d41a-4d9c-8fb3-5cf1bb41aca0\") " pod="kube-system/storage-provisioner"
	I0210 12:23:07.023409    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: I0210 12:21:59.016010    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9ad7f281-f022-4f3b-b206-39ce42713cf9-lib-modules\") pod \"kube-proxy-rrh82\" (UID: \"9ad7f281-f022-4f3b-b206-39ce42713cf9\") " pod="kube-system/kube-proxy-rrh82"
	I0210 12:23:07.023409    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: I0210 12:21:59.016032    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/09de881b-fbc4-4a8f-b8d7-c46dd3f010ad-cni-cfg\") pod \"kindnet-c2mb8\" (UID: \"09de881b-fbc4-4a8f-b8d7-c46dd3f010ad\") " pod="kube-system/kindnet-c2mb8"
	I0210 12:23:07.023409    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: I0210 12:21:59.016093    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9ad7f281-f022-4f3b-b206-39ce42713cf9-xtables-lock\") pod \"kube-proxy-rrh82\" (UID: \"9ad7f281-f022-4f3b-b206-39ce42713cf9\") " pod="kube-system/kube-proxy-rrh82"
	I0210 12:23:07.023409    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: I0210 12:21:59.016112    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/09de881b-fbc4-4a8f-b8d7-c46dd3f010ad-xtables-lock\") pod \"kindnet-c2mb8\" (UID: \"09de881b-fbc4-4a8f-b8d7-c46dd3f010ad\") " pod="kube-system/kindnet-c2mb8"
	I0210 12:23:07.023409    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: I0210 12:21:59.016275    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/09de881b-fbc4-4a8f-b8d7-c46dd3f010ad-lib-modules\") pod \"kindnet-c2mb8\" (UID: \"09de881b-fbc4-4a8f-b8d7-c46dd3f010ad\") " pod="kube-system/kindnet-c2mb8"
	I0210 12:23:07.023409    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: E0210 12:21:59.016537    1648 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0210 12:23:07.023409    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: E0210 12:21:59.016667    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e45a37bf-e7da-4129-bb7e-8be7dbe93e09-config-volume podName:e45a37bf-e7da-4129-bb7e-8be7dbe93e09 nodeName:}" failed. No retries permitted until 2025-02-10 12:21:59.516646386 +0000 UTC m=+6.798651927 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e45a37bf-e7da-4129-bb7e-8be7dbe93e09-config-volume") pod "coredns-668d6bf9bc-w8rr9" (UID: "e45a37bf-e7da-4129-bb7e-8be7dbe93e09") : object "kube-system"/"coredns" not registered
	I0210 12:23:07.023409    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: I0210 12:21:59.031609    1648 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-multinode-032400" podStartSLOduration=1.031591606 podStartE2EDuration="1.031591606s" podCreationTimestamp="2025-02-10 12:21:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-10 12:21:59.030067233 +0000 UTC m=+6.312072774" watchObservedRunningTime="2025-02-10 12:21:59.031591606 +0000 UTC m=+6.313597247"
	I0210 12:23:07.023409    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: I0210 12:21:59.032295    1648 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-multinode-032400" podStartSLOduration=1.032275839 podStartE2EDuration="1.032275839s" podCreationTimestamp="2025-02-10 12:21:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-10 12:21:59.012105568 +0000 UTC m=+6.294111109" watchObservedRunningTime="2025-02-10 12:21:59.032275839 +0000 UTC m=+6.314281380"
	I0210 12:23:07.023409    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: I0210 12:21:59.063318    1648 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	I0210 12:23:07.023936    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: E0210 12:21:59.095091    1648 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:07.023936    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: E0210 12:21:59.095402    1648 projected.go:194] Error preparing data for projected volume kube-api-access-76fn6 for pod default/busybox-58667487b6-8shfg: object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:07.024010    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: E0210 12:21:59.095525    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a3e86dc5-0523-4852-af77-3145d44eaa15-kube-api-access-76fn6 podName:a3e86dc5-0523-4852-af77-3145d44eaa15 nodeName:}" failed. No retries permitted until 2025-02-10 12:21:59.595504083 +0000 UTC m=+6.877509724 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-76fn6" (UniqueName: "kubernetes.io/projected/a3e86dc5-0523-4852-af77-3145d44eaa15-kube-api-access-76fn6") pod "busybox-58667487b6-8shfg" (UID: "a3e86dc5-0523-4852-af77-3145d44eaa15") : object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:07.024010    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: E0210 12:21:59.520926    1648 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0210 12:23:07.024043    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: E0210 12:21:59.521021    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e45a37bf-e7da-4129-bb7e-8be7dbe93e09-config-volume podName:e45a37bf-e7da-4129-bb7e-8be7dbe93e09 nodeName:}" failed. No retries permitted until 2025-02-10 12:22:00.521001667 +0000 UTC m=+7.803007208 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e45a37bf-e7da-4129-bb7e-8be7dbe93e09-config-volume") pod "coredns-668d6bf9bc-w8rr9" (UID: "e45a37bf-e7da-4129-bb7e-8be7dbe93e09") : object "kube-system"/"coredns" not registered
	I0210 12:23:07.024043    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: E0210 12:21:59.622412    1648 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:07.024043    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: E0210 12:21:59.622461    1648 projected.go:194] Error preparing data for projected volume kube-api-access-76fn6 for pod default/busybox-58667487b6-8shfg: object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:07.024043    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: E0210 12:21:59.622532    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a3e86dc5-0523-4852-af77-3145d44eaa15-kube-api-access-76fn6 podName:a3e86dc5-0523-4852-af77-3145d44eaa15 nodeName:}" failed. No retries permitted until 2025-02-10 12:22:00.622511154 +0000 UTC m=+7.904516695 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-76fn6" (UniqueName: "kubernetes.io/projected/a3e86dc5-0523-4852-af77-3145d44eaa15-kube-api-access-76fn6") pod "busybox-58667487b6-8shfg" (UID: "a3e86dc5-0523-4852-af77-3145d44eaa15") : object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:07.024043    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: I0210 12:21:59.790385    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd998e6ebeb39ea743489a0e4d48c282d6e8da289cd8341c4c5099d9836e6f73"
	I0210 12:23:07.025379    5644 command_runner.go:130] > Feb 10 12:22:00 multinode-032400 kubelet[1648]: I0210 12:22:00.168710    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e5b54589cf0f1effd7254987c6ce12359f402596b5494fa5c8f0bf296c219b89"
	I0210 12:23:07.025379    5644 command_runner.go:130] > Feb 10 12:22:00 multinode-032400 kubelet[1648]: I0210 12:22:00.246436    1648 kubelet.go:3183] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-032400" podUID="7a35472d-d7c0-4c7d-a5b1-e094370af1c2"
	I0210 12:23:07.025379    5644 command_runner.go:130] > Feb 10 12:22:00 multinode-032400 kubelet[1648]: I0210 12:22:00.246743    1648 kubelet.go:3183] "Trying to delete pod" pod="kube-system/etcd-multinode-032400" podUID="34db146c-e09d-4959-8325-d4453dfcfd62"
	I0210 12:23:07.025379    5644 command_runner.go:130] > Feb 10 12:22:00 multinode-032400 kubelet[1648]: E0210 12:22:00.528505    1648 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0210 12:23:07.025379    5644 command_runner.go:130] > Feb 10 12:22:00 multinode-032400 kubelet[1648]: E0210 12:22:00.528588    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e45a37bf-e7da-4129-bb7e-8be7dbe93e09-config-volume podName:e45a37bf-e7da-4129-bb7e-8be7dbe93e09 nodeName:}" failed. No retries permitted until 2025-02-10 12:22:02.528571773 +0000 UTC m=+9.810577314 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e45a37bf-e7da-4129-bb7e-8be7dbe93e09-config-volume") pod "coredns-668d6bf9bc-w8rr9" (UID: "e45a37bf-e7da-4129-bb7e-8be7dbe93e09") : object "kube-system"/"coredns" not registered
	I0210 12:23:07.025379    5644 command_runner.go:130] > Feb 10 12:22:00 multinode-032400 kubelet[1648]: E0210 12:22:00.629777    1648 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:07.025379    5644 command_runner.go:130] > Feb 10 12:22:00 multinode-032400 kubelet[1648]: E0210 12:22:00.629830    1648 projected.go:194] Error preparing data for projected volume kube-api-access-76fn6 for pod default/busybox-58667487b6-8shfg: object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:07.025903    5644 command_runner.go:130] > Feb 10 12:22:00 multinode-032400 kubelet[1648]: E0210 12:22:00.629883    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a3e86dc5-0523-4852-af77-3145d44eaa15-kube-api-access-76fn6 podName:a3e86dc5-0523-4852-af77-3145d44eaa15 nodeName:}" failed. No retries permitted until 2025-02-10 12:22:02.629867049 +0000 UTC m=+9.911872690 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-76fn6" (UniqueName: "kubernetes.io/projected/a3e86dc5-0523-4852-af77-3145d44eaa15-kube-api-access-76fn6") pod "busybox-58667487b6-8shfg" (UID: "a3e86dc5-0523-4852-af77-3145d44eaa15") : object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:07.025903    5644 command_runner.go:130] > Feb 10 12:22:00 multinode-032400 kubelet[1648]: E0210 12:22:00.983374    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:07.025975    5644 command_runner.go:130] > Feb 10 12:22:00 multinode-032400 kubelet[1648]: E0210 12:22:00.983940    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:07.026007    5644 command_runner.go:130] > Feb 10 12:22:02 multinode-032400 kubelet[1648]: E0210 12:22:02.548061    1648 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0210 12:23:07.026053    5644 command_runner.go:130] > Feb 10 12:22:02 multinode-032400 kubelet[1648]: E0210 12:22:02.548594    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e45a37bf-e7da-4129-bb7e-8be7dbe93e09-config-volume podName:e45a37bf-e7da-4129-bb7e-8be7dbe93e09 nodeName:}" failed. No retries permitted until 2025-02-10 12:22:06.548573918 +0000 UTC m=+13.830579559 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e45a37bf-e7da-4129-bb7e-8be7dbe93e09-config-volume") pod "coredns-668d6bf9bc-w8rr9" (UID: "e45a37bf-e7da-4129-bb7e-8be7dbe93e09") : object "kube-system"/"coredns" not registered
	I0210 12:23:07.026084    5644 command_runner.go:130] > Feb 10 12:22:02 multinode-032400 kubelet[1648]: E0210 12:22:02.648988    1648 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:07.026124    5644 command_runner.go:130] > Feb 10 12:22:02 multinode-032400 kubelet[1648]: E0210 12:22:02.649225    1648 projected.go:194] Error preparing data for projected volume kube-api-access-76fn6 for pod default/busybox-58667487b6-8shfg: object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:07.026155    5644 command_runner.go:130] > Feb 10 12:22:02 multinode-032400 kubelet[1648]: E0210 12:22:02.649292    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a3e86dc5-0523-4852-af77-3145d44eaa15-kube-api-access-76fn6 podName:a3e86dc5-0523-4852-af77-3145d44eaa15 nodeName:}" failed. No retries permitted until 2025-02-10 12:22:06.649274266 +0000 UTC m=+13.931279907 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-76fn6" (UniqueName: "kubernetes.io/projected/a3e86dc5-0523-4852-af77-3145d44eaa15-kube-api-access-76fn6") pod "busybox-58667487b6-8shfg" (UID: "a3e86dc5-0523-4852-af77-3145d44eaa15") : object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:07.026226    5644 command_runner.go:130] > Feb 10 12:22:02 multinode-032400 kubelet[1648]: E0210 12:22:02.982600    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:07.026266    5644 command_runner.go:130] > Feb 10 12:22:02 multinode-032400 kubelet[1648]: E0210 12:22:02.985279    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:07.026300    5644 command_runner.go:130] > Feb 10 12:22:03 multinode-032400 kubelet[1648]: E0210 12:22:03.006185    1648 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0210 12:23:07.026340    5644 command_runner.go:130] > Feb 10 12:22:04 multinode-032400 kubelet[1648]: E0210 12:22:04.982807    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:07.026413    5644 command_runner.go:130] > Feb 10 12:22:04 multinode-032400 kubelet[1648]: E0210 12:22:04.983881    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:07.026445    5644 command_runner.go:130] > Feb 10 12:22:06 multinode-032400 kubelet[1648]: E0210 12:22:06.583411    1648 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0210 12:23:07.026485    5644 command_runner.go:130] > Feb 10 12:22:06 multinode-032400 kubelet[1648]: E0210 12:22:06.583571    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e45a37bf-e7da-4129-bb7e-8be7dbe93e09-config-volume podName:e45a37bf-e7da-4129-bb7e-8be7dbe93e09 nodeName:}" failed. No retries permitted until 2025-02-10 12:22:14.583553968 +0000 UTC m=+21.865559509 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e45a37bf-e7da-4129-bb7e-8be7dbe93e09-config-volume") pod "coredns-668d6bf9bc-w8rr9" (UID: "e45a37bf-e7da-4129-bb7e-8be7dbe93e09") : object "kube-system"/"coredns" not registered
	I0210 12:23:07.026517    5644 command_runner.go:130] > Feb 10 12:22:06 multinode-032400 kubelet[1648]: E0210 12:22:06.684079    1648 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:07.026517    5644 command_runner.go:130] > Feb 10 12:22:06 multinode-032400 kubelet[1648]: E0210 12:22:06.684426    1648 projected.go:194] Error preparing data for projected volume kube-api-access-76fn6 for pod default/busybox-58667487b6-8shfg: object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:07.026587    5644 command_runner.go:130] > Feb 10 12:22:06 multinode-032400 kubelet[1648]: E0210 12:22:06.684521    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a3e86dc5-0523-4852-af77-3145d44eaa15-kube-api-access-76fn6 podName:a3e86dc5-0523-4852-af77-3145d44eaa15 nodeName:}" failed. No retries permitted until 2025-02-10 12:22:14.684501328 +0000 UTC m=+21.966506969 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-76fn6" (UniqueName: "kubernetes.io/projected/a3e86dc5-0523-4852-af77-3145d44eaa15-kube-api-access-76fn6") pod "busybox-58667487b6-8shfg" (UID: "a3e86dc5-0523-4852-af77-3145d44eaa15") : object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:07.026659    5644 command_runner.go:130] > Feb 10 12:22:06 multinode-032400 kubelet[1648]: E0210 12:22:06.982543    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:07.026698    5644 command_runner.go:130] > Feb 10 12:22:06 multinode-032400 kubelet[1648]: E0210 12:22:06.982901    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:07.026730    5644 command_runner.go:130] > Feb 10 12:22:08 multinode-032400 kubelet[1648]: E0210 12:22:08.007915    1648 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0210 12:23:07.026730    5644 command_runner.go:130] > Feb 10 12:22:08 multinode-032400 kubelet[1648]: E0210 12:22:08.983481    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:07.026730    5644 command_runner.go:130] > Feb 10 12:22:08 multinode-032400 kubelet[1648]: E0210 12:22:08.987585    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:07.026730    5644 command_runner.go:130] > Feb 10 12:22:10 multinode-032400 kubelet[1648]: E0210 12:22:10.981696    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:07.026730    5644 command_runner.go:130] > Feb 10 12:22:10 multinode-032400 kubelet[1648]: E0210 12:22:10.982314    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:07.026730    5644 command_runner.go:130] > Feb 10 12:22:12 multinode-032400 kubelet[1648]: E0210 12:22:12.982627    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:07.026730    5644 command_runner.go:130] > Feb 10 12:22:12 multinode-032400 kubelet[1648]: E0210 12:22:12.983351    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:07.026730    5644 command_runner.go:130] > Feb 10 12:22:13 multinode-032400 kubelet[1648]: E0210 12:22:13.008828    1648 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0210 12:23:07.026730    5644 command_runner.go:130] > Feb 10 12:22:14 multinode-032400 kubelet[1648]: E0210 12:22:14.650628    1648 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0210 12:23:07.026730    5644 command_runner.go:130] > Feb 10 12:22:14 multinode-032400 kubelet[1648]: E0210 12:22:14.650742    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e45a37bf-e7da-4129-bb7e-8be7dbe93e09-config-volume podName:e45a37bf-e7da-4129-bb7e-8be7dbe93e09 nodeName:}" failed. No retries permitted until 2025-02-10 12:22:30.650723092 +0000 UTC m=+37.932728733 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e45a37bf-e7da-4129-bb7e-8be7dbe93e09-config-volume") pod "coredns-668d6bf9bc-w8rr9" (UID: "e45a37bf-e7da-4129-bb7e-8be7dbe93e09") : object "kube-system"/"coredns" not registered
	I0210 12:23:07.026730    5644 command_runner.go:130] > Feb 10 12:22:14 multinode-032400 kubelet[1648]: E0210 12:22:14.751367    1648 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:07.026730    5644 command_runner.go:130] > Feb 10 12:22:14 multinode-032400 kubelet[1648]: E0210 12:22:14.751417    1648 projected.go:194] Error preparing data for projected volume kube-api-access-76fn6 for pod default/busybox-58667487b6-8shfg: object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:07.026730    5644 command_runner.go:130] > Feb 10 12:22:14 multinode-032400 kubelet[1648]: E0210 12:22:14.751468    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a3e86dc5-0523-4852-af77-3145d44eaa15-kube-api-access-76fn6 podName:a3e86dc5-0523-4852-af77-3145d44eaa15 nodeName:}" failed. No retries permitted until 2025-02-10 12:22:30.751452188 +0000 UTC m=+38.033457729 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-76fn6" (UniqueName: "kubernetes.io/projected/a3e86dc5-0523-4852-af77-3145d44eaa15-kube-api-access-76fn6") pod "busybox-58667487b6-8shfg" (UID: "a3e86dc5-0523-4852-af77-3145d44eaa15") : object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:07.026730    5644 command_runner.go:130] > Feb 10 12:22:14 multinode-032400 kubelet[1648]: E0210 12:22:14.983588    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:07.026730    5644 command_runner.go:130] > Feb 10 12:22:14 multinode-032400 kubelet[1648]: E0210 12:22:14.984681    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:07.026730    5644 command_runner.go:130] > Feb 10 12:22:16 multinode-032400 kubelet[1648]: E0210 12:22:16.982654    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:07.027253    5644 command_runner.go:130] > Feb 10 12:22:16 multinode-032400 kubelet[1648]: E0210 12:22:16.983601    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:07.027253    5644 command_runner.go:130] > Feb 10 12:22:18 multinode-032400 kubelet[1648]: E0210 12:22:18.010464    1648 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0210 12:23:07.027346    5644 command_runner.go:130] > Feb 10 12:22:18 multinode-032400 kubelet[1648]: E0210 12:22:18.983251    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:07.027419    5644 command_runner.go:130] > Feb 10 12:22:18 multinode-032400 kubelet[1648]: E0210 12:22:18.983452    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:07.027419    5644 command_runner.go:130] > Feb 10 12:22:20 multinode-032400 kubelet[1648]: E0210 12:22:20.982442    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:07.027419    5644 command_runner.go:130] > Feb 10 12:22:20 multinode-032400 kubelet[1648]: E0210 12:22:20.982861    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:07.027419    5644 command_runner.go:130] > Feb 10 12:22:22 multinode-032400 kubelet[1648]: E0210 12:22:22.981966    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:07.027419    5644 command_runner.go:130] > Feb 10 12:22:22 multinode-032400 kubelet[1648]: E0210 12:22:22.982555    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:07.027419    5644 command_runner.go:130] > Feb 10 12:22:23 multinode-032400 kubelet[1648]: E0210 12:22:23.011880    1648 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0210 12:23:07.027419    5644 command_runner.go:130] > Feb 10 12:22:24 multinode-032400 kubelet[1648]: E0210 12:22:24.982707    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:07.027419    5644 command_runner.go:130] > Feb 10 12:22:24 multinode-032400 kubelet[1648]: E0210 12:22:24.983675    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:07.027419    5644 command_runner.go:130] > Feb 10 12:22:26 multinode-032400 kubelet[1648]: E0210 12:22:26.983236    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:07.027419    5644 command_runner.go:130] > Feb 10 12:22:26 multinode-032400 kubelet[1648]: E0210 12:22:26.984691    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:07.027419    5644 command_runner.go:130] > Feb 10 12:22:28 multinode-032400 kubelet[1648]: E0210 12:22:28.013741    1648 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0210 12:23:07.027419    5644 command_runner.go:130] > Feb 10 12:22:28 multinode-032400 kubelet[1648]: E0210 12:22:28.989948    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:07.027419    5644 command_runner.go:130] > Feb 10 12:22:28 multinode-032400 kubelet[1648]: E0210 12:22:28.990610    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:07.027419    5644 command_runner.go:130] > Feb 10 12:22:30 multinode-032400 kubelet[1648]: E0210 12:22:30.698791    1648 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0210 12:23:07.027419    5644 command_runner.go:130] > Feb 10 12:22:30 multinode-032400 kubelet[1648]: E0210 12:22:30.698861    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e45a37bf-e7da-4129-bb7e-8be7dbe93e09-config-volume podName:e45a37bf-e7da-4129-bb7e-8be7dbe93e09 nodeName:}" failed. No retries permitted until 2025-02-10 12:23:02.698844474 +0000 UTC m=+69.980850115 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e45a37bf-e7da-4129-bb7e-8be7dbe93e09-config-volume") pod "coredns-668d6bf9bc-w8rr9" (UID: "e45a37bf-e7da-4129-bb7e-8be7dbe93e09") : object "kube-system"/"coredns" not registered
	I0210 12:23:07.027419    5644 command_runner.go:130] > Feb 10 12:22:30 multinode-032400 kubelet[1648]: E0210 12:22:30.799260    1648 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:07.027419    5644 command_runner.go:130] > Feb 10 12:22:30 multinode-032400 kubelet[1648]: E0210 12:22:30.799302    1648 projected.go:194] Error preparing data for projected volume kube-api-access-76fn6 for pod default/busybox-58667487b6-8shfg: object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:07.027982    5644 command_runner.go:130] > Feb 10 12:22:30 multinode-032400 kubelet[1648]: E0210 12:22:30.799372    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a3e86dc5-0523-4852-af77-3145d44eaa15-kube-api-access-76fn6 podName:a3e86dc5-0523-4852-af77-3145d44eaa15 nodeName:}" failed. No retries permitted until 2025-02-10 12:23:02.799354561 +0000 UTC m=+70.081360102 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-76fn6" (UniqueName: "kubernetes.io/projected/a3e86dc5-0523-4852-af77-3145d44eaa15-kube-api-access-76fn6") pod "busybox-58667487b6-8shfg" (UID: "a3e86dc5-0523-4852-af77-3145d44eaa15") : object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:07.028024    5644 command_runner.go:130] > Feb 10 12:22:30 multinode-032400 kubelet[1648]: E0210 12:22:30.983005    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:07.028024    5644 command_runner.go:130] > Feb 10 12:22:30 multinode-032400 kubelet[1648]: E0210 12:22:30.983695    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:07.028024    5644 command_runner.go:130] > Feb 10 12:22:31 multinode-032400 kubelet[1648]: I0210 12:22:31.703771    1648 scope.go:117] "RemoveContainer" containerID="182c8395f5e1754689bcf73e94e561717c684af55894a2bd4cbd9d5e8d3dff12"
	I0210 12:23:07.028024    5644 command_runner.go:130] > Feb 10 12:22:31 multinode-032400 kubelet[1648]: I0210 12:22:31.704207    1648 scope.go:117] "RemoveContainer" containerID="e57ea4c7f300b864e801bda292b196e3a270d6bd3eb9cfac6bf5f66a858f9f7c"
	I0210 12:23:07.028024    5644 command_runner.go:130] > Feb 10 12:22:31 multinode-032400 kubelet[1648]: E0210 12:22:31.704351    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(c5a7f602-d41a-4d9c-8fb3-5cf1bb41aca0)\"" pod="kube-system/storage-provisioner" podUID="c5a7f602-d41a-4d9c-8fb3-5cf1bb41aca0"
	I0210 12:23:07.028024    5644 command_runner.go:130] > Feb 10 12:22:32 multinode-032400 kubelet[1648]: E0210 12:22:32.981673    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:07.028024    5644 command_runner.go:130] > Feb 10 12:22:32 multinode-032400 kubelet[1648]: E0210 12:22:32.982991    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:07.028024    5644 command_runner.go:130] > Feb 10 12:22:33 multinode-032400 kubelet[1648]: E0210 12:22:33.015385    1648 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0210 12:23:07.028024    5644 command_runner.go:130] > Feb 10 12:22:34 multinode-032400 kubelet[1648]: E0210 12:22:34.989854    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:07.028024    5644 command_runner.go:130] > Feb 10 12:22:34 multinode-032400 kubelet[1648]: E0210 12:22:34.989994    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:07.028024    5644 command_runner.go:130] > Feb 10 12:22:36 multinode-032400 kubelet[1648]: E0210 12:22:36.982057    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:07.028024    5644 command_runner.go:130] > Feb 10 12:22:36 multinode-032400 kubelet[1648]: E0210 12:22:36.982423    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:07.028024    5644 command_runner.go:130] > Feb 10 12:22:38 multinode-032400 kubelet[1648]: E0210 12:22:38.016614    1648 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0210 12:23:07.028024    5644 command_runner.go:130] > Feb 10 12:22:38 multinode-032400 kubelet[1648]: E0210 12:22:38.982466    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:07.028024    5644 command_runner.go:130] > Feb 10 12:22:38 multinode-032400 kubelet[1648]: E0210 12:22:38.982828    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:07.028620    5644 command_runner.go:130] > Feb 10 12:22:40 multinode-032400 kubelet[1648]: E0210 12:22:40.981790    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:07.028620    5644 command_runner.go:130] > Feb 10 12:22:40 multinode-032400 kubelet[1648]: E0210 12:22:40.986032    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:07.028620    5644 command_runner.go:130] > Feb 10 12:22:42 multinode-032400 kubelet[1648]: E0210 12:22:42.983120    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:07.028620    5644 command_runner.go:130] > Feb 10 12:22:42 multinode-032400 kubelet[1648]: E0210 12:22:42.983608    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:07.028620    5644 command_runner.go:130] > Feb 10 12:22:43 multinode-032400 kubelet[1648]: E0210 12:22:43.017646    1648 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0210 12:23:07.028620    5644 command_runner.go:130] > Feb 10 12:22:43 multinode-032400 kubelet[1648]: I0210 12:22:43.982665    1648 scope.go:117] "RemoveContainer" containerID="e57ea4c7f300b864e801bda292b196e3a270d6bd3eb9cfac6bf5f66a858f9f7c"
	I0210 12:23:07.029143    5644 command_runner.go:130] > Feb 10 12:22:44 multinode-032400 kubelet[1648]: E0210 12:22:44.981714    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:07.029175    5644 command_runner.go:130] > Feb 10 12:22:44 multinode-032400 kubelet[1648]: E0210 12:22:44.982071    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:07.029175    5644 command_runner.go:130] > Feb 10 12:22:46 multinode-032400 kubelet[1648]: E0210 12:22:46.982545    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:07.029175    5644 command_runner.go:130] > Feb 10 12:22:46 multinode-032400 kubelet[1648]: E0210 12:22:46.982810    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:07.029175    5644 command_runner.go:130] > Feb 10 12:22:52 multinode-032400 kubelet[1648]: I0210 12:22:52.997714    1648 scope.go:117] "RemoveContainer" containerID="3ae31c3c37c9f6044b62cd6f5c446e15340e314275faee0b1de4d66baa716012"
	I0210 12:23:07.029175    5644 command_runner.go:130] > Feb 10 12:22:53 multinode-032400 kubelet[1648]: E0210 12:22:53.021472    1648 iptables.go:577] "Could not set up iptables canary" err=<
	I0210 12:23:07.029175    5644 command_runner.go:130] > Feb 10 12:22:53 multinode-032400 kubelet[1648]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0210 12:23:07.029175    5644 command_runner.go:130] > Feb 10 12:22:53 multinode-032400 kubelet[1648]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0210 12:23:07.029175    5644 command_runner.go:130] > Feb 10 12:22:53 multinode-032400 kubelet[1648]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0210 12:23:07.029175    5644 command_runner.go:130] > Feb 10 12:22:53 multinode-032400 kubelet[1648]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0210 12:23:07.029175    5644 command_runner.go:130] > Feb 10 12:22:53 multinode-032400 kubelet[1648]: I0210 12:22:53.042255    1648 scope.go:117] "RemoveContainer" containerID="9f1c4e9b3353b37f1f4c40563f1e312a49acd43d398e81cab61930d654fbb0d9"
	I0210 12:23:07.029175    5644 command_runner.go:130] > Feb 10 12:23:03 multinode-032400 kubelet[1648]: I0210 12:23:03.312056    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e58006549b603113870cd02660dd94559d10ecb0913a1bdd2c81910c3d60a706"
	I0210 12:23:07.075794    5644 logs.go:123] Gathering logs for dmesg ...
	I0210 12:23:07.075794    5644 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 12:23:07.098642    5644 command_runner.go:130] > [Feb10 12:20] You have booted with nomodeset. This means your GPU drivers are DISABLED
	I0210 12:23:07.098642    5644 command_runner.go:130] > [  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	I0210 12:23:07.098692    5644 command_runner.go:130] > [  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	I0210 12:23:07.098692    5644 command_runner.go:130] > [  +0.108726] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	I0210 12:23:07.098692    5644 command_runner.go:130] > [  +0.024202] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	I0210 12:23:07.098692    5644 command_runner.go:130] > [  +0.000005] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	I0210 12:23:07.098692    5644 command_runner.go:130] > [  +0.000001] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	I0210 12:23:07.098692    5644 command_runner.go:130] > [  +0.062099] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	I0210 12:23:07.098692    5644 command_runner.go:130] > [  +0.027667] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
	I0210 12:23:07.098692    5644 command_runner.go:130] >               * this clock source is slow. Consider trying other clock sources
	I0210 12:23:07.098819    5644 command_runner.go:130] > [  +6.593821] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I0210 12:23:07.098819    5644 command_runner.go:130] > [  +1.312003] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	I0210 12:23:07.098819    5644 command_runner.go:130] > [  +1.333341] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	I0210 12:23:07.098819    5644 command_runner.go:130] > [  +7.447705] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	I0210 12:23:07.098878    5644 command_runner.go:130] > [  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	I0210 12:23:07.098878    5644 command_runner.go:130] > [  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	I0210 12:23:07.098878    5644 command_runner.go:130] > [Feb10 12:21] systemd-fstab-generator[632]: Ignoring "noauto" option for root device
	I0210 12:23:07.098878    5644 command_runner.go:130] > [  +0.179509] systemd-fstab-generator[644]: Ignoring "noauto" option for root device
	I0210 12:23:07.098878    5644 command_runner.go:130] > [ +24.780658] systemd-fstab-generator[1027]: Ignoring "noauto" option for root device
	I0210 12:23:07.098878    5644 command_runner.go:130] > [  +0.095136] kauditd_printk_skb: 75 callbacks suppressed
	I0210 12:23:07.098878    5644 command_runner.go:130] > [  +0.491986] systemd-fstab-generator[1067]: Ignoring "noauto" option for root device
	I0210 12:23:07.098878    5644 command_runner.go:130] > [  +0.201276] systemd-fstab-generator[1079]: Ignoring "noauto" option for root device
	I0210 12:23:07.098878    5644 command_runner.go:130] > [  +0.245181] systemd-fstab-generator[1093]: Ignoring "noauto" option for root device
	I0210 12:23:07.098878    5644 command_runner.go:130] > [  +2.938213] systemd-fstab-generator[1334]: Ignoring "noauto" option for root device
	I0210 12:23:07.098878    5644 command_runner.go:130] > [  +0.187151] systemd-fstab-generator[1346]: Ignoring "noauto" option for root device
	I0210 12:23:07.098878    5644 command_runner.go:130] > [  +0.177538] systemd-fstab-generator[1358]: Ignoring "noauto" option for root device
	I0210 12:23:07.098878    5644 command_runner.go:130] > [  +0.257474] systemd-fstab-generator[1373]: Ignoring "noauto" option for root device
	I0210 12:23:07.098878    5644 command_runner.go:130] > [  +0.873551] systemd-fstab-generator[1498]: Ignoring "noauto" option for root device
	I0210 12:23:07.098878    5644 command_runner.go:130] > [  +0.096158] kauditd_printk_skb: 206 callbacks suppressed
	I0210 12:23:07.098878    5644 command_runner.go:130] > [  +3.180590] systemd-fstab-generator[1641]: Ignoring "noauto" option for root device
	I0210 12:23:07.098878    5644 command_runner.go:130] > [  +1.822530] kauditd_printk_skb: 69 callbacks suppressed
	I0210 12:23:07.098878    5644 command_runner.go:130] > [  +5.251804] kauditd_printk_skb: 5 callbacks suppressed
	I0210 12:23:07.098878    5644 command_runner.go:130] > [Feb10 12:22] systemd-fstab-generator[2523]: Ignoring "noauto" option for root device
	I0210 12:23:07.098878    5644 command_runner.go:130] > [ +27.335254] kauditd_printk_skb: 72 callbacks suppressed
	I0210 12:23:07.100412    5644 logs.go:123] Gathering logs for coredns [9240ce80f94c] ...
	I0210 12:23:07.100412    5644 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9240ce80f94c"
	I0210 12:23:07.132442    5644 command_runner.go:130] > .:53
	I0210 12:23:07.132442    5644 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = fef4eccd4948b7d76fb3b866fead119cfbf3f792b566bc1c23dd6fceadf676c5da93bdd173179090b2c74bc875a74fc76ef5e51dcb6a910d6a8b189470a4fe6b
	I0210 12:23:07.132442    5644 command_runner.go:130] > CoreDNS-1.11.3
	I0210 12:23:07.132442    5644 command_runner.go:130] > linux/amd64, go1.21.11, a6338e9
	I0210 12:23:07.132442    5644 command_runner.go:130] > [INFO] 127.0.0.1:39941 - 15725 "HINFO IN 3724374125237206977.4573805755432620257. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.053165792s
	I0210 12:23:09.641102    5644 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 12:23:09.667905    5644 command_runner.go:130] > 2008
	I0210 12:23:09.667905    5644 api_server.go:72] duration metric: took 1m6.4207823s to wait for apiserver process to appear ...
	I0210 12:23:09.667905    5644 api_server.go:88] waiting for apiserver healthz status ...
	I0210 12:23:09.673765    5644 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0210 12:23:09.706089    5644 command_runner.go:130] > f368bd876774
	I0210 12:23:09.706215    5644 logs.go:282] 1 containers: [f368bd876774]
	I0210 12:23:09.712941    5644 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0210 12:23:09.742583    5644 command_runner.go:130] > 2c0b97381825
	I0210 12:23:09.742583    5644 logs.go:282] 1 containers: [2c0b97381825]
	I0210 12:23:09.749585    5644 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0210 12:23:09.772165    5644 command_runner.go:130] > 9240ce80f94c
	I0210 12:23:09.772165    5644 command_runner.go:130] > c5b854dbb912
	I0210 12:23:09.773174    5644 logs.go:282] 2 containers: [9240ce80f94c c5b854dbb912]
	I0210 12:23:09.780166    5644 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0210 12:23:09.806181    5644 command_runner.go:130] > 440b6adf4512
	I0210 12:23:09.806613    5644 command_runner.go:130] > adf520f9b9d7
	I0210 12:23:09.806889    5644 logs.go:282] 2 containers: [440b6adf4512 adf520f9b9d7]
	I0210 12:23:09.815130    5644 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0210 12:23:09.845825    5644 command_runner.go:130] > 6640b4e3d696
	I0210 12:23:09.845825    5644 command_runner.go:130] > 148309413de8
	I0210 12:23:09.845825    5644 logs.go:282] 2 containers: [6640b4e3d696 148309413de8]
	I0210 12:23:09.853235    5644 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0210 12:23:09.879290    5644 command_runner.go:130] > bd1666238ae6
	I0210 12:23:09.879290    5644 command_runner.go:130] > 9408ce83d7d3
	I0210 12:23:09.883447    5644 logs.go:282] 2 containers: [bd1666238ae6 9408ce83d7d3]
	I0210 12:23:09.892307    5644 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0210 12:23:09.920491    5644 command_runner.go:130] > efc2d4164d81
	I0210 12:23:09.920491    5644 command_runner.go:130] > 4439940fa5f4
	I0210 12:23:09.920861    5644 logs.go:282] 2 containers: [efc2d4164d81 4439940fa5f4]
	I0210 12:23:09.920861    5644 logs.go:123] Gathering logs for coredns [c5b854dbb912] ...
	I0210 12:23:09.920942    5644 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5b854dbb912"
	I0210 12:23:09.955462    5644 command_runner.go:130] > .:53
	I0210 12:23:09.955529    5644 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = fef4eccd4948b7d76fb3b866fead119cfbf3f792b566bc1c23dd6fceadf676c5da93bdd173179090b2c74bc875a74fc76ef5e51dcb6a910d6a8b189470a4fe6b
	I0210 12:23:09.955529    5644 command_runner.go:130] > CoreDNS-1.11.3
	I0210 12:23:09.955529    5644 command_runner.go:130] > linux/amd64, go1.21.11, a6338e9
	I0210 12:23:09.955529    5644 command_runner.go:130] > [INFO] 127.0.0.1:57159 - 43532 "HINFO IN 6094843902663837130.722983224060727812. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.056926603s
	I0210 12:23:09.955589    5644 command_runner.go:130] > [INFO] 10.244.1.2:54851 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000385004s
	I0210 12:23:09.955589    5644 command_runner.go:130] > [INFO] 10.244.1.2:36917 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.071166415s
	I0210 12:23:09.955620    5644 command_runner.go:130] > [INFO] 10.244.1.2:35134 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.03235507s
	I0210 12:23:09.955657    5644 command_runner.go:130] > [INFO] 10.244.1.2:37507 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.161129695s
	I0210 12:23:09.955657    5644 command_runner.go:130] > [INFO] 10.244.0.3:55555 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000265804s
	I0210 12:23:09.955657    5644 command_runner.go:130] > [INFO] 10.244.0.3:44984 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000263303s
	I0210 12:23:09.955657    5644 command_runner.go:130] > [INFO] 10.244.0.3:33618 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.000192703s
	I0210 12:23:09.955730    5644 command_runner.go:130] > [INFO] 10.244.0.3:33701 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.000137201s
	I0210 12:23:09.955730    5644 command_runner.go:130] > [INFO] 10.244.1.2:48882 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000140601s
	I0210 12:23:09.955781    5644 command_runner.go:130] > [INFO] 10.244.1.2:59416 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.037067822s
	I0210 12:23:09.955801    5644 command_runner.go:130] > [INFO] 10.244.1.2:37164 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000261703s
	I0210 12:23:09.955801    5644 command_runner.go:130] > [INFO] 10.244.1.2:47541 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000172402s
	I0210 12:23:09.955801    5644 command_runner.go:130] > [INFO] 10.244.1.2:46192 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.033005976s
	I0210 12:23:09.955801    5644 command_runner.go:130] > [INFO] 10.244.1.2:33821 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000127301s
	I0210 12:23:09.955869    5644 command_runner.go:130] > [INFO] 10.244.1.2:35703 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000116001s
	I0210 12:23:09.955869    5644 command_runner.go:130] > [INFO] 10.244.1.2:37192 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000173702s
	I0210 12:23:09.955869    5644 command_runner.go:130] > [INFO] 10.244.0.3:49175 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000188802s
	I0210 12:23:09.955936    5644 command_runner.go:130] > [INFO] 10.244.0.3:55342 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000074701s
	I0210 12:23:09.955936    5644 command_runner.go:130] > [INFO] 10.244.0.3:52814 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000156002s
	I0210 12:23:09.955936    5644 command_runner.go:130] > [INFO] 10.244.0.3:36559 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000108801s
	I0210 12:23:09.956003    5644 command_runner.go:130] > [INFO] 10.244.0.3:35829 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0000551s
	I0210 12:23:09.956003    5644 command_runner.go:130] > [INFO] 10.244.0.3:38348 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000180702s
	I0210 12:23:09.956003    5644 command_runner.go:130] > [INFO] 10.244.0.3:39722 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000109501s
	I0210 12:23:09.956003    5644 command_runner.go:130] > [INFO] 10.244.0.3:40924 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000201202s
	I0210 12:23:09.956072    5644 command_runner.go:130] > [INFO] 10.244.1.2:34735 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000125801s
	I0210 12:23:09.956072    5644 command_runner.go:130] > [INFO] 10.244.1.2:36088 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000233303s
	I0210 12:23:09.956072    5644 command_runner.go:130] > [INFO] 10.244.1.2:55464 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000190403s
	I0210 12:23:09.956072    5644 command_runner.go:130] > [INFO] 10.244.1.2:57911 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000176202s
	I0210 12:23:09.956072    5644 command_runner.go:130] > [INFO] 10.244.0.3:34977 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000276903s
	I0210 12:23:09.956145    5644 command_runner.go:130] > [INFO] 10.244.0.3:60203 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000181802s
	I0210 12:23:09.956145    5644 command_runner.go:130] > [INFO] 10.244.0.3:40189 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000167902s
	I0210 12:23:09.956212    5644 command_runner.go:130] > [INFO] 10.244.0.3:59008 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000133001s
	I0210 12:23:09.956212    5644 command_runner.go:130] > [INFO] 10.244.1.2:56936 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000191602s
	I0210 12:23:09.956212    5644 command_runner.go:130] > [INFO] 10.244.1.2:36536 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000153201s
	I0210 12:23:09.956278    5644 command_runner.go:130] > [INFO] 10.244.1.2:38856 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000169502s
	I0210 12:23:09.956278    5644 command_runner.go:130] > [INFO] 10.244.1.2:53005 - 5 "PTR IN 1.128.29.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000186102s
	I0210 12:23:09.956278    5644 command_runner.go:130] > [INFO] 10.244.0.3:43643 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00092191s
	I0210 12:23:09.956278    5644 command_runner.go:130] > [INFO] 10.244.0.3:44109 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000341503s
	I0210 12:23:09.956345    5644 command_runner.go:130] > [INFO] 10.244.0.3:37196 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000095701s
	I0210 12:23:09.956345    5644 command_runner.go:130] > [INFO] 10.244.0.3:33917 - 5 "PTR IN 1.128.29.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000152102s
	I0210 12:23:09.956345    5644 command_runner.go:130] > [INFO] SIGTERM: Shutting down servers then terminating
	I0210 12:23:09.956345    5644 command_runner.go:130] > [INFO] plugin/health: Going into lameduck mode for 5s
	I0210 12:23:09.959761    5644 logs.go:123] Gathering logs for kube-scheduler [440b6adf4512] ...
	I0210 12:23:09.959827    5644 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 440b6adf4512"
	I0210 12:23:09.987722    5644 command_runner.go:130] ! I0210 12:21:56.035084       1 serving.go:386] Generated self-signed cert in-memory
	I0210 12:23:09.987809    5644 command_runner.go:130] ! W0210 12:21:58.137751       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0210 12:23:09.987809    5644 command_runner.go:130] ! W0210 12:21:58.138031       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0210 12:23:09.987809    5644 command_runner.go:130] ! W0210 12:21:58.138229       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0210 12:23:09.987924    5644 command_runner.go:130] ! W0210 12:21:58.138350       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0210 12:23:09.987924    5644 command_runner.go:130] ! I0210 12:21:58.239766       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.1"
	I0210 12:23:09.987924    5644 command_runner.go:130] ! I0210 12:21:58.239885       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0210 12:23:09.987924    5644 command_runner.go:130] ! I0210 12:21:58.246917       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0210 12:23:09.988011    5644 command_runner.go:130] ! I0210 12:21:58.246962       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0210 12:23:09.988011    5644 command_runner.go:130] ! I0210 12:21:58.248143       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0210 12:23:09.988011    5644 command_runner.go:130] ! I0210 12:21:58.248624       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0210 12:23:09.988011    5644 command_runner.go:130] ! I0210 12:21:58.347443       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0210 12:23:09.990942    5644 logs.go:123] Gathering logs for kube-proxy [148309413de8] ...
	I0210 12:23:09.991023    5644 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 148309413de8"
	I0210 12:23:10.021564    5644 command_runner.go:130] ! I0210 11:59:18.625067       1 server_linux.go:66] "Using iptables proxy"
	I0210 12:23:10.021639    5644 command_runner.go:130] ! E0210 11:59:18.658116       1 proxier.go:733] "Error cleaning up nftables rules" err=<
	I0210 12:23:10.021639    5644 command_runner.go:130] ! 	could not run nftables command: /dev/stdin:1:1-24: Error: Could not process rule: Operation not supported
	I0210 12:23:10.021707    5644 command_runner.go:130] ! 	add table ip kube-proxy
	I0210 12:23:10.021707    5644 command_runner.go:130] ! 	^^^^^^^^^^^^^^^^^^^^^^^^
	I0210 12:23:10.021707    5644 command_runner.go:130] !  >
	I0210 12:23:10.021734    5644 command_runner.go:130] ! E0210 11:59:18.694686       1 proxier.go:733] "Error cleaning up nftables rules" err=<
	I0210 12:23:10.021759    5644 command_runner.go:130] ! 	could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
	I0210 12:23:10.021759    5644 command_runner.go:130] ! 	add table ip6 kube-proxy
	I0210 12:23:10.021759    5644 command_runner.go:130] ! 	^^^^^^^^^^^^^^^^^^^^^^^^^
	I0210 12:23:10.021759    5644 command_runner.go:130] !  >
	I0210 12:23:10.021759    5644 command_runner.go:130] ! I0210 11:59:19.251769       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["172.29.136.201"]
	I0210 12:23:10.021830    5644 command_runner.go:130] ! E0210 11:59:19.252111       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0210 12:23:10.021830    5644 command_runner.go:130] ! I0210 11:59:19.312427       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0210 12:23:10.021830    5644 command_runner.go:130] ! I0210 11:59:19.312556       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0210 12:23:10.021906    5644 command_runner.go:130] ! I0210 11:59:19.312585       1 server_linux.go:170] "Using iptables Proxier"
	I0210 12:23:10.021927    5644 command_runner.go:130] ! I0210 11:59:19.317423       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0210 12:23:10.021927    5644 command_runner.go:130] ! I0210 11:59:19.318586       1 server.go:497] "Version info" version="v1.32.1"
	I0210 12:23:10.021991    5644 command_runner.go:130] ! I0210 11:59:19.318681       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0210 12:23:10.022011    5644 command_runner.go:130] ! I0210 11:59:19.321084       1 config.go:199] "Starting service config controller"
	I0210 12:23:10.022011    5644 command_runner.go:130] ! I0210 11:59:19.321134       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0210 12:23:10.022011    5644 command_runner.go:130] ! I0210 11:59:19.321326       1 config.go:105] "Starting endpoint slice config controller"
	I0210 12:23:10.022075    5644 command_runner.go:130] ! I0210 11:59:19.321339       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0210 12:23:10.022075    5644 command_runner.go:130] ! I0210 11:59:19.322061       1 config.go:329] "Starting node config controller"
	I0210 12:23:10.022075    5644 command_runner.go:130] ! I0210 11:59:19.322091       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0210 12:23:10.022075    5644 command_runner.go:130] ! I0210 11:59:19.421972       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0210 12:23:10.022140    5644 command_runner.go:130] ! I0210 11:59:19.422030       1 shared_informer.go:320] Caches are synced for service config
	I0210 12:23:10.022140    5644 command_runner.go:130] ! I0210 11:59:19.423298       1 shared_informer.go:320] Caches are synced for node config
	I0210 12:23:10.028111    5644 logs.go:123] Gathering logs for kubelet ...
	I0210 12:23:10.028111    5644 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 12:23:10.063433    5644 command_runner.go:130] > Feb 10 12:21:49 multinode-032400 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0210 12:23:10.063433    5644 command_runner.go:130] > Feb 10 12:21:49 multinode-032400 kubelet[1505]: I0210 12:21:49.803865    1505 server.go:520] "Kubelet version" kubeletVersion="v1.32.1"
	I0210 12:23:10.063433    5644 command_runner.go:130] > Feb 10 12:21:49 multinode-032400 kubelet[1505]: I0210 12:21:49.804150    1505 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0210 12:23:10.063529    5644 command_runner.go:130] > Feb 10 12:21:49 multinode-032400 kubelet[1505]: I0210 12:21:49.806616    1505 server.go:954] "Client rotation is on, will bootstrap in background"
	I0210 12:23:10.063529    5644 command_runner.go:130] > Feb 10 12:21:49 multinode-032400 kubelet[1505]: E0210 12:21:49.806785    1505 run.go:72] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0210 12:23:10.063529    5644 command_runner.go:130] > Feb 10 12:21:49 multinode-032400 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0210 12:23:10.063585    5644 command_runner.go:130] > Feb 10 12:21:49 multinode-032400 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0210 12:23:10.063585    5644 command_runner.go:130] > Feb 10 12:21:50 multinode-032400 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	I0210 12:23:10.063585    5644 command_runner.go:130] > Feb 10 12:21:50 multinode-032400 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0210 12:23:10.063585    5644 command_runner.go:130] > Feb 10 12:21:50 multinode-032400 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0210 12:23:10.063585    5644 command_runner.go:130] > Feb 10 12:21:50 multinode-032400 kubelet[1561]: I0210 12:21:50.532407    1561 server.go:520] "Kubelet version" kubeletVersion="v1.32.1"
	I0210 12:23:10.063585    5644 command_runner.go:130] > Feb 10 12:21:50 multinode-032400 kubelet[1561]: I0210 12:21:50.532561    1561 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0210 12:23:10.063585    5644 command_runner.go:130] > Feb 10 12:21:50 multinode-032400 kubelet[1561]: I0210 12:21:50.532946    1561 server.go:954] "Client rotation is on, will bootstrap in background"
	I0210 12:23:10.063585    5644 command_runner.go:130] > Feb 10 12:21:50 multinode-032400 kubelet[1561]: E0210 12:21:50.533006    1561 run.go:72] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0210 12:23:10.063585    5644 command_runner.go:130] > Feb 10 12:21:50 multinode-032400 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0210 12:23:10.063585    5644 command_runner.go:130] > Feb 10 12:21:50 multinode-032400 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0210 12:23:10.063585    5644 command_runner.go:130] > Feb 10 12:21:50 multinode-032400 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0210 12:23:10.063585    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0210 12:23:10.063585    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.804000    1648 server.go:520] "Kubelet version" kubeletVersion="v1.32.1"
	I0210 12:23:10.063585    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.804091    1648 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0210 12:23:10.063585    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.807532    1648 server.go:954] "Client rotation is on, will bootstrap in background"
	I0210 12:23:10.063585    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.810518    1648 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	I0210 12:23:10.063585    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.831401    1648 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0210 12:23:10.063585    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: E0210 12:21:52.849603    1648 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
	I0210 12:23:10.063585    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.849766    1648 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
	I0210 12:23:10.063585    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.855712    1648 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
	I0210 12:23:10.063585    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.855847    1648 server.go:841] "NoSwap is set due to memorySwapBehavior not specified" memorySwapBehavior="" FailSwapOn=false
	I0210 12:23:10.063585    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.857145    1648 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	I0210 12:23:10.063585    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.857321    1648 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"multinode-032400","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
	I0210 12:23:10.063585    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.857850    1648 topology_manager.go:138] "Creating topology manager with none policy"
	I0210 12:23:10.063585    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.857944    1648 container_manager_linux.go:304] "Creating device plugin manager"
	I0210 12:23:10.064110    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.858196    1648 state_mem.go:36] "Initialized new in-memory state store"
	I0210 12:23:10.064110    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.860593    1648 kubelet.go:446] "Attempting to sync node with API server"
	I0210 12:23:10.064110    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.860751    1648 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
	I0210 12:23:10.064152    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.860860    1648 kubelet.go:352] "Adding apiserver pod source"
	I0210 12:23:10.064152    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.860954    1648 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	I0210 12:23:10.064152    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.866997    1648 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="docker" version="27.4.0" apiVersion="v1"
	I0210 12:23:10.064152    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: W0210 12:21:52.869638    1648 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-032400&limit=500&resourceVersion=0": dial tcp 172.29.129.181:8443: connect: connection refused
	I0210 12:23:10.064152    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: E0210 12:21:52.869825    1648 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-032400&limit=500&resourceVersion=0\": dial tcp 172.29.129.181:8443: connect: connection refused" logger="UnhandledError"
	I0210 12:23:10.064152    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.872904    1648 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
	I0210 12:23:10.064152    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: W0210 12:21:52.873510    1648 probe.go:272] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
	I0210 12:23:10.064152    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: W0210 12:21:52.885546    1648 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.29.129.181:8443: connect: connection refused
	I0210 12:23:10.064152    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: E0210 12:21:52.885641    1648 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.29.129.181:8443: connect: connection refused" logger="UnhandledError"
	I0210 12:23:10.064152    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.886839    1648 watchdog_linux.go:99] "Systemd watchdog is not enabled"
	I0210 12:23:10.064152    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.886957    1648 server.go:1287] "Started kubelet"
	I0210 12:23:10.064152    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.895251    1648 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
	I0210 12:23:10.064152    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.897245    1648 server.go:490] "Adding debug handlers to kubelet server"
	I0210 12:23:10.064152    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.899864    1648 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
	I0210 12:23:10.064152    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.900113    1648 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
	I0210 12:23:10.064152    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.900986    1648 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	I0210 12:23:10.064152    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.901519    1648 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
	I0210 12:23:10.064152    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: E0210 12:21:52.904529    1648 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 172.29.129.181:8443: connect: connection refused" event="&Event{ObjectMeta:{multinode-032400.1822d8316b7ef394  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:multinode-032400,UID:multinode-032400,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:multinode-032400,},FirstTimestamp:2025-02-10 12:21:52.886911892 +0000 UTC m=+0.168917533,LastTimestamp:2025-02-10 12:21:52.886911892 +0000 UTC m=+0.168917533,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:multinode-032400,}"
	I0210 12:23:10.064152    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.918528    1648 volume_manager.go:297] "Starting Kubelet Volume Manager"
	I0210 12:23:10.064152    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: E0210 12:21:52.918989    1648 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"multinode-032400\" not found"
	I0210 12:23:10.064152    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.920907    1648 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
	I0210 12:23:10.064152    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.932441    1648 reconciler.go:26] "Reconciler: start to sync state"
	I0210 12:23:10.064152    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: E0210 12:21:52.940004    1648 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-032400?timeout=10s\": dial tcp 172.29.129.181:8443: connect: connection refused" interval="200ms"
	I0210 12:23:10.064705    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.943065    1648 factory.go:221] Registration of the systemd container factory successfully
	I0210 12:23:10.064705    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.943251    1648 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
	I0210 12:23:10.064705    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.943289    1648 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
	I0210 12:23:10.064705    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: W0210 12:21:52.954939    1648 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.29.129.181:8443: connect: connection refused
	I0210 12:23:10.064847    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: E0210 12:21:52.956281    1648 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.29.129.181:8443: connect: connection refused" logger="UnhandledError"
	I0210 12:23:10.064847    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.962018    1648 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
	I0210 12:23:10.064847    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.981120    1648 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
	I0210 12:23:10.064908    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.981191    1648 status_manager.go:227] "Starting to sync pod status with apiserver"
	I0210 12:23:10.064908    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.981212    1648 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
	I0210 12:23:10.064964    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.981234    1648 kubelet.go:2388] "Starting kubelet main sync loop"
	I0210 12:23:10.064964    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: E0210 12:21:52.981274    1648 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
	I0210 12:23:10.065006    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: W0210 12:21:52.985240    1648 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.29.129.181:8443: connect: connection refused
	I0210 12:23:10.065042    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: E0210 12:21:52.985423    1648 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.29.129.181:8443: connect: connection refused" logger="UnhandledError"
	I0210 12:23:10.065114    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.986221    1648 cpu_manager.go:221] "Starting CPU manager" policy="none"
	I0210 12:23:10.065114    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.986328    1648 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
	I0210 12:23:10.065154    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.986418    1648 state_mem.go:36] "Initialized new in-memory state store"
	I0210 12:23:10.065154    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.988035    1648 state_mem.go:88] "Updated default CPUSet" cpuSet=""
	I0210 12:23:10.065190    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.988140    1648 state_mem.go:96] "Updated CPUSet assignments" assignments={}
	I0210 12:23:10.065190    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.988290    1648 policy_none.go:49] "None policy: Start"
	I0210 12:23:10.065228    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.988339    1648 memory_manager.go:186] "Starting memorymanager" policy="None"
	I0210 12:23:10.065251    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.988429    1648 state_mem.go:35] "Initializing new in-memory state store"
	I0210 12:23:10.065278    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.989333    1648 state_mem.go:75] "Updated machine memory state"
	I0210 12:23:10.065302    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.996399    1648 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
	I0210 12:23:10.065302    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.996729    1648 eviction_manager.go:189] "Eviction manager: starting control loop"
	I0210 12:23:10.065386    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.996761    1648 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
	I0210 12:23:10.065415    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.999441    1648 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
	I0210 12:23:10.065415    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: E0210 12:21:53.001480    1648 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
	I0210 12:23:10.065451    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: E0210 12:21:53.001594    1648 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"multinode-032400\" not found"
	I0210 12:23:10.065451    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: E0210 12:21:53.010100    1648 iptables.go:577] "Could not set up iptables canary" err=<
	I0210 12:23:10.065521    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0210 12:23:10.065539    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0210 12:23:10.065539    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0210 12:23:10.065539    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0210 12:23:10.065587    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.082130    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b2de8e426f22f9496390d2d8a09910a842da6580933349d6688cd4b1320ea550"
	I0210 12:23:10.065587    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.082209    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed034f1578c961d966543fd6901ac487893b2d9c55235293f852b6eba2ffef59"
	I0210 12:23:10.065587    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.082229    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26d9e119a02c5d37077ce2b8aaf0eaf39a16e310dfa75b55d4072355af0799f3"
	I0210 12:23:10.065587    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.085961    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a70f430921ec259ed18ded033aa4e0f2018d948e5ebeaaecbd04d96a1cf7a198"
	I0210 12:23:10.065587    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.092339    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d33433fbce4800c4588851f91b9c8bbf2f6cb1549a9a6e7003bd3ad9ab95e6c9"
	I0210 12:23:10.065587    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: E0210 12:21:53.095136    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-032400\" not found" node="multinode-032400"
	I0210 12:23:10.065587    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.097863    1648 kubelet_node_status.go:76] "Attempting to register node" node="multinode-032400"
	I0210 12:23:10.065587    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: E0210 12:21:53.099090    1648 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.29.129.181:8443: connect: connection refused" node="multinode-032400"
	I0210 12:23:10.065587    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.108335    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ccc0a4e7b5c734e34ed5ec3983417c1afdf297c58cd82400d7f9d24b8f82cac"
	I0210 12:23:10.065587    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.127358    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="794995bca6b5b46f3dd1b76ae2e6fa45046bd172772508f61b870824d72e297b"
	I0210 12:23:10.065587    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: E0210 12:21:53.141735    1648 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-032400?timeout=10s\": dial tcp 172.29.129.181:8443: connect: connection refused" interval="400ms"
	I0210 12:23:10.065587    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.142956    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c55184f16ccb79ec11ca696b1c88e9db9a9568bbeeccb401543d2aabab9daa4"
	I0210 12:23:10.065587    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.145714    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a8fa4178fdde9f0146fd2a294125bbe5-usr-share-ca-certificates\") pod \"kube-apiserver-multinode-032400\" (UID: \"a8fa4178fdde9f0146fd2a294125bbe5\") " pod="kube-system/kube-apiserver-multinode-032400"
	I0210 12:23:10.065587    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.145888    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0b073beca2a0e25ad8459b3107e863e7-flexvolume-dir\") pod \"kube-controller-manager-multinode-032400\" (UID: \"0b073beca2a0e25ad8459b3107e863e7\") " pod="kube-system/kube-controller-manager-multinode-032400"
	I0210 12:23:10.065587    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.145935    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b073beca2a0e25ad8459b3107e863e7-kubeconfig\") pod \"kube-controller-manager-multinode-032400\" (UID: \"0b073beca2a0e25ad8459b3107e863e7\") " pod="kube-system/kube-controller-manager-multinode-032400"
	I0210 12:23:10.065587    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.146017    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0b073beca2a0e25ad8459b3107e863e7-usr-share-ca-certificates\") pod \"kube-controller-manager-multinode-032400\" (UID: \"0b073beca2a0e25ad8459b3107e863e7\") " pod="kube-system/kube-controller-manager-multinode-032400"
	I0210 12:23:10.065587    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.146081    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/a56aca3861e4dd5038c600d32d99becd-etcd-certs\") pod \"etcd-multinode-032400\" (UID: \"a56aca3861e4dd5038c600d32d99becd\") " pod="kube-system/etcd-multinode-032400"
	I0210 12:23:10.065587    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.146213    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/a56aca3861e4dd5038c600d32d99becd-etcd-data\") pod \"etcd-multinode-032400\" (UID: \"a56aca3861e4dd5038c600d32d99becd\") " pod="kube-system/etcd-multinode-032400"
	I0210 12:23:10.065587    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.146299    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a8fa4178fdde9f0146fd2a294125bbe5-ca-certs\") pod \"kube-apiserver-multinode-032400\" (UID: \"a8fa4178fdde9f0146fd2a294125bbe5\") " pod="kube-system/kube-apiserver-multinode-032400"
	I0210 12:23:10.066113    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.146332    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a8fa4178fdde9f0146fd2a294125bbe5-k8s-certs\") pod \"kube-apiserver-multinode-032400\" (UID: \"a8fa4178fdde9f0146fd2a294125bbe5\") " pod="kube-system/kube-apiserver-multinode-032400"
	I0210 12:23:10.066182    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.146395    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e23fa9a4a53da4e595583d7b35b39311-kubeconfig\") pod \"kube-scheduler-multinode-032400\" (UID: \"e23fa9a4a53da4e595583d7b35b39311\") " pod="kube-system/kube-scheduler-multinode-032400"
	I0210 12:23:10.066198    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.146480    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0b073beca2a0e25ad8459b3107e863e7-ca-certs\") pod \"kube-controller-manager-multinode-032400\" (UID: \"0b073beca2a0e25ad8459b3107e863e7\") " pod="kube-system/kube-controller-manager-multinode-032400"
	I0210 12:23:10.066248    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.146687    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0b073beca2a0e25ad8459b3107e863e7-k8s-certs\") pod \"kube-controller-manager-multinode-032400\" (UID: \"0b073beca2a0e25ad8459b3107e863e7\") " pod="kube-system/kube-controller-manager-multinode-032400"
	I0210 12:23:10.066248    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.162937    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee16b295f58db486a506e81b42b011f8d6d50d2a52f1bea55481552cfb51c94e"
	I0210 12:23:10.066248    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: E0210 12:21:53.165529    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-032400\" not found" node="multinode-032400"
	I0210 12:23:10.066248    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: E0210 12:21:53.167432    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-032400\" not found" node="multinode-032400"
	I0210 12:23:10.066248    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: E0210 12:21:53.168502    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-032400\" not found" node="multinode-032400"
	I0210 12:23:10.066248    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.301329    1648 kubelet_node_status.go:76] "Attempting to register node" node="multinode-032400"
	I0210 12:23:10.066248    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: E0210 12:21:53.303037    1648 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.29.129.181:8443: connect: connection refused" node="multinode-032400"
	I0210 12:23:10.066248    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: E0210 12:21:53.544572    1648 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-032400?timeout=10s\": dial tcp 172.29.129.181:8443: connect: connection refused" interval="800ms"
	I0210 12:23:10.066248    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.704678    1648 kubelet_node_status.go:76] "Attempting to register node" node="multinode-032400"
	I0210 12:23:10.066248    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: E0210 12:21:53.705877    1648 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.29.129.181:8443: connect: connection refused" node="multinode-032400"
	I0210 12:23:10.066248    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: W0210 12:21:53.746812    1648 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.29.129.181:8443: connect: connection refused
	I0210 12:23:10.066248    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: E0210 12:21:53.747029    1648 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.29.129.181:8443: connect: connection refused" logger="UnhandledError"
	I0210 12:23:10.066248    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: W0210 12:21:53.867058    1648 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-032400&limit=500&resourceVersion=0": dial tcp 172.29.129.181:8443: connect: connection refused
	I0210 12:23:10.066248    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: E0210 12:21:53.867234    1648 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-032400&limit=500&resourceVersion=0\": dial tcp 172.29.129.181:8443: connect: connection refused" logger="UnhandledError"
	I0210 12:23:10.066248    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 kubelet[1648]: W0210 12:21:54.165583    1648 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.29.129.181:8443: connect: connection refused
	I0210 12:23:10.066248    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 kubelet[1648]: E0210 12:21:54.165709    1648 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.29.129.181:8443: connect: connection refused" logger="UnhandledError"
	I0210 12:23:10.066248    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 kubelet[1648]: E0210 12:21:54.346089    1648 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-032400?timeout=10s\": dial tcp 172.29.129.181:8443: connect: connection refused" interval="1.6s"
	I0210 12:23:10.066248    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 kubelet[1648]: I0210 12:21:54.507569    1648 kubelet_node_status.go:76] "Attempting to register node" node="multinode-032400"
	I0210 12:23:10.066769    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 kubelet[1648]: E0210 12:21:54.509216    1648 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.29.129.181:8443: connect: connection refused" node="multinode-032400"
	I0210 12:23:10.066769    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 kubelet[1648]: W0210 12:21:54.509373    1648 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.29.129.181:8443: connect: connection refused
	I0210 12:23:10.066769    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 kubelet[1648]: E0210 12:21:54.509471    1648 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.29.129.181:8443: connect: connection refused" logger="UnhandledError"
	I0210 12:23:10.066769    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 kubelet[1648]: E0210 12:21:54.618443    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-032400\" not found" node="multinode-032400"
	I0210 12:23:10.066769    5644 command_runner.go:130] > Feb 10 12:21:55 multinode-032400 kubelet[1648]: E0210 12:21:55.643834    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-032400\" not found" node="multinode-032400"
	I0210 12:23:10.066915    5644 command_runner.go:130] > Feb 10 12:21:55 multinode-032400 kubelet[1648]: E0210 12:21:55.653673    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-032400\" not found" node="multinode-032400"
	I0210 12:23:10.066915    5644 command_runner.go:130] > Feb 10 12:21:55 multinode-032400 kubelet[1648]: E0210 12:21:55.663228    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-032400\" not found" node="multinode-032400"
	I0210 12:23:10.066992    5644 command_runner.go:130] > Feb 10 12:21:55 multinode-032400 kubelet[1648]: E0210 12:21:55.676257    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-032400\" not found" node="multinode-032400"
	I0210 12:23:10.067056    5644 command_runner.go:130] > Feb 10 12:21:56 multinode-032400 kubelet[1648]: I0210 12:21:56.111234    1648 kubelet_node_status.go:76] "Attempting to register node" node="multinode-032400"
	I0210 12:23:10.067056    5644 command_runner.go:130] > Feb 10 12:21:56 multinode-032400 kubelet[1648]: E0210 12:21:56.686207    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-032400\" not found" node="multinode-032400"
	I0210 12:23:10.067056    5644 command_runner.go:130] > Feb 10 12:21:56 multinode-032400 kubelet[1648]: E0210 12:21:56.686620    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-032400\" not found" node="multinode-032400"
	I0210 12:23:10.067138    5644 command_runner.go:130] > Feb 10 12:21:56 multinode-032400 kubelet[1648]: E0210 12:21:56.689831    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-032400\" not found" node="multinode-032400"
	I0210 12:23:10.067138    5644 command_runner.go:130] > Feb 10 12:21:56 multinode-032400 kubelet[1648]: E0210 12:21:56.690227    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-032400\" not found" node="multinode-032400"
	I0210 12:23:10.067138    5644 command_runner.go:130] > Feb 10 12:21:57 multinode-032400 kubelet[1648]: E0210 12:21:57.703954    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-032400\" not found" node="multinode-032400"
	I0210 12:23:10.067202    5644 command_runner.go:130] > Feb 10 12:21:57 multinode-032400 kubelet[1648]: E0210 12:21:57.704934    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-032400\" not found" node="multinode-032400"
	I0210 12:23:10.067202    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: I0210 12:21:58.221288    1648 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-multinode-032400"
	I0210 12:23:10.067268    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: E0210 12:21:58.248691    1648 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-multinode-032400\" already exists" pod="kube-system/kube-scheduler-multinode-032400"
	I0210 12:23:10.067268    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: I0210 12:21:58.248734    1648 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-multinode-032400"
	I0210 12:23:10.067268    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: E0210 12:21:58.268853    1648 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"etcd-multinode-032400\" already exists" pod="kube-system/etcd-multinode-032400"
	I0210 12:23:10.067331    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: I0210 12:21:58.268905    1648 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-multinode-032400"
	I0210 12:23:10.067331    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: E0210 12:21:58.294680    1648 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-multinode-032400\" already exists" pod="kube-system/kube-apiserver-multinode-032400"
	I0210 12:23:10.067331    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: I0210 12:21:58.294713    1648 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-multinode-032400"
	I0210 12:23:10.067399    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: E0210 12:21:58.310526    1648 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-multinode-032400\" already exists" pod="kube-system/kube-controller-manager-multinode-032400"
	I0210 12:23:10.067399    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: I0210 12:21:58.310792    1648 kubelet_node_status.go:125] "Node was previously registered" node="multinode-032400"
	I0210 12:23:10.067466    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: I0210 12:21:58.310970    1648 kubelet_node_status.go:79] "Successfully registered node" node="multinode-032400"
	I0210 12:23:10.067466    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: I0210 12:21:58.311192    1648 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	I0210 12:23:10.067466    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: I0210 12:21:58.312560    1648 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	I0210 12:23:10.067536    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: I0210 12:21:58.314869    1648 setters.go:602] "Node became not ready" node="multinode-032400" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-02-10T12:21:58Z","lastTransitionTime":"2025-02-10T12:21:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"}
	I0210 12:23:10.067536    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: I0210 12:21:58.886082    1648 apiserver.go:52] "Watching apiserver"
	I0210 12:23:10.067536    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: I0210 12:21:58.891928    1648 kubelet.go:3183] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-032400" podUID="7a35472d-d7c0-4c7d-a5b1-e094370af1c2"
	I0210 12:23:10.067607    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: I0210 12:21:58.892432    1648 kubelet.go:3183] "Trying to delete pod" pod="kube-system/etcd-multinode-032400" podUID="34db146c-e09d-4959-8325-d4453dfcfd62"
	I0210 12:23:10.067607    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: E0210 12:21:58.894995    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:10.067671    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: E0210 12:21:58.896093    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:10.067671    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: I0210 12:21:58.922102    1648 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	I0210 12:23:10.067671    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: I0210 12:21:58.923504    1648 kubelet.go:3189] "Deleted mirror pod as it didn't match the static Pod" pod="kube-system/kube-apiserver-multinode-032400"
	I0210 12:23:10.067737    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: I0210 12:21:58.923547    1648 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-multinode-032400"
	I0210 12:23:10.067737    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: I0210 12:21:58.964092    1648 kubelet.go:3189] "Deleted mirror pod as it didn't match the static Pod" pod="kube-system/etcd-multinode-032400"
	I0210 12:23:10.067737    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: I0210 12:21:58.964319    1648 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-multinode-032400"
	I0210 12:23:10.067829    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: I0210 12:21:58.992108    1648 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d9460e1ac793566f90a359ec3476894" path="/var/lib/kubelet/pods/3d9460e1ac793566f90a359ec3476894/volumes"
	I0210 12:23:10.067829    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: I0210 12:21:58.994546    1648 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77dd7f51968a92a0d804d49c0a3127ad" path="/var/lib/kubelet/pods/77dd7f51968a92a0d804d49c0a3127ad/volumes"
	I0210 12:23:10.067829    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: I0210 12:21:59.015977    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/c5a7f602-d41a-4d9c-8fb3-5cf1bb41aca0-tmp\") pod \"storage-provisioner\" (UID: \"c5a7f602-d41a-4d9c-8fb3-5cf1bb41aca0\") " pod="kube-system/storage-provisioner"
	I0210 12:23:10.067898    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: I0210 12:21:59.016010    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9ad7f281-f022-4f3b-b206-39ce42713cf9-lib-modules\") pod \"kube-proxy-rrh82\" (UID: \"9ad7f281-f022-4f3b-b206-39ce42713cf9\") " pod="kube-system/kube-proxy-rrh82"
	I0210 12:23:10.067962    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: I0210 12:21:59.016032    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/09de881b-fbc4-4a8f-b8d7-c46dd3f010ad-cni-cfg\") pod \"kindnet-c2mb8\" (UID: \"09de881b-fbc4-4a8f-b8d7-c46dd3f010ad\") " pod="kube-system/kindnet-c2mb8"
	I0210 12:23:10.067962    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: I0210 12:21:59.016093    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9ad7f281-f022-4f3b-b206-39ce42713cf9-xtables-lock\") pod \"kube-proxy-rrh82\" (UID: \"9ad7f281-f022-4f3b-b206-39ce42713cf9\") " pod="kube-system/kube-proxy-rrh82"
	I0210 12:23:10.068028    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: I0210 12:21:59.016112    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/09de881b-fbc4-4a8f-b8d7-c46dd3f010ad-xtables-lock\") pod \"kindnet-c2mb8\" (UID: \"09de881b-fbc4-4a8f-b8d7-c46dd3f010ad\") " pod="kube-system/kindnet-c2mb8"
	I0210 12:23:10.068028    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: I0210 12:21:59.016275    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/09de881b-fbc4-4a8f-b8d7-c46dd3f010ad-lib-modules\") pod \"kindnet-c2mb8\" (UID: \"09de881b-fbc4-4a8f-b8d7-c46dd3f010ad\") " pod="kube-system/kindnet-c2mb8"
	I0210 12:23:10.068094    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: E0210 12:21:59.016537    1648 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0210 12:23:10.068094    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: E0210 12:21:59.016667    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e45a37bf-e7da-4129-bb7e-8be7dbe93e09-config-volume podName:e45a37bf-e7da-4129-bb7e-8be7dbe93e09 nodeName:}" failed. No retries permitted until 2025-02-10 12:21:59.516646386 +0000 UTC m=+6.798651927 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e45a37bf-e7da-4129-bb7e-8be7dbe93e09-config-volume") pod "coredns-668d6bf9bc-w8rr9" (UID: "e45a37bf-e7da-4129-bb7e-8be7dbe93e09") : object "kube-system"/"coredns" not registered
	I0210 12:23:10.068160    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: I0210 12:21:59.031609    1648 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-multinode-032400" podStartSLOduration=1.031591606 podStartE2EDuration="1.031591606s" podCreationTimestamp="2025-02-10 12:21:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-10 12:21:59.030067233 +0000 UTC m=+6.312072774" watchObservedRunningTime="2025-02-10 12:21:59.031591606 +0000 UTC m=+6.313597247"
	I0210 12:23:10.068222    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: I0210 12:21:59.032295    1648 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-multinode-032400" podStartSLOduration=1.032275839 podStartE2EDuration="1.032275839s" podCreationTimestamp="2025-02-10 12:21:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-10 12:21:59.012105568 +0000 UTC m=+6.294111109" watchObservedRunningTime="2025-02-10 12:21:59.032275839 +0000 UTC m=+6.314281380"
	I0210 12:23:10.068222    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: I0210 12:21:59.063318    1648 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	I0210 12:23:10.068294    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: E0210 12:21:59.095091    1648 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:10.068294    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: E0210 12:21:59.095402    1648 projected.go:194] Error preparing data for projected volume kube-api-access-76fn6 for pod default/busybox-58667487b6-8shfg: object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:10.068355    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: E0210 12:21:59.095525    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a3e86dc5-0523-4852-af77-3145d44eaa15-kube-api-access-76fn6 podName:a3e86dc5-0523-4852-af77-3145d44eaa15 nodeName:}" failed. No retries permitted until 2025-02-10 12:21:59.595504083 +0000 UTC m=+6.877509724 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-76fn6" (UniqueName: "kubernetes.io/projected/a3e86dc5-0523-4852-af77-3145d44eaa15-kube-api-access-76fn6") pod "busybox-58667487b6-8shfg" (UID: "a3e86dc5-0523-4852-af77-3145d44eaa15") : object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:10.068408    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: E0210 12:21:59.520926    1648 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0210 12:23:10.068467    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: E0210 12:21:59.521021    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e45a37bf-e7da-4129-bb7e-8be7dbe93e09-config-volume podName:e45a37bf-e7da-4129-bb7e-8be7dbe93e09 nodeName:}" failed. No retries permitted until 2025-02-10 12:22:00.521001667 +0000 UTC m=+7.803007208 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e45a37bf-e7da-4129-bb7e-8be7dbe93e09-config-volume") pod "coredns-668d6bf9bc-w8rr9" (UID: "e45a37bf-e7da-4129-bb7e-8be7dbe93e09") : object "kube-system"/"coredns" not registered
	I0210 12:23:10.068514    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: E0210 12:21:59.622412    1648 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:10.068514    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: E0210 12:21:59.622461    1648 projected.go:194] Error preparing data for projected volume kube-api-access-76fn6 for pod default/busybox-58667487b6-8shfg: object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:10.068514    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: E0210 12:21:59.622532    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a3e86dc5-0523-4852-af77-3145d44eaa15-kube-api-access-76fn6 podName:a3e86dc5-0523-4852-af77-3145d44eaa15 nodeName:}" failed. No retries permitted until 2025-02-10 12:22:00.622511154 +0000 UTC m=+7.904516695 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-76fn6" (UniqueName: "kubernetes.io/projected/a3e86dc5-0523-4852-af77-3145d44eaa15-kube-api-access-76fn6") pod "busybox-58667487b6-8shfg" (UID: "a3e86dc5-0523-4852-af77-3145d44eaa15") : object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:10.068514    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: I0210 12:21:59.790385    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd998e6ebeb39ea743489a0e4d48c282d6e8da289cd8341c4c5099d9836e6f73"
	I0210 12:23:10.068514    5644 command_runner.go:130] > Feb 10 12:22:00 multinode-032400 kubelet[1648]: I0210 12:22:00.168710    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e5b54589cf0f1effd7254987c6ce12359f402596b5494fa5c8f0bf296c219b89"
	I0210 12:23:10.068514    5644 command_runner.go:130] > Feb 10 12:22:00 multinode-032400 kubelet[1648]: I0210 12:22:00.246436    1648 kubelet.go:3183] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-032400" podUID="7a35472d-d7c0-4c7d-a5b1-e094370af1c2"
	I0210 12:23:10.068514    5644 command_runner.go:130] > Feb 10 12:22:00 multinode-032400 kubelet[1648]: I0210 12:22:00.246743    1648 kubelet.go:3183] "Trying to delete pod" pod="kube-system/etcd-multinode-032400" podUID="34db146c-e09d-4959-8325-d4453dfcfd62"
	I0210 12:23:10.068514    5644 command_runner.go:130] > Feb 10 12:22:00 multinode-032400 kubelet[1648]: E0210 12:22:00.528505    1648 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0210 12:23:10.068514    5644 command_runner.go:130] > Feb 10 12:22:00 multinode-032400 kubelet[1648]: E0210 12:22:00.528588    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e45a37bf-e7da-4129-bb7e-8be7dbe93e09-config-volume podName:e45a37bf-e7da-4129-bb7e-8be7dbe93e09 nodeName:}" failed. No retries permitted until 2025-02-10 12:22:02.528571773 +0000 UTC m=+9.810577314 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e45a37bf-e7da-4129-bb7e-8be7dbe93e09-config-volume") pod "coredns-668d6bf9bc-w8rr9" (UID: "e45a37bf-e7da-4129-bb7e-8be7dbe93e09") : object "kube-system"/"coredns" not registered
	I0210 12:23:10.068514    5644 command_runner.go:130] > Feb 10 12:22:00 multinode-032400 kubelet[1648]: E0210 12:22:00.629777    1648 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:10.068514    5644 command_runner.go:130] > Feb 10 12:22:00 multinode-032400 kubelet[1648]: E0210 12:22:00.629830    1648 projected.go:194] Error preparing data for projected volume kube-api-access-76fn6 for pod default/busybox-58667487b6-8shfg: object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:10.068514    5644 command_runner.go:130] > Feb 10 12:22:00 multinode-032400 kubelet[1648]: E0210 12:22:00.629883    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a3e86dc5-0523-4852-af77-3145d44eaa15-kube-api-access-76fn6 podName:a3e86dc5-0523-4852-af77-3145d44eaa15 nodeName:}" failed. No retries permitted until 2025-02-10 12:22:02.629867049 +0000 UTC m=+9.911872690 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-76fn6" (UniqueName: "kubernetes.io/projected/a3e86dc5-0523-4852-af77-3145d44eaa15-kube-api-access-76fn6") pod "busybox-58667487b6-8shfg" (UID: "a3e86dc5-0523-4852-af77-3145d44eaa15") : object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:10.068514    5644 command_runner.go:130] > Feb 10 12:22:00 multinode-032400 kubelet[1648]: E0210 12:22:00.983374    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:10.068514    5644 command_runner.go:130] > Feb 10 12:22:00 multinode-032400 kubelet[1648]: E0210 12:22:00.983940    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:10.068514    5644 command_runner.go:130] > Feb 10 12:22:02 multinode-032400 kubelet[1648]: E0210 12:22:02.548061    1648 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0210 12:23:10.068514    5644 command_runner.go:130] > Feb 10 12:22:02 multinode-032400 kubelet[1648]: E0210 12:22:02.548594    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e45a37bf-e7da-4129-bb7e-8be7dbe93e09-config-volume podName:e45a37bf-e7da-4129-bb7e-8be7dbe93e09 nodeName:}" failed. No retries permitted until 2025-02-10 12:22:06.548573918 +0000 UTC m=+13.830579559 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e45a37bf-e7da-4129-bb7e-8be7dbe93e09-config-volume") pod "coredns-668d6bf9bc-w8rr9" (UID: "e45a37bf-e7da-4129-bb7e-8be7dbe93e09") : object "kube-system"/"coredns" not registered
	I0210 12:23:10.068514    5644 command_runner.go:130] > Feb 10 12:22:02 multinode-032400 kubelet[1648]: E0210 12:22:02.648988    1648 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:10.068514    5644 command_runner.go:130] > Feb 10 12:22:02 multinode-032400 kubelet[1648]: E0210 12:22:02.649225    1648 projected.go:194] Error preparing data for projected volume kube-api-access-76fn6 for pod default/busybox-58667487b6-8shfg: object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:10.069046    5644 command_runner.go:130] > Feb 10 12:22:02 multinode-032400 kubelet[1648]: E0210 12:22:02.649292    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a3e86dc5-0523-4852-af77-3145d44eaa15-kube-api-access-76fn6 podName:a3e86dc5-0523-4852-af77-3145d44eaa15 nodeName:}" failed. No retries permitted until 2025-02-10 12:22:06.649274266 +0000 UTC m=+13.931279907 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-76fn6" (UniqueName: "kubernetes.io/projected/a3e86dc5-0523-4852-af77-3145d44eaa15-kube-api-access-76fn6") pod "busybox-58667487b6-8shfg" (UID: "a3e86dc5-0523-4852-af77-3145d44eaa15") : object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:10.069046    5644 command_runner.go:130] > Feb 10 12:22:02 multinode-032400 kubelet[1648]: E0210 12:22:02.982600    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:10.069143    5644 command_runner.go:130] > Feb 10 12:22:02 multinode-032400 kubelet[1648]: E0210 12:22:02.985279    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:10.069143    5644 command_runner.go:130] > Feb 10 12:22:03 multinode-032400 kubelet[1648]: E0210 12:22:03.006185    1648 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0210 12:23:10.069208    5644 command_runner.go:130] > Feb 10 12:22:04 multinode-032400 kubelet[1648]: E0210 12:22:04.982807    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:10.069208    5644 command_runner.go:130] > Feb 10 12:22:04 multinode-032400 kubelet[1648]: E0210 12:22:04.983881    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:10.069208    5644 command_runner.go:130] > Feb 10 12:22:06 multinode-032400 kubelet[1648]: E0210 12:22:06.583411    1648 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0210 12:23:10.069269    5644 command_runner.go:130] > Feb 10 12:22:06 multinode-032400 kubelet[1648]: E0210 12:22:06.583571    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e45a37bf-e7da-4129-bb7e-8be7dbe93e09-config-volume podName:e45a37bf-e7da-4129-bb7e-8be7dbe93e09 nodeName:}" failed. No retries permitted until 2025-02-10 12:22:14.583553968 +0000 UTC m=+21.865559509 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e45a37bf-e7da-4129-bb7e-8be7dbe93e09-config-volume") pod "coredns-668d6bf9bc-w8rr9" (UID: "e45a37bf-e7da-4129-bb7e-8be7dbe93e09") : object "kube-system"/"coredns" not registered
	I0210 12:23:10.069339    5644 command_runner.go:130] > Feb 10 12:22:06 multinode-032400 kubelet[1648]: E0210 12:22:06.684079    1648 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:10.069339    5644 command_runner.go:130] > Feb 10 12:22:06 multinode-032400 kubelet[1648]: E0210 12:22:06.684426    1648 projected.go:194] Error preparing data for projected volume kube-api-access-76fn6 for pod default/busybox-58667487b6-8shfg: object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:10.069400    5644 command_runner.go:130] > Feb 10 12:22:06 multinode-032400 kubelet[1648]: E0210 12:22:06.684521    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a3e86dc5-0523-4852-af77-3145d44eaa15-kube-api-access-76fn6 podName:a3e86dc5-0523-4852-af77-3145d44eaa15 nodeName:}" failed. No retries permitted until 2025-02-10 12:22:14.684501328 +0000 UTC m=+21.966506969 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-76fn6" (UniqueName: "kubernetes.io/projected/a3e86dc5-0523-4852-af77-3145d44eaa15-kube-api-access-76fn6") pod "busybox-58667487b6-8shfg" (UID: "a3e86dc5-0523-4852-af77-3145d44eaa15") : object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:10.069466    5644 command_runner.go:130] > Feb 10 12:22:06 multinode-032400 kubelet[1648]: E0210 12:22:06.982543    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:10.069466    5644 command_runner.go:130] > Feb 10 12:22:06 multinode-032400 kubelet[1648]: E0210 12:22:06.982901    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:10.069533    5644 command_runner.go:130] > Feb 10 12:22:08 multinode-032400 kubelet[1648]: E0210 12:22:08.007915    1648 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0210 12:23:10.069533    5644 command_runner.go:130] > Feb 10 12:22:08 multinode-032400 kubelet[1648]: E0210 12:22:08.983481    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:10.069600    5644 command_runner.go:130] > Feb 10 12:22:08 multinode-032400 kubelet[1648]: E0210 12:22:08.987585    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:10.069600    5644 command_runner.go:130] > Feb 10 12:22:10 multinode-032400 kubelet[1648]: E0210 12:22:10.981696    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:10.069675    5644 command_runner.go:130] > Feb 10 12:22:10 multinode-032400 kubelet[1648]: E0210 12:22:10.982314    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:10.069696    5644 command_runner.go:130] > Feb 10 12:22:12 multinode-032400 kubelet[1648]: E0210 12:22:12.982627    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:10.069696    5644 command_runner.go:130] > Feb 10 12:22:12 multinode-032400 kubelet[1648]: E0210 12:22:12.983351    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:10.069765    5644 command_runner.go:130] > Feb 10 12:22:13 multinode-032400 kubelet[1648]: E0210 12:22:13.008828    1648 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0210 12:23:10.069765    5644 command_runner.go:130] > Feb 10 12:22:14 multinode-032400 kubelet[1648]: E0210 12:22:14.650628    1648 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0210 12:23:10.069836    5644 command_runner.go:130] > Feb 10 12:22:14 multinode-032400 kubelet[1648]: E0210 12:22:14.650742    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e45a37bf-e7da-4129-bb7e-8be7dbe93e09-config-volume podName:e45a37bf-e7da-4129-bb7e-8be7dbe93e09 nodeName:}" failed. No retries permitted until 2025-02-10 12:22:30.650723092 +0000 UTC m=+37.932728733 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e45a37bf-e7da-4129-bb7e-8be7dbe93e09-config-volume") pod "coredns-668d6bf9bc-w8rr9" (UID: "e45a37bf-e7da-4129-bb7e-8be7dbe93e09") : object "kube-system"/"coredns" not registered
	I0210 12:23:10.069836    5644 command_runner.go:130] > Feb 10 12:22:14 multinode-032400 kubelet[1648]: E0210 12:22:14.751367    1648 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:10.069836    5644 command_runner.go:130] > Feb 10 12:22:14 multinode-032400 kubelet[1648]: E0210 12:22:14.751417    1648 projected.go:194] Error preparing data for projected volume kube-api-access-76fn6 for pod default/busybox-58667487b6-8shfg: object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:10.069836    5644 command_runner.go:130] > Feb 10 12:22:14 multinode-032400 kubelet[1648]: E0210 12:22:14.751468    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a3e86dc5-0523-4852-af77-3145d44eaa15-kube-api-access-76fn6 podName:a3e86dc5-0523-4852-af77-3145d44eaa15 nodeName:}" failed. No retries permitted until 2025-02-10 12:22:30.751452188 +0000 UTC m=+38.033457729 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-76fn6" (UniqueName: "kubernetes.io/projected/a3e86dc5-0523-4852-af77-3145d44eaa15-kube-api-access-76fn6") pod "busybox-58667487b6-8shfg" (UID: "a3e86dc5-0523-4852-af77-3145d44eaa15") : object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:10.069836    5644 command_runner.go:130] > Feb 10 12:22:14 multinode-032400 kubelet[1648]: E0210 12:22:14.983588    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:10.069836    5644 command_runner.go:130] > Feb 10 12:22:14 multinode-032400 kubelet[1648]: E0210 12:22:14.984681    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:10.069836    5644 command_runner.go:130] > Feb 10 12:22:16 multinode-032400 kubelet[1648]: E0210 12:22:16.982654    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:10.069836    5644 command_runner.go:130] > Feb 10 12:22:16 multinode-032400 kubelet[1648]: E0210 12:22:16.983601    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:10.069836    5644 command_runner.go:130] > Feb 10 12:22:18 multinode-032400 kubelet[1648]: E0210 12:22:18.010464    1648 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0210 12:23:10.069836    5644 command_runner.go:130] > Feb 10 12:22:18 multinode-032400 kubelet[1648]: E0210 12:22:18.983251    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:10.069836    5644 command_runner.go:130] > Feb 10 12:22:18 multinode-032400 kubelet[1648]: E0210 12:22:18.983452    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:10.069836    5644 command_runner.go:130] > Feb 10 12:22:20 multinode-032400 kubelet[1648]: E0210 12:22:20.982442    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:10.069836    5644 command_runner.go:130] > Feb 10 12:22:20 multinode-032400 kubelet[1648]: E0210 12:22:20.982861    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:10.069836    5644 command_runner.go:130] > Feb 10 12:22:22 multinode-032400 kubelet[1648]: E0210 12:22:22.981966    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:10.069836    5644 command_runner.go:130] > Feb 10 12:22:22 multinode-032400 kubelet[1648]: E0210 12:22:22.982555    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:10.069836    5644 command_runner.go:130] > Feb 10 12:22:23 multinode-032400 kubelet[1648]: E0210 12:22:23.011880    1648 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0210 12:23:10.070372    5644 command_runner.go:130] > Feb 10 12:22:24 multinode-032400 kubelet[1648]: E0210 12:22:24.982707    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:10.070372    5644 command_runner.go:130] > Feb 10 12:22:24 multinode-032400 kubelet[1648]: E0210 12:22:24.983675    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:10.070439    5644 command_runner.go:130] > Feb 10 12:22:26 multinode-032400 kubelet[1648]: E0210 12:22:26.983236    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:10.070473    5644 command_runner.go:130] > Feb 10 12:22:26 multinode-032400 kubelet[1648]: E0210 12:22:26.984691    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:10.070504    5644 command_runner.go:130] > Feb 10 12:22:28 multinode-032400 kubelet[1648]: E0210 12:22:28.013741    1648 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0210 12:23:10.070504    5644 command_runner.go:130] > Feb 10 12:22:28 multinode-032400 kubelet[1648]: E0210 12:22:28.989948    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:10.070504    5644 command_runner.go:130] > Feb 10 12:22:28 multinode-032400 kubelet[1648]: E0210 12:22:28.990610    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:10.070504    5644 command_runner.go:130] > Feb 10 12:22:30 multinode-032400 kubelet[1648]: E0210 12:22:30.698791    1648 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0210 12:23:10.070504    5644 command_runner.go:130] > Feb 10 12:22:30 multinode-032400 kubelet[1648]: E0210 12:22:30.698861    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e45a37bf-e7da-4129-bb7e-8be7dbe93e09-config-volume podName:e45a37bf-e7da-4129-bb7e-8be7dbe93e09 nodeName:}" failed. No retries permitted until 2025-02-10 12:23:02.698844474 +0000 UTC m=+69.980850115 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e45a37bf-e7da-4129-bb7e-8be7dbe93e09-config-volume") pod "coredns-668d6bf9bc-w8rr9" (UID: "e45a37bf-e7da-4129-bb7e-8be7dbe93e09") : object "kube-system"/"coredns" not registered
	I0210 12:23:10.070504    5644 command_runner.go:130] > Feb 10 12:22:30 multinode-032400 kubelet[1648]: E0210 12:22:30.799260    1648 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:10.070504    5644 command_runner.go:130] > Feb 10 12:22:30 multinode-032400 kubelet[1648]: E0210 12:22:30.799302    1648 projected.go:194] Error preparing data for projected volume kube-api-access-76fn6 for pod default/busybox-58667487b6-8shfg: object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:10.070504    5644 command_runner.go:130] > Feb 10 12:22:30 multinode-032400 kubelet[1648]: E0210 12:22:30.799372    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a3e86dc5-0523-4852-af77-3145d44eaa15-kube-api-access-76fn6 podName:a3e86dc5-0523-4852-af77-3145d44eaa15 nodeName:}" failed. No retries permitted until 2025-02-10 12:23:02.799354561 +0000 UTC m=+70.081360102 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-76fn6" (UniqueName: "kubernetes.io/projected/a3e86dc5-0523-4852-af77-3145d44eaa15-kube-api-access-76fn6") pod "busybox-58667487b6-8shfg" (UID: "a3e86dc5-0523-4852-af77-3145d44eaa15") : object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:10.070504    5644 command_runner.go:130] > Feb 10 12:22:30 multinode-032400 kubelet[1648]: E0210 12:22:30.983005    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:10.070504    5644 command_runner.go:130] > Feb 10 12:22:30 multinode-032400 kubelet[1648]: E0210 12:22:30.983695    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:10.070504    5644 command_runner.go:130] > Feb 10 12:22:31 multinode-032400 kubelet[1648]: I0210 12:22:31.703771    1648 scope.go:117] "RemoveContainer" containerID="182c8395f5e1754689bcf73e94e561717c684af55894a2bd4cbd9d5e8d3dff12"
	I0210 12:23:10.070504    5644 command_runner.go:130] > Feb 10 12:22:31 multinode-032400 kubelet[1648]: I0210 12:22:31.704207    1648 scope.go:117] "RemoveContainer" containerID="e57ea4c7f300b864e801bda292b196e3a270d6bd3eb9cfac6bf5f66a858f9f7c"
	I0210 12:23:10.070504    5644 command_runner.go:130] > Feb 10 12:22:31 multinode-032400 kubelet[1648]: E0210 12:22:31.704351    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(c5a7f602-d41a-4d9c-8fb3-5cf1bb41aca0)\"" pod="kube-system/storage-provisioner" podUID="c5a7f602-d41a-4d9c-8fb3-5cf1bb41aca0"
	I0210 12:23:10.070504    5644 command_runner.go:130] > Feb 10 12:22:32 multinode-032400 kubelet[1648]: E0210 12:22:32.981673    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:10.070504    5644 command_runner.go:130] > Feb 10 12:22:32 multinode-032400 kubelet[1648]: E0210 12:22:32.982991    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:10.070504    5644 command_runner.go:130] > Feb 10 12:22:33 multinode-032400 kubelet[1648]: E0210 12:22:33.015385    1648 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0210 12:23:10.071021    5644 command_runner.go:130] > Feb 10 12:22:34 multinode-032400 kubelet[1648]: E0210 12:22:34.989854    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:10.071021    5644 command_runner.go:130] > Feb 10 12:22:34 multinode-032400 kubelet[1648]: E0210 12:22:34.989994    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:10.071102    5644 command_runner.go:130] > Feb 10 12:22:36 multinode-032400 kubelet[1648]: E0210 12:22:36.982057    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:10.071132    5644 command_runner.go:130] > Feb 10 12:22:36 multinode-032400 kubelet[1648]: E0210 12:22:36.982423    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:10.071132    5644 command_runner.go:130] > Feb 10 12:22:38 multinode-032400 kubelet[1648]: E0210 12:22:38.016614    1648 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0210 12:23:10.071132    5644 command_runner.go:130] > Feb 10 12:22:38 multinode-032400 kubelet[1648]: E0210 12:22:38.982466    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:10.071132    5644 command_runner.go:130] > Feb 10 12:22:38 multinode-032400 kubelet[1648]: E0210 12:22:38.982828    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:10.071132    5644 command_runner.go:130] > Feb 10 12:22:40 multinode-032400 kubelet[1648]: E0210 12:22:40.981790    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:10.071132    5644 command_runner.go:130] > Feb 10 12:22:40 multinode-032400 kubelet[1648]: E0210 12:22:40.986032    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:10.071132    5644 command_runner.go:130] > Feb 10 12:22:42 multinode-032400 kubelet[1648]: E0210 12:22:42.983120    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:10.071132    5644 command_runner.go:130] > Feb 10 12:22:42 multinode-032400 kubelet[1648]: E0210 12:22:42.983608    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:10.071132    5644 command_runner.go:130] > Feb 10 12:22:43 multinode-032400 kubelet[1648]: E0210 12:22:43.017646    1648 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0210 12:23:10.071132    5644 command_runner.go:130] > Feb 10 12:22:43 multinode-032400 kubelet[1648]: I0210 12:22:43.982665    1648 scope.go:117] "RemoveContainer" containerID="e57ea4c7f300b864e801bda292b196e3a270d6bd3eb9cfac6bf5f66a858f9f7c"
	I0210 12:23:10.071132    5644 command_runner.go:130] > Feb 10 12:22:44 multinode-032400 kubelet[1648]: E0210 12:22:44.981714    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:10.071132    5644 command_runner.go:130] > Feb 10 12:22:44 multinode-032400 kubelet[1648]: E0210 12:22:44.982071    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:10.071132    5644 command_runner.go:130] > Feb 10 12:22:46 multinode-032400 kubelet[1648]: E0210 12:22:46.982545    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:10.071132    5644 command_runner.go:130] > Feb 10 12:22:46 multinode-032400 kubelet[1648]: E0210 12:22:46.982810    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:10.071132    5644 command_runner.go:130] > Feb 10 12:22:52 multinode-032400 kubelet[1648]: I0210 12:22:52.997714    1648 scope.go:117] "RemoveContainer" containerID="3ae31c3c37c9f6044b62cd6f5c446e15340e314275faee0b1de4d66baa716012"
	I0210 12:23:10.071132    5644 command_runner.go:130] > Feb 10 12:22:53 multinode-032400 kubelet[1648]: E0210 12:22:53.021472    1648 iptables.go:577] "Could not set up iptables canary" err=<
	I0210 12:23:10.071132    5644 command_runner.go:130] > Feb 10 12:22:53 multinode-032400 kubelet[1648]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0210 12:23:10.071132    5644 command_runner.go:130] > Feb 10 12:22:53 multinode-032400 kubelet[1648]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0210 12:23:10.071132    5644 command_runner.go:130] > Feb 10 12:22:53 multinode-032400 kubelet[1648]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0210 12:23:10.071656    5644 command_runner.go:130] > Feb 10 12:22:53 multinode-032400 kubelet[1648]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0210 12:23:10.071656    5644 command_runner.go:130] > Feb 10 12:22:53 multinode-032400 kubelet[1648]: I0210 12:22:53.042255    1648 scope.go:117] "RemoveContainer" containerID="9f1c4e9b3353b37f1f4c40563f1e312a49acd43d398e81cab61930d654fbb0d9"
	I0210 12:23:10.071656    5644 command_runner.go:130] > Feb 10 12:23:03 multinode-032400 kubelet[1648]: I0210 12:23:03.312056    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e58006549b603113870cd02660dd94559d10ecb0913a1bdd2c81910c3d60a706"
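The kubelet lines above repeat the same story: pods stay unsynced and NetworkReady stays false ("cni config uninitialized") until a CNI network config appears under /etc/cni/net.d, which is what the "Configuring bridge CNI" start-up step produces. As a minimal sketch only (the file name and conflist fields below are illustrative assumptions, not values taken from this run), the shape of what gets written is roughly:

    // Sketch: writing a minimal bridge CNI conflist. Once a valid file
    // exists in /etc/cni/net.d, kubelet's network plugin becomes ready
    // and the "cni config uninitialized" errors above stop.
    package main

    import "os"

    // Illustrative config: plugin type "bridge" with host-local IPAM on
    // the cluster pod CIDR. Exact fields vary by minikube version.
    const conflist = `{
      "cniVersion": "0.4.0",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        }
      ]
    }`

    func main() {
    	// Assumed path; minikube uses a similarly named file in this directory.
    	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
    		panic(err)
    	}
    }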
	I0210 12:23:10.119694    5644 logs.go:123] Gathering logs for kindnet [efc2d4164d81] ...
	I0210 12:23:10.119694    5644 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efc2d4164d81"
	I0210 12:23:10.155072    5644 command_runner.go:130] ! I0210 12:22:00.982083       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0210 12:23:10.155155    5644 command_runner.go:130] ! I0210 12:22:00.988632       1 main.go:139] hostIP = 172.29.129.181
	I0210 12:23:10.155155    5644 command_runner.go:130] ! podIP = 172.29.129.181
	I0210 12:23:10.155155    5644 command_runner.go:130] ! I0210 12:22:00.988765       1 main.go:148] setting mtu 1500 for CNI 
	I0210 12:23:10.155155    5644 command_runner.go:130] ! I0210 12:22:00.988782       1 main.go:178] kindnetd IP family: "ipv4"
	I0210 12:23:10.155155    5644 command_runner.go:130] ! I0210 12:22:00.988794       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I0210 12:23:10.155155    5644 command_runner.go:130] ! I0210 12:22:01.772362       1 main.go:239] Error creating network policy controller: could not run nftables command: /dev/stdin:1:1-40: Error: Could not process rule: Operation not supported
	I0210 12:23:10.155155    5644 command_runner.go:130] ! add table inet kindnet-network-policies
	I0210 12:23:10.155244    5644 command_runner.go:130] ! ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
	I0210 12:23:10.155244    5644 command_runner.go:130] ! , skipping network policies
	I0210 12:23:10.155244    5644 command_runner.go:130] ! W0210 12:22:31.784106       1 reflector.go:561] pkg/mod/k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0210 12:23:10.155301    5644 command_runner.go:130] ! E0210 12:22:31.784373       1 reflector.go:158] "Unhandled Error" err="pkg/mod/k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError"
	I0210 12:23:10.155301    5644 command_runner.go:130] ! I0210 12:22:41.780982       1 main.go:297] Handling node with IPs: map[172.29.129.181:{}]
	I0210 12:23:10.155301    5644 command_runner.go:130] ! I0210 12:22:41.781097       1 main.go:301] handling current node
	I0210 12:23:10.155301    5644 command_runner.go:130] ! I0210 12:22:41.782315       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.155301    5644 command_runner.go:130] ! I0210 12:22:41.782348       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.155377    5644 command_runner.go:130] ! I0210 12:22:41.782670       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 172.29.143.51 Flags: [] Table: 0 Realm: 0} 
	I0210 12:23:10.155377    5644 command_runner.go:130] ! I0210 12:22:41.783201       1 main.go:297] Handling node with IPs: map[172.29.129.10:{}]
	I0210 12:23:10.155377    5644 command_runner.go:130] ! I0210 12:22:41.783373       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.4.0/24] 
	I0210 12:23:10.155452    5644 command_runner.go:130] ! I0210 12:22:41.784331       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.4.0/24 Src: <nil> Gw: 172.29.129.10 Flags: [] Table: 0 Realm: 0} 
	I0210 12:23:10.155452    5644 command_runner.go:130] ! I0210 12:22:51.774234       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.155509    5644 command_runner.go:130] ! I0210 12:22:51.774354       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.155509    5644 command_runner.go:130] ! I0210 12:22:51.774813       1 main.go:297] Handling node with IPs: map[172.29.129.10:{}]
	I0210 12:23:10.155509    5644 command_runner.go:130] ! I0210 12:22:51.774839       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.4.0/24] 
	I0210 12:23:10.155573    5644 command_runner.go:130] ! I0210 12:22:51.775059       1 main.go:297] Handling node with IPs: map[172.29.129.181:{}]
	I0210 12:23:10.155573    5644 command_runner.go:130] ! I0210 12:22:51.775140       1 main.go:301] handling current node
	I0210 12:23:10.155573    5644 command_runner.go:130] ! I0210 12:23:01.774212       1 main.go:297] Handling node with IPs: map[172.29.129.181:{}]
	I0210 12:23:10.155573    5644 command_runner.go:130] ! I0210 12:23:01.774322       1 main.go:301] handling current node
	I0210 12:23:10.155634    5644 command_runner.go:130] ! I0210 12:23:01.774342       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.155634    5644 command_runner.go:130] ! I0210 12:23:01.774349       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.155634    5644 command_runner.go:130] ! I0210 12:23:01.774804       1 main.go:297] Handling node with IPs: map[172.29.129.10:{}]
	I0210 12:23:10.155634    5644 command_runner.go:130] ! I0210 12:23:01.774919       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.4.0/24] 
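The kindnet log above shows its periodic node-sync loop: for the current node it does nothing ("handling current node"), and for each peer node it reads the pod CIDR and installs a route via that node's IP ("Adding route {... Dst: 10.244.1.0/24 ... Gw: 172.29.143.51 ...}"). A minimal sketch of that route-programming step, under the assumption that github.com/vishvananda/netlink is used (kindnet builds on it), with the peer values copied from the log:

    // Sketch of kindnet's per-peer route sync: point each peer node's
    // pod CIDR at the peer's node IP. Not kindnet's actual code.
    package main

    import (
    	"log"
    	"net"

    	"github.com/vishvananda/netlink"
    )

    // peer mirrors the two values the log prints per node:
    // "Handling node with IPs" and "has CIDR".
    type peer struct {
    	nodeIP  string
    	podCIDR string
    }

    func main() {
    	peers := []peer{
    		{"172.29.143.51", "10.244.1.0/24"}, // multinode-032400-m02
    		{"172.29.129.10", "10.244.4.0/24"}, // multinode-032400-m03
    	}
    	for _, p := range peers {
    		dst, err := netlink.ParseIPNet(p.podCIDR)
    		if err != nil {
    			log.Fatal(err)
    		}
    		// RouteReplace adds or updates the route, matching the
    		// "Adding route {Dst: <cidr> Gw: <node IP>}" lines above.
    		route := &netlink.Route{Dst: dst, Gw: net.ParseIP(p.nodeIP)}
    		if err := netlink.RouteReplace(route); err != nil {
    			log.Fatal(err)
    		}
    	}
    }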
	I0210 12:23:10.157636    5644 logs.go:123] Gathering logs for kindnet [4439940fa5f4] ...
	I0210 12:23:10.157636    5644 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4439940fa5f4"
	I0210 12:23:10.187260    5644 command_runner.go:130] ! I0210 12:08:30.445716       1 main.go:301] handling current node
	I0210 12:23:10.187613    5644 command_runner.go:130] ! I0210 12:08:30.445736       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.187613    5644 command_runner.go:130] ! I0210 12:08:30.445743       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.187613    5644 command_runner.go:130] ! I0210 12:08:30.446276       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:10.187613    5644 command_runner.go:130] ! I0210 12:08:30.446402       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:10.187685    5644 command_runner.go:130] ! I0210 12:08:40.446484       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:10.187685    5644 command_runner.go:130] ! I0210 12:08:40.446649       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:10.187685    5644 command_runner.go:130] ! I0210 12:08:40.447051       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.187685    5644 command_runner.go:130] ! I0210 12:08:40.447089       1 main.go:301] handling current node
	I0210 12:23:10.187685    5644 command_runner.go:130] ! I0210 12:08:40.447173       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.187685    5644 command_runner.go:130] ! I0210 12:08:40.447202       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.187685    5644 command_runner.go:130] ! I0210 12:08:50.445921       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.187685    5644 command_runner.go:130] ! I0210 12:08:50.445988       1 main.go:301] handling current node
	I0210 12:23:10.187685    5644 command_runner.go:130] ! I0210 12:08:50.446008       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.187685    5644 command_runner.go:130] ! I0210 12:08:50.446015       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.187685    5644 command_runner.go:130] ! I0210 12:08:50.446206       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:10.187685    5644 command_runner.go:130] ! I0210 12:08:50.446217       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:10.187685    5644 command_runner.go:130] ! I0210 12:09:00.446480       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.187685    5644 command_runner.go:130] ! I0210 12:09:00.446617       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.187685    5644 command_runner.go:130] ! I0210 12:09:00.446931       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:10.187685    5644 command_runner.go:130] ! I0210 12:09:00.446947       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:10.187685    5644 command_runner.go:130] ! I0210 12:09:00.447078       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.187685    5644 command_runner.go:130] ! I0210 12:09:00.447087       1 main.go:301] handling current node
	I0210 12:23:10.187685    5644 command_runner.go:130] ! I0210 12:09:10.445597       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.187685    5644 command_runner.go:130] ! I0210 12:09:10.445645       1 main.go:301] handling current node
	I0210 12:23:10.187685    5644 command_runner.go:130] ! I0210 12:09:10.445665       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.187685    5644 command_runner.go:130] ! I0210 12:09:10.445671       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.187685    5644 command_runner.go:130] ! I0210 12:09:10.446612       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:10.187685    5644 command_runner.go:130] ! I0210 12:09:10.447083       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:10.187685    5644 command_runner.go:130] ! I0210 12:09:20.451891       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.187685    5644 command_runner.go:130] ! I0210 12:09:20.451928       1 main.go:301] handling current node
	I0210 12:23:10.187685    5644 command_runner.go:130] ! I0210 12:09:20.452043       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.187685    5644 command_runner.go:130] ! I0210 12:09:20.452054       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.187685    5644 command_runner.go:130] ! I0210 12:09:20.452219       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:10.187685    5644 command_runner.go:130] ! I0210 12:09:20.452226       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:10.187685    5644 command_runner.go:130] ! I0210 12:09:30.445685       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.187685    5644 command_runner.go:130] ! I0210 12:09:30.445780       1 main.go:301] handling current node
	I0210 12:23:10.187685    5644 command_runner.go:130] ! I0210 12:09:30.445924       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.187685    5644 command_runner.go:130] ! I0210 12:09:30.445945       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.187685    5644 command_runner.go:130] ! I0210 12:09:30.446110       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:10.187685    5644 command_runner.go:130] ! I0210 12:09:30.446136       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:10.187685    5644 command_runner.go:130] ! I0210 12:09:40.446044       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.187685    5644 command_runner.go:130] ! I0210 12:09:40.446146       1 main.go:301] handling current node
	I0210 12:23:10.187685    5644 command_runner.go:130] ! I0210 12:09:40.446259       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.187685    5644 command_runner.go:130] ! I0210 12:09:40.446288       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.188224    5644 command_runner.go:130] ! I0210 12:09:40.446677       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:10.188224    5644 command_runner.go:130] ! I0210 12:09:40.446692       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:10.188224    5644 command_runner.go:130] ! I0210 12:09:50.449867       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.188224    5644 command_runner.go:130] ! I0210 12:09:50.449979       1 main.go:301] handling current node
	I0210 12:23:10.188224    5644 command_runner.go:130] ! I0210 12:09:50.450078       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.188224    5644 command_runner.go:130] ! I0210 12:09:50.450121       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.188325    5644 command_runner.go:130] ! I0210 12:09:50.450322       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:10.188325    5644 command_runner.go:130] ! I0210 12:09:50.450372       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:10.188325    5644 command_runner.go:130] ! I0210 12:10:00.446642       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:10.188325    5644 command_runner.go:130] ! I0210 12:10:00.446769       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:10.188325    5644 command_runner.go:130] ! I0210 12:10:00.447234       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.188405    5644 command_runner.go:130] ! I0210 12:10:00.447254       1 main.go:301] handling current node
	I0210 12:23:10.188405    5644 command_runner.go:130] ! I0210 12:10:00.447269       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.188433    5644 command_runner.go:130] ! I0210 12:10:00.447275       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.188433    5644 command_runner.go:130] ! I0210 12:10:10.445515       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.188433    5644 command_runner.go:130] ! I0210 12:10:10.445682       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.188433    5644 command_runner.go:130] ! I0210 12:10:10.446184       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:10.188433    5644 command_runner.go:130] ! I0210 12:10:10.446223       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:10.188433    5644 command_runner.go:130] ! I0210 12:10:10.446709       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.188433    5644 command_runner.go:130] ! I0210 12:10:10.447034       1 main.go:301] handling current node
	I0210 12:23:10.188433    5644 command_runner.go:130] ! I0210 12:10:20.446409       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.188580    5644 command_runner.go:130] ! I0210 12:10:20.446529       1 main.go:301] handling current node
	I0210 12:23:10.188580    5644 command_runner.go:130] ! I0210 12:10:20.446553       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.188580    5644 command_runner.go:130] ! I0210 12:10:20.446563       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.188580    5644 command_runner.go:130] ! I0210 12:10:20.446763       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:10.188640    5644 command_runner.go:130] ! I0210 12:10:20.446790       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:10.188640    5644 command_runner.go:130] ! I0210 12:10:30.446373       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.188640    5644 command_runner.go:130] ! I0210 12:10:30.446482       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.188695    5644 command_runner.go:130] ! I0210 12:10:30.446672       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:10.188695    5644 command_runner.go:130] ! I0210 12:10:30.446700       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:10.188695    5644 command_runner.go:130] ! I0210 12:10:30.446792       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.188695    5644 command_runner.go:130] ! I0210 12:10:30.447014       1 main.go:301] handling current node
	I0210 12:23:10.188779    5644 command_runner.go:130] ! I0210 12:10:40.454509       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.188779    5644 command_runner.go:130] ! I0210 12:10:40.454636       1 main.go:301] handling current node
	I0210 12:23:10.188779    5644 command_runner.go:130] ! I0210 12:10:40.454674       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.188814    5644 command_runner.go:130] ! I0210 12:10:40.454863       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.188814    5644 command_runner.go:130] ! I0210 12:10:40.455160       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:10.188850    5644 command_runner.go:130] ! I0210 12:10:40.455261       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:10.188850    5644 command_runner.go:130] ! I0210 12:10:50.449160       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.188850    5644 command_runner.go:130] ! I0210 12:10:50.449355       1 main.go:301] handling current node
	I0210 12:23:10.188907    5644 command_runner.go:130] ! I0210 12:10:50.449395       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.188907    5644 command_runner.go:130] ! I0210 12:10:50.449538       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.188907    5644 command_runner.go:130] ! I0210 12:10:50.450354       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:10.188946    5644 command_runner.go:130] ! I0210 12:10:50.450448       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:10.188946    5644 command_runner.go:130] ! I0210 12:11:00.445904       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:10.188946    5644 command_runner.go:130] ! I0210 12:11:00.446062       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:10.189002    5644 command_runner.go:130] ! I0210 12:11:00.446602       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.189662    5644 command_runner.go:130] ! I0210 12:11:00.446700       1 main.go:301] handling current node
	I0210 12:23:10.189662    5644 command_runner.go:130] ! I0210 12:11:00.446821       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.189662    5644 command_runner.go:130] ! I0210 12:11:00.446837       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.189662    5644 command_runner.go:130] ! I0210 12:11:10.453595       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.190314    5644 command_runner.go:130] ! I0210 12:11:10.453634       1 main.go:301] handling current node
	I0210 12:23:10.190314    5644 command_runner.go:130] ! I0210 12:11:10.453652       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.190314    5644 command_runner.go:130] ! I0210 12:11:10.453660       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.190314    5644 command_runner.go:130] ! I0210 12:11:10.454135       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:10.190314    5644 command_runner.go:130] ! I0210 12:11:10.454241       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:10.190314    5644 command_runner.go:130] ! I0210 12:11:20.446533       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:10.190314    5644 command_runner.go:130] ! I0210 12:11:20.446903       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:10.190314    5644 command_runner.go:130] ! I0210 12:11:20.447462       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.190314    5644 command_runner.go:130] ! I0210 12:11:20.447548       1 main.go:301] handling current node
	I0210 12:23:10.190423    5644 command_runner.go:130] ! I0210 12:11:20.447565       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.190423    5644 command_runner.go:130] ! I0210 12:11:20.447572       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.190423    5644 command_runner.go:130] ! I0210 12:11:30.445620       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.190423    5644 command_runner.go:130] ! I0210 12:11:30.445748       1 main.go:301] handling current node
	I0210 12:23:10.190423    5644 command_runner.go:130] ! I0210 12:11:30.445870       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.190423    5644 command_runner.go:130] ! I0210 12:11:30.445907       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.190423    5644 command_runner.go:130] ! I0210 12:11:30.446320       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:10.190521    5644 command_runner.go:130] ! I0210 12:11:30.446414       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:10.190521    5644 command_runner.go:130] ! I0210 12:11:40.446346       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.190521    5644 command_runner.go:130] ! I0210 12:11:40.446417       1 main.go:301] handling current node
	I0210 12:23:10.190521    5644 command_runner.go:130] ! I0210 12:11:40.446436       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.190521    5644 command_runner.go:130] ! I0210 12:11:40.446443       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.190521    5644 command_runner.go:130] ! I0210 12:11:40.446780       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:10.190609    5644 command_runner.go:130] ! I0210 12:11:40.446846       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:10.190609    5644 command_runner.go:130] ! I0210 12:11:50.447155       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:10.190609    5644 command_runner.go:130] ! I0210 12:11:50.447207       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:10.190609    5644 command_runner.go:130] ! I0210 12:11:50.447630       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.190609    5644 command_runner.go:130] ! I0210 12:11:50.447699       1 main.go:301] handling current node
	I0210 12:23:10.190675    5644 command_runner.go:130] ! I0210 12:11:50.447842       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.190704    5644 command_runner.go:130] ! I0210 12:11:50.447929       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.190704    5644 command_runner.go:130] ! I0210 12:12:00.449885       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.190704    5644 command_runner.go:130] ! I0210 12:12:00.450002       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.190704    5644 command_runner.go:130] ! I0210 12:12:00.450294       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:10.190704    5644 command_runner.go:130] ! I0210 12:12:00.450490       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:10.190787    5644 command_runner.go:130] ! I0210 12:12:00.450618       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.190815    5644 command_runner.go:130] ! I0210 12:12:00.450627       1 main.go:301] handling current node
	I0210 12:23:10.190815    5644 command_runner.go:130] ! I0210 12:12:10.449160       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.190815    5644 command_runner.go:130] ! I0210 12:12:10.449228       1 main.go:301] handling current node
	I0210 12:23:10.190815    5644 command_runner.go:130] ! I0210 12:12:10.449260       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.190815    5644 command_runner.go:130] ! I0210 12:12:10.449282       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.190815    5644 command_runner.go:130] ! I0210 12:12:10.449463       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:10.190815    5644 command_runner.go:130] ! I0210 12:12:10.449474       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:10.190815    5644 command_runner.go:130] ! I0210 12:12:20.447518       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.190815    5644 command_runner.go:130] ! I0210 12:12:20.447655       1 main.go:301] handling current node
	I0210 12:23:10.190815    5644 command_runner.go:130] ! I0210 12:12:20.447676       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.190815    5644 command_runner.go:130] ! I0210 12:12:20.447684       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.190815    5644 command_runner.go:130] ! I0210 12:12:20.448046       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:10.190815    5644 command_runner.go:130] ! I0210 12:12:20.448157       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:10.190815    5644 command_runner.go:130] ! I0210 12:12:30.446585       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.190815    5644 command_runner.go:130] ! I0210 12:12:30.446758       1 main.go:301] handling current node
	I0210 12:23:10.190815    5644 command_runner.go:130] ! I0210 12:12:30.446779       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.190815    5644 command_runner.go:130] ! I0210 12:12:30.446786       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.190815    5644 command_runner.go:130] ! I0210 12:12:30.447218       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:10.190815    5644 command_runner.go:130] ! I0210 12:12:30.447298       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:10.190815    5644 command_runner.go:130] ! I0210 12:12:40.445769       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.190815    5644 command_runner.go:130] ! I0210 12:12:40.445848       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.190815    5644 command_runner.go:130] ! I0210 12:12:40.446043       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:10.190815    5644 command_runner.go:130] ! I0210 12:12:40.446125       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:10.190815    5644 command_runner.go:130] ! I0210 12:12:40.446266       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.190815    5644 command_runner.go:130] ! I0210 12:12:40.446279       1 main.go:301] handling current node
	I0210 12:23:10.190815    5644 command_runner.go:130] ! I0210 12:12:50.446416       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.190815    5644 command_runner.go:130] ! I0210 12:12:50.446515       1 main.go:301] handling current node
	I0210 12:23:10.190815    5644 command_runner.go:130] ! I0210 12:12:50.446540       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.190815    5644 command_runner.go:130] ! I0210 12:12:50.446549       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.190815    5644 command_runner.go:130] ! I0210 12:12:50.447110       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:10.190815    5644 command_runner.go:130] ! I0210 12:12:50.447222       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:10.190815    5644 command_runner.go:130] ! I0210 12:13:00.445595       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.190815    5644 command_runner.go:130] ! I0210 12:13:00.445741       1 main.go:301] handling current node
	I0210 12:23:10.190815    5644 command_runner.go:130] ! I0210 12:13:00.445762       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.190815    5644 command_runner.go:130] ! I0210 12:13:00.445770       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.190815    5644 command_runner.go:130] ! I0210 12:13:00.446069       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:10.191348    5644 command_runner.go:130] ! I0210 12:13:00.446101       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:10.191348    5644 command_runner.go:130] ! I0210 12:13:10.454457       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.191348    5644 command_runner.go:130] ! I0210 12:13:10.454577       1 main.go:301] handling current node
	I0210 12:23:10.191393    5644 command_runner.go:130] ! I0210 12:13:10.454598       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.191393    5644 command_runner.go:130] ! I0210 12:13:10.454605       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.191393    5644 command_runner.go:130] ! I0210 12:13:10.455246       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:10.191393    5644 command_runner.go:130] ! I0210 12:13:10.455360       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:10.191439    5644 command_runner.go:130] ! I0210 12:13:20.446944       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.191439    5644 command_runner.go:130] ! I0210 12:13:20.447287       1 main.go:301] handling current node
	I0210 12:23:10.191439    5644 command_runner.go:130] ! I0210 12:13:20.447395       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.191439    5644 command_runner.go:130] ! I0210 12:13:20.447410       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.191505    5644 command_runner.go:130] ! I0210 12:13:20.447940       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:10.191505    5644 command_runner.go:130] ! I0210 12:13:20.448031       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:10.191505    5644 command_runner.go:130] ! I0210 12:13:30.446279       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.191505    5644 command_runner.go:130] ! I0210 12:13:30.446594       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.191505    5644 command_runner.go:130] ! I0210 12:13:30.446926       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:10.191505    5644 command_runner.go:130] ! I0210 12:13:30.447035       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:10.191597    5644 command_runner.go:130] ! I0210 12:13:30.447189       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.191597    5644 command_runner.go:130] ! I0210 12:13:30.447310       1 main.go:301] handling current node
	I0210 12:23:10.191597    5644 command_runner.go:130] ! I0210 12:13:40.446967       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.191597    5644 command_runner.go:130] ! I0210 12:13:40.447352       1 main.go:301] handling current node
	I0210 12:23:10.191661    5644 command_runner.go:130] ! I0210 12:13:40.447404       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.191661    5644 command_runner.go:130] ! I0210 12:13:40.447743       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.191725    5644 command_runner.go:130] ! I0210 12:13:40.448142       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:10.191725    5644 command_runner.go:130] ! I0210 12:13:40.448255       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:10.191725    5644 command_runner.go:130] ! I0210 12:13:50.446777       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.191725    5644 command_runner.go:130] ! I0210 12:13:50.446915       1 main.go:301] handling current node
	I0210 12:23:10.191790    5644 command_runner.go:130] ! I0210 12:13:50.446936       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.191790    5644 command_runner.go:130] ! I0210 12:13:50.447424       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.191790    5644 command_runner.go:130] ! I0210 12:13:50.447787       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:10.191897    5644 command_runner.go:130] ! I0210 12:13:50.447846       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:10.191897    5644 command_runner.go:130] ! I0210 12:14:00.446345       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.191897    5644 command_runner.go:130] ! I0210 12:14:00.446447       1 main.go:301] handling current node
	I0210 12:23:10.191897    5644 command_runner.go:130] ! I0210 12:14:00.446468       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.191897    5644 command_runner.go:130] ! I0210 12:14:00.446475       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.191897    5644 command_runner.go:130] ! I0210 12:14:00.447158       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:10.191897    5644 command_runner.go:130] ! I0210 12:14:00.447251       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:10.191897    5644 command_runner.go:130] ! I0210 12:14:10.454046       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.191897    5644 command_runner.go:130] ! I0210 12:14:10.454150       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.191897    5644 command_runner.go:130] ! I0210 12:14:10.454908       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:10.191897    5644 command_runner.go:130] ! I0210 12:14:10.454981       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:10.192466    5644 command_runner.go:130] ! I0210 12:14:10.455630       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.192466    5644 command_runner.go:130] ! I0210 12:14:10.455665       1 main.go:301] handling current node
	I0210 12:23:10.192515    5644 command_runner.go:130] ! I0210 12:14:20.447582       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.192515    5644 command_runner.go:130] ! I0210 12:14:20.447632       1 main.go:301] handling current node
	I0210 12:23:10.192515    5644 command_runner.go:130] ! I0210 12:14:20.447652       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.192552    5644 command_runner.go:130] ! I0210 12:14:20.447660       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.192590    5644 command_runner.go:130] ! I0210 12:14:20.447892       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:10.192590    5644 command_runner.go:130] ! I0210 12:14:20.447961       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:10.192626    5644 command_runner.go:130] ! I0210 12:14:30.445562       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.192626    5644 command_runner.go:130] ! I0210 12:14:30.445636       1 main.go:301] handling current node
	I0210 12:23:10.192626    5644 command_runner.go:130] ! I0210 12:14:30.445655       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.192665    5644 command_runner.go:130] ! I0210 12:14:30.445665       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.192665    5644 command_runner.go:130] ! I0210 12:14:30.446340       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:10.192712    5644 command_runner.go:130] ! I0210 12:14:30.446436       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:10.192712    5644 command_runner.go:130] ! I0210 12:14:40.445894       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.195508    5644 command_runner.go:130] ! I0210 12:14:40.445963       1 main.go:301] handling current node
	I0210 12:23:10.195508    5644 command_runner.go:130] ! I0210 12:14:40.446050       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.195508    5644 command_runner.go:130] ! I0210 12:14:40.446062       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.195508    5644 command_runner.go:130] ! I0210 12:14:40.446274       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:10.195508    5644 command_runner.go:130] ! I0210 12:14:40.446298       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:10.195508    5644 command_runner.go:130] ! I0210 12:14:50.446519       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.195508    5644 command_runner.go:130] ! I0210 12:14:50.446627       1 main.go:301] handling current node
	I0210 12:23:10.195508    5644 command_runner.go:130] ! I0210 12:14:50.446648       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.195508    5644 command_runner.go:130] ! I0210 12:14:50.446655       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.195508    5644 command_runner.go:130] ! I0210 12:14:50.447165       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:10.195508    5644 command_runner.go:130] ! I0210 12:14:50.447285       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:10.195508    5644 command_runner.go:130] ! I0210 12:15:00.452587       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.195508    5644 command_runner.go:130] ! I0210 12:15:00.452709       1 main.go:301] handling current node
	I0210 12:23:10.195508    5644 command_runner.go:130] ! I0210 12:15:00.452728       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.195508    5644 command_runner.go:130] ! I0210 12:15:00.452735       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.195508    5644 command_runner.go:130] ! I0210 12:15:00.452961       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:10.195508    5644 command_runner.go:130] ! I0210 12:15:00.452989       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:10.195508    5644 command_runner.go:130] ! I0210 12:15:10.453753       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.195508    5644 command_runner.go:130] ! I0210 12:15:10.453980       1 main.go:301] handling current node
	I0210 12:23:10.195508    5644 command_runner.go:130] ! I0210 12:15:10.455477       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.196055    5644 command_runner.go:130] ! I0210 12:15:10.455590       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.196055    5644 command_runner.go:130] ! I0210 12:15:10.456459       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:10.196055    5644 command_runner.go:130] ! I0210 12:15:10.456484       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:10.196102    5644 command_runner.go:130] ! I0210 12:15:20.445894       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.196102    5644 command_runner.go:130] ! I0210 12:15:20.446019       1 main.go:301] handling current node
	I0210 12:23:10.196102    5644 command_runner.go:130] ! I0210 12:15:20.446055       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.196102    5644 command_runner.go:130] ! I0210 12:15:20.446076       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.196102    5644 command_runner.go:130] ! I0210 12:15:20.446274       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:10.196102    5644 command_runner.go:130] ! I0210 12:15:20.446363       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:10.196102    5644 command_runner.go:130] ! I0210 12:15:30.446394       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.196102    5644 command_runner.go:130] ! I0210 12:15:30.446444       1 main.go:301] handling current node
	I0210 12:23:10.196102    5644 command_runner.go:130] ! I0210 12:15:30.446463       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.196102    5644 command_runner.go:130] ! I0210 12:15:30.446470       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.196102    5644 command_runner.go:130] ! I0210 12:15:30.446861       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:10.196102    5644 command_runner.go:130] ! I0210 12:15:30.446930       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:10.196102    5644 command_runner.go:130] ! I0210 12:15:40.453869       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.196102    5644 command_runner.go:130] ! I0210 12:15:40.454189       1 main.go:301] handling current node
	I0210 12:23:10.196102    5644 command_runner.go:130] ! I0210 12:15:40.454382       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.196102    5644 command_runner.go:130] ! I0210 12:15:40.454457       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.196102    5644 command_runner.go:130] ! I0210 12:15:40.454869       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:10.196102    5644 command_runner.go:130] ! I0210 12:15:40.454895       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:10.196102    5644 command_runner.go:130] ! I0210 12:15:50.446531       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.196102    5644 command_runner.go:130] ! I0210 12:15:50.446662       1 main.go:301] handling current node
	I0210 12:23:10.196102    5644 command_runner.go:130] ! I0210 12:15:50.446685       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.196102    5644 command_runner.go:130] ! I0210 12:15:50.446693       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.196102    5644 command_runner.go:130] ! I0210 12:15:50.447023       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:10.196102    5644 command_runner.go:130] ! I0210 12:15:50.447095       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:10.196102    5644 command_runner.go:130] ! I0210 12:16:00.446838       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.196102    5644 command_runner.go:130] ! I0210 12:16:00.447006       1 main.go:301] handling current node
	I0210 12:23:10.196102    5644 command_runner.go:130] ! I0210 12:16:00.447108       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.196102    5644 command_runner.go:130] ! I0210 12:16:00.447566       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.196102    5644 command_runner.go:130] ! I0210 12:16:00.448114       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:10.196102    5644 command_runner.go:130] ! I0210 12:16:00.448216       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:10.196102    5644 command_runner.go:130] ! I0210 12:16:10.445857       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.196102    5644 command_runner.go:130] ! I0210 12:16:10.445967       1 main.go:301] handling current node
	I0210 12:23:10.196102    5644 command_runner.go:130] ! I0210 12:16:10.445988       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.196102    5644 command_runner.go:130] ! I0210 12:16:10.445996       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.196102    5644 command_runner.go:130] ! I0210 12:16:10.446184       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:10.196102    5644 command_runner.go:130] ! I0210 12:16:10.446207       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:10.196102    5644 command_runner.go:130] ! I0210 12:16:20.453730       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.196102    5644 command_runner.go:130] ! I0210 12:16:20.453928       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.196102    5644 command_runner.go:130] ! I0210 12:16:20.454430       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:10.196102    5644 command_runner.go:130] ! I0210 12:16:20.454520       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:10.196102    5644 command_runner.go:130] ! I0210 12:16:20.454929       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.196102    5644 command_runner.go:130] ! I0210 12:16:20.454975       1 main.go:301] handling current node
	I0210 12:23:10.196102    5644 command_runner.go:130] ! I0210 12:16:30.445927       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.196102    5644 command_runner.go:130] ! I0210 12:16:30.446036       1 main.go:301] handling current node
	I0210 12:23:10.196102    5644 command_runner.go:130] ! I0210 12:16:30.446057       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.196102    5644 command_runner.go:130] ! I0210 12:16:30.446065       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.196102    5644 command_runner.go:130] ! I0210 12:16:30.446315       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:10.196628    5644 command_runner.go:130] ! I0210 12:16:30.446373       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:10.196628    5644 command_runner.go:130] ! I0210 12:16:40.446863       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:10.196628    5644 command_runner.go:130] ! I0210 12:16:40.446966       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:10.196628    5644 command_runner.go:130] ! I0210 12:16:40.447288       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.196628    5644 command_runner.go:130] ! I0210 12:16:40.447365       1 main.go:301] handling current node
	I0210 12:23:10.196710    5644 command_runner.go:130] ! I0210 12:16:40.447383       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.196710    5644 command_runner.go:130] ! I0210 12:16:40.447389       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.196736    5644 command_runner.go:130] ! I0210 12:16:50.447339       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.196759    5644 command_runner.go:130] ! I0210 12:16:50.447453       1 main.go:301] handling current node
	I0210 12:23:10.196759    5644 command_runner.go:130] ! I0210 12:16:50.447476       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.196759    5644 command_runner.go:130] ! I0210 12:16:50.447484       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.196759    5644 command_runner.go:130] ! I0210 12:16:50.448045       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:10.196759    5644 command_runner.go:130] ! I0210 12:16:50.448138       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:10.196759    5644 command_runner.go:130] ! I0210 12:17:00.447665       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.196759    5644 command_runner.go:130] ! I0210 12:17:00.447898       1 main.go:301] handling current node
	I0210 12:23:10.196759    5644 command_runner.go:130] ! I0210 12:17:00.447937       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.196759    5644 command_runner.go:130] ! I0210 12:17:00.448013       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.196759    5644 command_runner.go:130] ! I0210 12:17:00.448741       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:10.196759    5644 command_runner.go:130] ! I0210 12:17:00.448921       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:10.196759    5644 command_runner.go:130] ! I0210 12:17:10.453664       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.196759    5644 command_runner.go:130] ! I0210 12:17:10.453771       1 main.go:301] handling current node
	I0210 12:23:10.196759    5644 command_runner.go:130] ! I0210 12:17:10.453792       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.196759    5644 command_runner.go:130] ! I0210 12:17:10.453831       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.196759    5644 command_runner.go:130] ! I0210 12:17:10.454596       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:10.196759    5644 command_runner.go:130] ! I0210 12:17:10.454619       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:10.196759    5644 command_runner.go:130] ! I0210 12:17:20.453960       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.196759    5644 command_runner.go:130] ! I0210 12:17:20.454001       1 main.go:301] handling current node
	I0210 12:23:10.196759    5644 command_runner.go:130] ! I0210 12:17:20.454018       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.196759    5644 command_runner.go:130] ! I0210 12:17:20.454024       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.196759    5644 command_runner.go:130] ! I0210 12:17:20.454198       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:10.196759    5644 command_runner.go:130] ! I0210 12:17:20.454208       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:10.196759    5644 command_runner.go:130] ! I0210 12:17:30.445717       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.196759    5644 command_runner.go:130] ! I0210 12:17:30.445917       1 main.go:301] handling current node
	I0210 12:23:10.196759    5644 command_runner.go:130] ! I0210 12:17:30.445940       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.196759    5644 command_runner.go:130] ! I0210 12:17:30.445949       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.196759    5644 command_runner.go:130] ! I0210 12:17:40.452548       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.196759    5644 command_runner.go:130] ! I0210 12:17:40.452740       1 main.go:301] handling current node
	I0210 12:23:10.196759    5644 command_runner.go:130] ! I0210 12:17:40.452774       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.196759    5644 command_runner.go:130] ! I0210 12:17:40.452843       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.196759    5644 command_runner.go:130] ! I0210 12:17:40.453042       1 main.go:297] Handling node with IPs: map[172.29.129.10:{}]
	I0210 12:23:10.196759    5644 command_runner.go:130] ! I0210 12:17:40.453135       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.4.0/24] 
	I0210 12:23:10.196759    5644 command_runner.go:130] ! I0210 12:17:40.453247       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.4.0/24 Src: <nil> Gw: 172.29.129.10 Flags: [] Table: 0 Realm: 0} 
	I0210 12:23:10.196759    5644 command_runner.go:130] ! I0210 12:17:50.446275       1 main.go:297] Handling node with IPs: map[172.29.129.10:{}]
	I0210 12:23:10.196759    5644 command_runner.go:130] ! I0210 12:17:50.446319       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.4.0/24] 
	I0210 12:23:10.196759    5644 command_runner.go:130] ! I0210 12:17:50.447189       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.196759    5644 command_runner.go:130] ! I0210 12:17:50.447219       1 main.go:301] handling current node
	I0210 12:23:10.196759    5644 command_runner.go:130] ! I0210 12:17:50.447234       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.196759    5644 command_runner.go:130] ! I0210 12:17:50.447365       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.196759    5644 command_runner.go:130] ! I0210 12:18:00.449743       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.196759    5644 command_runner.go:130] ! I0210 12:18:00.449961       1 main.go:301] handling current node
	I0210 12:23:10.196759    5644 command_runner.go:130] ! I0210 12:18:00.449983       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.196759    5644 command_runner.go:130] ! I0210 12:18:00.449993       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.197290    5644 command_runner.go:130] ! I0210 12:18:00.450437       1 main.go:297] Handling node with IPs: map[172.29.129.10:{}]
	I0210 12:23:10.197290    5644 command_runner.go:130] ! I0210 12:18:00.450512       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.4.0/24] 
	I0210 12:23:10.197290    5644 command_runner.go:130] ! I0210 12:18:10.454513       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.197330    5644 command_runner.go:130] ! I0210 12:18:10.455074       1 main.go:301] handling current node
	I0210 12:23:10.197330    5644 command_runner.go:130] ! I0210 12:18:10.455189       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.197330    5644 command_runner.go:130] ! I0210 12:18:10.455203       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.197375    5644 command_runner.go:130] ! I0210 12:18:10.455514       1 main.go:297] Handling node with IPs: map[172.29.129.10:{}]
	I0210 12:23:10.197375    5644 command_runner.go:130] ! I0210 12:18:10.455628       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.4.0/24] 
	I0210 12:23:10.197397    5644 command_runner.go:130] ! I0210 12:18:20.446904       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.197397    5644 command_runner.go:130] ! I0210 12:18:20.446944       1 main.go:301] handling current node
	I0210 12:23:10.197440    5644 command_runner.go:130] ! I0210 12:18:20.446964       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.197440    5644 command_runner.go:130] ! I0210 12:18:20.446971       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.197440    5644 command_runner.go:130] ! I0210 12:18:20.447447       1 main.go:297] Handling node with IPs: map[172.29.129.10:{}]
	I0210 12:23:10.197440    5644 command_runner.go:130] ! I0210 12:18:20.447539       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.4.0/24] 
	I0210 12:23:10.197440    5644 command_runner.go:130] ! I0210 12:18:30.445669       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.197440    5644 command_runner.go:130] ! I0210 12:18:30.445724       1 main.go:301] handling current node
	I0210 12:23:10.197440    5644 command_runner.go:130] ! I0210 12:18:30.445744       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.197440    5644 command_runner.go:130] ! I0210 12:18:30.445752       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.197440    5644 command_runner.go:130] ! I0210 12:18:30.446236       1 main.go:297] Handling node with IPs: map[172.29.129.10:{}]
	I0210 12:23:10.197440    5644 command_runner.go:130] ! I0210 12:18:30.446332       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.4.0/24] 
	I0210 12:23:10.197440    5644 command_runner.go:130] ! I0210 12:18:40.449074       1 main.go:297] Handling node with IPs: map[172.29.129.10:{}]
	I0210 12:23:10.197440    5644 command_runner.go:130] ! I0210 12:18:40.449128       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.4.0/24] 
	I0210 12:23:10.197440    5644 command_runner.go:130] ! I0210 12:18:40.449535       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.197440    5644 command_runner.go:130] ! I0210 12:18:40.449551       1 main.go:301] handling current node
	I0210 12:23:10.197440    5644 command_runner.go:130] ! I0210 12:18:40.449565       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.197440    5644 command_runner.go:130] ! I0210 12:18:40.449570       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.197440    5644 command_runner.go:130] ! I0210 12:18:50.446047       1 main.go:297] Handling node with IPs: map[172.29.129.10:{}]
	I0210 12:23:10.197440    5644 command_runner.go:130] ! I0210 12:18:50.446175       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.4.0/24] 
	I0210 12:23:10.197440    5644 command_runner.go:130] ! I0210 12:18:50.446614       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.197440    5644 command_runner.go:130] ! I0210 12:18:50.446823       1 main.go:301] handling current node
	I0210 12:23:10.197440    5644 command_runner.go:130] ! I0210 12:18:50.446915       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.197440    5644 command_runner.go:130] ! I0210 12:18:50.447008       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.197440    5644 command_runner.go:130] ! I0210 12:19:00.448621       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.197440    5644 command_runner.go:130] ! I0210 12:19:00.448734       1 main.go:301] handling current node
	I0210 12:23:10.197440    5644 command_runner.go:130] ! I0210 12:19:00.448755       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.197440    5644 command_runner.go:130] ! I0210 12:19:00.448763       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.197440    5644 command_runner.go:130] ! I0210 12:19:00.449245       1 main.go:297] Handling node with IPs: map[172.29.129.10:{}]
	I0210 12:23:10.197440    5644 command_runner.go:130] ! I0210 12:19:00.449348       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.4.0/24] 
	I0210 12:23:10.197440    5644 command_runner.go:130] ! I0210 12:19:10.454264       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.197440    5644 command_runner.go:130] ! I0210 12:19:10.454382       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.197440    5644 command_runner.go:130] ! I0210 12:19:10.454633       1 main.go:297] Handling node with IPs: map[172.29.129.10:{}]
	I0210 12:23:10.197440    5644 command_runner.go:130] ! I0210 12:19:10.454659       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.4.0/24] 
	I0210 12:23:10.197440    5644 command_runner.go:130] ! I0210 12:19:10.454747       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.197440    5644 command_runner.go:130] ! I0210 12:19:10.454768       1 main.go:301] handling current node
	I0210 12:23:10.197440    5644 command_runner.go:130] ! I0210 12:19:20.454254       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.197440    5644 command_runner.go:130] ! I0210 12:19:20.454380       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.197440    5644 command_runner.go:130] ! I0210 12:19:20.454728       1 main.go:297] Handling node with IPs: map[172.29.129.10:{}]
	I0210 12:23:10.197440    5644 command_runner.go:130] ! I0210 12:19:20.454888       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.4.0/24] 
	I0210 12:23:10.197440    5644 command_runner.go:130] ! I0210 12:19:20.455123       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.197440    5644 command_runner.go:130] ! I0210 12:19:20.455136       1 main.go:301] handling current node
	I0210 12:23:10.197440    5644 command_runner.go:130] ! I0210 12:19:30.445655       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.197440    5644 command_runner.go:130] ! I0210 12:19:30.445708       1 main.go:301] handling current node
	I0210 12:23:10.197440    5644 command_runner.go:130] ! I0210 12:19:30.445727       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.197440    5644 command_runner.go:130] ! I0210 12:19:30.445734       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.197440    5644 command_runner.go:130] ! I0210 12:19:30.446565       1 main.go:297] Handling node with IPs: map[172.29.129.10:{}]
	I0210 12:23:10.197440    5644 command_runner.go:130] ! I0210 12:19:30.446658       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.4.0/24] 
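The kindnet entries above are its periodic (~10s) node-sync loop: each pass visits every node, logs "handling current node" for the local one, and keeps a route to each peer node's pod CIDR via that node's IP. The one notable event is at 12:17:40, when multinode-032400-m03 reappears with a new IP (172.29.129.10) and a new CIDR (10.244.4.0/24) and routes.go adds the matching route. Below is a minimal bash sketch of the equivalent host-route programming; the node IPs and CIDRs are taken from the log, while the use of "ip route replace" (instead of kindnet's netlink calls) is an assumption for illustration:

    # Sketch of the route state kindnet converges to after 12:17:40 (assumed
    # shell equivalent, not kindnet's actual implementation).
    # Keys are peer node IPs, values their pod CIDRs, all from the log above.
    declare -A POD_CIDR=(
      [172.29.143.51]="10.244.1.0/24"   # multinode-032400-m02
      [172.29.129.10]="10.244.4.0/24"   # multinode-032400-m03, new IP after restart
    )
    for gw in "${!POD_CIDR[@]}"; do
      # 'replace' is idempotent: installs the route, or rewrites the gateway
      # when a node comes back with a different IP, as m03 did here
      sudo ip route replace "${POD_CIDR[$gw]}" via "$gw"
    done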
	I0210 12:23:10.215323    5644 logs.go:123] Gathering logs for container status ...
	I0210 12:23:10.215323    5644 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 12:23:10.280244    5644 command_runner.go:130] > CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	I0210 12:23:10.280244    5644 command_runner.go:130] > ab1277406daa9       8c811b4aec35f                                                                                         7 seconds ago        Running             busybox                   1                   0f8bbdd2fed29       busybox-58667487b6-8shfg
	I0210 12:23:10.280339    5644 command_runner.go:130] > 9240ce80f94ce       c69fa2e9cbf5f                                                                                         7 seconds ago        Running             coredns                   1                   e58006549b603       coredns-668d6bf9bc-w8rr9
	I0210 12:23:10.280339    5644 command_runner.go:130] > 59ace13383a7f       6e38f40d628db                                                                                         27 seconds ago       Running             storage-provisioner       2                   bd998e6ebeb39       storage-provisioner
	I0210 12:23:10.280339    5644 command_runner.go:130] > efc2d4164d811       d300845f67aeb                                                                                         About a minute ago   Running             kindnet-cni               1                   e5b54589cf0f1       kindnet-c2mb8
	I0210 12:23:10.280416    5644 command_runner.go:130] > e57ea4c7f300b       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   bd998e6ebeb39       storage-provisioner
	I0210 12:23:10.280416    5644 command_runner.go:130] > 6640b4e3d696c       e29f9c7391fd9                                                                                         About a minute ago   Running             kube-proxy                1                   b9afdceca416d       kube-proxy-rrh82
	I0210 12:23:10.280474    5644 command_runner.go:130] > bd1666238ae65       019ee182b58e2                                                                                         About a minute ago   Running             kube-controller-manager   1                   9c3e574a33498       kube-controller-manager-multinode-032400
	I0210 12:23:10.280499    5644 command_runner.go:130] > f368bd8767741       95c0bda56fc4d                                                                                         About a minute ago   Running             kube-apiserver            0                   ae5696c38864a       kube-apiserver-multinode-032400
	I0210 12:23:10.280499    5644 command_runner.go:130] > 2c0b973818252       a9e7e6b294baf                                                                                         About a minute ago   Running             etcd                      0                   016ad4d720680       etcd-multinode-032400
	I0210 12:23:10.280564    5644 command_runner.go:130] > 440b6adf4512a       2b0d6572d062c                                                                                         About a minute ago   Running             kube-scheduler            1                   8059b20f65945       kube-scheduler-multinode-032400
	I0210 12:23:10.280564    5644 command_runner.go:130] > 8d0c6584f2b12       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   19 minutes ago       Exited              busybox                   0                   ed034f1578c96       busybox-58667487b6-8shfg
	I0210 12:23:10.280628    5644 command_runner.go:130] > c5b854dbb9121       c69fa2e9cbf5f                                                                                         23 minutes ago       Exited              coredns                   0                   794995bca6b5b       coredns-668d6bf9bc-w8rr9
	I0210 12:23:10.280628    5644 command_runner.go:130] > 4439940fa5f42       kindest/kindnetd@sha256:56ea59f77258052c4506076525318ffa66817500f68e94a50fdf7d600a280d26              23 minutes ago       Exited              kindnet-cni               0                   26d9e119a02c5       kindnet-c2mb8
	I0210 12:23:10.280628    5644 command_runner.go:130] > 148309413de8d       e29f9c7391fd9                                                                                         23 minutes ago       Exited              kube-proxy                0                   a70f430921ec2       kube-proxy-rrh82
	I0210 12:23:10.280704    5644 command_runner.go:130] > adf520f9b9d78       2b0d6572d062c                                                                                         24 minutes ago       Exited              kube-scheduler            0                   d33433fbce480       kube-scheduler-multinode-032400
	I0210 12:23:10.280704    5644 command_runner.go:130] > 9408ce83d7d38       019ee182b58e2                                                                                         24 minutes ago       Exited              kube-controller-manager   0                   ee16b295f58db       kube-controller-manager-multinode-032400
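Container status is gathered with a fallback: use crictl when present, otherwise fall back to docker ps -a (the "which crictl || echo crictl" trick in the command above). In the resulting table, ATTEMPT and STATE together tell the story of the node restart: the first-boot containers (ATTEMPT 0) are Exited, and their replacements (ATTEMPT 1, or 2 for storage-provisioner) are Running, which is why busybox, coredns, kindnet-cni, and kube-proxy each appear twice. A small hedged variant of the same fallback follows, narrowed to one container; only the name filter is new, the fallback shape is from the log:

    # Sketch of the crictl-or-docker fallback used above, with an illustrative
    # name filter added (the filter is not part of the original command).
    if command -v crictl >/dev/null 2>&1; then
      sudo crictl ps -a --name storage-provisioner
    else
      sudo docker ps -a --filter name=storage-provisioner
    fi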
	I0210 12:23:10.282993    5644 logs.go:123] Gathering logs for kube-scheduler [adf520f9b9d7] ...
	I0210 12:23:10.282993    5644 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adf520f9b9d7"
	I0210 12:23:10.310270    5644 command_runner.go:130] ! I0210 11:59:00.019140       1 serving.go:386] Generated self-signed cert in-memory
	I0210 12:23:10.310345    5644 command_runner.go:130] ! W0210 11:59:02.451878       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0210 12:23:10.310345    5644 command_runner.go:130] ! W0210 11:59:02.452178       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0210 12:23:10.310345    5644 command_runner.go:130] ! W0210 11:59:02.452350       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0210 12:23:10.310345    5644 command_runner.go:130] ! W0210 11:59:02.452478       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0210 12:23:10.310444    5644 command_runner.go:130] ! I0210 11:59:02.632458       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.1"
	I0210 12:23:10.310444    5644 command_runner.go:130] ! I0210 11:59:02.632517       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0210 12:23:10.310444    5644 command_runner.go:130] ! I0210 11:59:02.686485       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0210 12:23:10.310515    5644 command_runner.go:130] ! I0210 11:59:02.686744       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0210 12:23:10.310515    5644 command_runner.go:130] ! I0210 11:59:02.689142       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0210 12:23:10.310515    5644 command_runner.go:130] ! I0210 11:59:02.708240       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0210 12:23:10.310592    5644 command_runner.go:130] ! W0210 11:59:02.715958       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0210 12:23:10.310592    5644 command_runner.go:130] ! W0210 11:59:02.751571       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0210 12:23:10.310659    5644 command_runner.go:130] ! E0210 11:59:02.751658       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0210 12:23:10.310659    5644 command_runner.go:130] ! E0210 11:59:02.717894       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:10.310659    5644 command_runner.go:130] ! W0210 11:59:02.766153       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0210 12:23:10.310761    5644 command_runner.go:130] ! E0210 11:59:02.768039       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:10.310761    5644 command_runner.go:130] ! W0210 11:59:02.768257       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0210 12:23:10.310829    5644 command_runner.go:130] ! E0210 11:59:02.768346       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:10.310900    5644 command_runner.go:130] ! W0210 11:59:02.766789       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0210 12:23:10.310900    5644 command_runner.go:130] ! E0210 11:59:02.768584       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:10.310965    5644 command_runner.go:130] ! W0210 11:59:02.766885       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0210 12:23:10.310965    5644 command_runner.go:130] ! E0210 11:59:02.768838       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:10.311037    5644 command_runner.go:130] ! W0210 11:59:02.769507       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0210 12:23:10.311037    5644 command_runner.go:130] ! E0210 11:59:02.778960       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:10.311100    5644 command_runner.go:130] ! W0210 11:59:02.769773       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0210 12:23:10.311169    5644 command_runner.go:130] ! E0210 11:59:02.779013       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:10.311169    5644 command_runner.go:130] ! W0210 11:59:02.767082       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0210 12:23:10.311243    5644 command_runner.go:130] ! E0210 11:59:02.779037       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:10.311260    5644 command_runner.go:130] ! W0210 11:59:02.767143       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0210 12:23:10.311330    5644 command_runner.go:130] ! E0210 11:59:02.779057       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:10.311330    5644 command_runner.go:130] ! W0210 11:59:02.767174       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0210 12:23:10.311400    5644 command_runner.go:130] ! E0210 11:59:02.779079       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:10.311400    5644 command_runner.go:130] ! W0210 11:59:02.767205       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0210 12:23:10.311471    5644 command_runner.go:130] ! E0210 11:59:02.779095       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:10.311471    5644 command_runner.go:130] ! W0210 11:59:02.767318       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	I0210 12:23:10.311543    5644 command_runner.go:130] ! E0210 11:59:02.779525       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:10.311543    5644 command_runner.go:130] ! W0210 11:59:02.769947       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0210 12:23:10.311615    5644 command_runner.go:130] ! E0210 11:59:02.779843       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:10.311692    5644 command_runner.go:130] ! W0210 11:59:02.769992       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0210 12:23:10.311692    5644 command_runner.go:130] ! E0210 11:59:02.779885       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:10.311757    5644 command_runner.go:130] ! W0210 11:59:02.767047       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0210 12:23:10.311757    5644 command_runner.go:130] ! E0210 11:59:02.779962       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:10.311829    5644 command_runner.go:130] ! W0210 11:59:03.612263       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0210 12:23:10.311892    5644 command_runner.go:130] ! E0210 11:59:03.612405       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:10.311892    5644 command_runner.go:130] ! W0210 11:59:03.698062       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0210 12:23:10.311962    5644 command_runner.go:130] ! E0210 11:59:03.698491       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:10.311962    5644 command_runner.go:130] ! W0210 11:59:03.766764       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0210 12:23:10.312027    5644 command_runner.go:130] ! E0210 11:59:03.767296       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:10.312027    5644 command_runner.go:130] ! W0210 11:59:03.769299       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0210 12:23:10.312093    5644 command_runner.go:130] ! E0210 11:59:03.769340       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:10.312093    5644 command_runner.go:130] ! W0210 11:59:03.811212       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0210 12:23:10.312160    5644 command_runner.go:130] ! E0210 11:59:03.811686       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:10.312160    5644 command_runner.go:130] ! W0210 11:59:03.864096       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0210 12:23:10.312251    5644 command_runner.go:130] ! E0210 11:59:03.864216       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:10.312321    5644 command_runner.go:130] ! W0210 11:59:03.954246       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0210 12:23:10.312321    5644 command_runner.go:130] ! E0210 11:59:03.955266       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0210 12:23:10.312388    5644 command_runner.go:130] ! W0210 11:59:03.968978       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0210 12:23:10.312452    5644 command_runner.go:130] ! E0210 11:59:03.969083       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:10.312452    5644 command_runner.go:130] ! W0210 11:59:04.075142       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0210 12:23:10.312521    5644 command_runner.go:130] ! E0210 11:59:04.075319       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:10.312521    5644 command_runner.go:130] ! W0210 11:59:04.157608       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	I0210 12:23:10.312583    5644 command_runner.go:130] ! E0210 11:59:04.157748       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:10.312583    5644 command_runner.go:130] ! W0210 11:59:04.210028       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0210 12:23:10.312583    5644 command_runner.go:130] ! E0210 11:59:04.210075       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:10.312583    5644 command_runner.go:130] ! W0210 11:59:04.217612       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0210 12:23:10.312583    5644 command_runner.go:130] ! E0210 11:59:04.217750       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:10.312583    5644 command_runner.go:130] ! W0210 11:59:04.256700       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0210 12:23:10.312583    5644 command_runner.go:130] ! E0210 11:59:04.256904       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:10.312583    5644 command_runner.go:130] ! W0210 11:59:04.322264       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0210 12:23:10.312583    5644 command_runner.go:130] ! E0210 11:59:04.322526       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:10.312583    5644 command_runner.go:130] ! W0210 11:59:04.330202       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0210 12:23:10.312583    5644 command_runner.go:130] ! E0210 11:59:04.330584       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:10.312583    5644 command_runner.go:130] ! W0210 11:59:04.340076       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0210 12:23:10.312583    5644 command_runner.go:130] ! E0210 11:59:04.340159       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:10.312583    5644 command_runner.go:130] ! W0210 11:59:05.486363       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0210 12:23:10.312583    5644 command_runner.go:130] ! E0210 11:59:05.486509       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:10.312583    5644 command_runner.go:130] ! W0210 11:59:05.818248       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0210 12:23:10.312583    5644 command_runner.go:130] ! E0210 11:59:05.818617       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:10.312583    5644 command_runner.go:130] ! W0210 11:59:06.039223       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0210 12:23:10.312583    5644 command_runner.go:130] ! E0210 11:59:06.039458       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:10.312583    5644 command_runner.go:130] ! W0210 11:59:06.087664       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0210 12:23:10.313229    5644 command_runner.go:130] ! E0210 11:59:06.087837       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:10.313229    5644 command_runner.go:130] ! I0210 11:59:07.187201       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0210 12:23:10.313229    5644 command_runner.go:130] ! I0210 12:19:35.481531       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0210 12:23:10.313229    5644 command_runner.go:130] ! I0210 12:19:35.493085       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0210 12:23:10.313229    5644 command_runner.go:130] ! I0210 12:19:35.493424       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0210 12:23:10.313229    5644 command_runner.go:130] ! E0210 12:19:35.644426       1 run.go:72] "command failed" err="finished without leader elect"
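The scheduler log above shows the usual kubeadm bootstrap race: until the RBAC bindings for system:kube-scheduler exist, every informer list/watch returns "forbidden", so the reflector.go warnings pile up between 11:59:02 and 11:59:06 and then stop once "Caches are synced" lands at 11:59:07. The requestheader_controller warning even prints its own remedy; a hedged, filled-in instance is below (binding name and serviceaccount are illustrative placeholders substituted into the log's template, and no manual action was needed in this run since the bindings were reconciled on their own). The trailing 12:19:35 lines ("Stopped listening", "finished without leader elect") are just this exited container (adf520f9b9d7) shutting down when the node restarted.

    # Filled-in version of the rolebinding suggested by the warning above.
    # ROLEBINDING_NAME and YOUR_NS:YOUR_SA from the log template are replaced
    # with illustrative values; not actually required here.
    kubectl create rolebinding extension-apiserver-authentication-reader \
      -n kube-system \
      --role=extension-apiserver-authentication-reader \
      --serviceaccount=kube-system:kube-scheduler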
	I0210 12:23:10.326226    5644 logs.go:123] Gathering logs for kube-controller-manager [bd1666238ae6] ...
	I0210 12:23:10.326226    5644 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd1666238ae6"
	I0210 12:23:10.354224    5644 command_runner.go:130] ! I0210 12:21:56.136957       1 serving.go:386] Generated self-signed cert in-memory
	I0210 12:23:10.354224    5644 command_runner.go:130] ! I0210 12:21:57.522140       1 controllermanager.go:185] "Starting" version="v1.32.1"
	I0210 12:23:10.354543    5644 command_runner.go:130] ! I0210 12:21:57.522494       1 controllermanager.go:187] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0210 12:23:10.354543    5644 command_runner.go:130] ! I0210 12:21:57.526750       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0210 12:23:10.354543    5644 command_runner.go:130] ! I0210 12:21:57.527225       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0210 12:23:10.354543    5644 command_runner.go:130] ! I0210 12:21:57.527482       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0210 12:23:10.354543    5644 command_runner.go:130] ! I0210 12:21:57.527780       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0210 12:23:10.354543    5644 command_runner.go:130] ! I0210 12:22:00.130437       1 controllermanager.go:765] "Started controller" controller="serviceaccount-token-controller"
	I0210 12:23:10.354543    5644 command_runner.go:130] ! I0210 12:22:00.131309       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0210 12:23:10.354543    5644 command_runner.go:130] ! I0210 12:22:00.141220       1 controllermanager.go:765] "Started controller" controller="deployment-controller"
	I0210 12:23:10.354543    5644 command_runner.go:130] ! I0210 12:22:00.141440       1 deployment_controller.go:173] "Starting controller" logger="deployment-controller" controller="deployment"
	I0210 12:23:10.354543    5644 command_runner.go:130] ! I0210 12:22:00.141453       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0210 12:23:10.354672    5644 command_runner.go:130] ! I0210 12:22:00.144469       1 controllermanager.go:765] "Started controller" controller="replicaset-controller"
	I0210 12:23:10.354672    5644 command_runner.go:130] ! I0210 12:22:00.144719       1 replica_set.go:217] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0210 12:23:10.354794    5644 command_runner.go:130] ! I0210 12:22:00.144731       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0210 12:23:10.354794    5644 command_runner.go:130] ! I0210 12:22:00.152448       1 controllermanager.go:765] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0210 12:23:10.354875    5644 command_runner.go:130] ! I0210 12:22:00.152587       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0210 12:23:10.354875    5644 command_runner.go:130] ! I0210 12:22:00.152599       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0210 12:23:10.355414    5644 command_runner.go:130] ! I0210 12:22:00.158456       1 controllermanager.go:765] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0210 12:23:10.355414    5644 command_runner.go:130] ! I0210 12:22:00.158611       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0210 12:23:10.355414    5644 command_runner.go:130] ! I0210 12:22:00.162098       1 controllermanager.go:765] "Started controller" controller="bootstrap-signer-controller"
	I0210 12:23:10.355414    5644 command_runner.go:130] ! I0210 12:22:00.162345       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0210 12:23:10.355414    5644 command_runner.go:130] ! I0210 12:22:00.162310       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0210 12:23:10.355521    5644 command_runner.go:130] ! I0210 12:22:00.234708       1 shared_informer.go:320] Caches are synced for tokens
	I0210 12:23:10.355521    5644 command_runner.go:130] ! I0210 12:22:00.279835       1 controllermanager.go:765] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0210 12:23:10.355521    5644 command_runner.go:130] ! I0210 12:22:00.279920       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0210 12:23:10.355521    5644 command_runner.go:130] ! I0210 12:22:00.284387       1 controllermanager.go:765] "Started controller" controller="serviceaccount-controller"
	I0210 12:23:10.355521    5644 command_runner.go:130] ! I0210 12:22:00.284535       1 serviceaccounts_controller.go:114] "Starting service account controller" logger="serviceaccount-controller"
	I0210 12:23:10.355599    5644 command_runner.go:130] ! I0210 12:22:00.284562       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0210 12:23:10.355599    5644 command_runner.go:130] ! I0210 12:22:00.327944       1 namespace_controller.go:202] "Starting namespace controller" logger="namespace-controller"
	I0210 12:23:10.355599    5644 command_runner.go:130] ! I0210 12:22:00.330591       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0210 12:23:10.355599    5644 command_runner.go:130] ! I0210 12:22:00.327092       1 controllermanager.go:765] "Started controller" controller="namespace-controller"
	I0210 12:23:10.355599    5644 command_runner.go:130] ! I0210 12:22:00.346573       1 controllermanager.go:765] "Started controller" controller="disruption-controller"
	I0210 12:23:10.355599    5644 command_runner.go:130] ! I0210 12:22:00.346887       1 disruption.go:452] "Sending events to api server." logger="disruption-controller"
	I0210 12:23:10.355599    5644 command_runner.go:130] ! I0210 12:22:00.347031       1 disruption.go:463] "Starting disruption controller" logger="disruption-controller"
	I0210 12:23:10.355599    5644 command_runner.go:130] ! I0210 12:22:00.347049       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0210 12:23:10.355599    5644 command_runner.go:130] ! I0210 12:22:00.351852       1 controllermanager.go:765] "Started controller" controller="ttl-controller"
	I0210 12:23:10.355599    5644 command_runner.go:130] ! I0210 12:22:00.351879       1 controllermanager.go:723] "Skipping a cloud provider controller" controller="service-lb-controller"
	I0210 12:23:10.355599    5644 command_runner.go:130] ! I0210 12:22:00.351888       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="volumeattributesclass-protection-controller" requiredFeatureGates=["VolumeAttributesClass"]
	I0210 12:23:10.355599    5644 command_runner.go:130] ! I0210 12:22:00.354359       1 ttl_controller.go:127] "Starting TTL controller" logger="ttl-controller"
	I0210 12:23:10.355599    5644 command_runner.go:130] ! I0210 12:22:00.354950       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0210 12:23:10.355599    5644 command_runner.go:130] ! I0210 12:22:00.356835       1 controllermanager.go:765] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0210 12:23:10.355599    5644 command_runner.go:130] ! I0210 12:22:00.356898       1 publisher.go:107] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0210 12:23:10.355599    5644 command_runner.go:130] ! I0210 12:22:00.357416       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0210 12:23:10.355599    5644 command_runner.go:130] ! I0210 12:22:00.366037       1 controllermanager.go:765] "Started controller" controller="ephemeral-volume-controller"
	I0210 12:23:10.355599    5644 command_runner.go:130] ! I0210 12:22:00.367715       1 controller.go:173] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0210 12:23:10.355599    5644 command_runner.go:130] ! I0210 12:22:00.367737       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0210 12:23:10.355599    5644 command_runner.go:130] ! I0210 12:22:00.403903       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0210 12:23:10.355599    5644 command_runner.go:130] ! I0210 12:22:00.403962       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0210 12:23:10.355599    5644 command_runner.go:130] ! I0210 12:22:00.403986       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0210 12:23:10.355599    5644 command_runner.go:130] ! W0210 12:22:00.404002       1 shared_informer.go:597] resyncPeriod 20h28m18.826536572s is smaller than resyncCheckPeriod 23h57m31.932623877s and the informer has already started. Changing it to 23h57m31.932623877s
	I0210 12:23:10.355599    5644 command_runner.go:130] ! I0210 12:22:00.404054       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0210 12:23:10.355599    5644 command_runner.go:130] ! I0210 12:22:00.404070       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0210 12:23:10.355599    5644 command_runner.go:130] ! I0210 12:22:00.404083       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0210 12:23:10.355599    5644 command_runner.go:130] ! I0210 12:22:00.404215       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0210 12:23:10.355599    5644 command_runner.go:130] ! I0210 12:22:00.404325       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0210 12:23:10.355599    5644 command_runner.go:130] ! I0210 12:22:00.404361       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0210 12:23:10.355599    5644 command_runner.go:130] ! W0210 12:22:00.404375       1 shared_informer.go:597] resyncPeriod 19h58m52.828542411s is smaller than resyncCheckPeriod 23h57m31.932623877s and the informer has already started. Changing it to 23h57m31.932623877s
	I0210 12:23:10.355599    5644 command_runner.go:130] ! I0210 12:22:00.404428       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0210 12:23:10.356136    5644 command_runner.go:130] ! I0210 12:22:00.404501       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0210 12:23:10.356136    5644 command_runner.go:130] ! I0210 12:22:00.404548       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0210 12:23:10.356222    5644 command_runner.go:130] ! I0210 12:22:00.404581       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0210 12:23:10.356222    5644 command_runner.go:130] ! I0210 12:22:00.404616       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0210 12:23:10.356287    5644 command_runner.go:130] ! I0210 12:22:00.405026       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0210 12:23:10.356287    5644 command_runner.go:130] ! I0210 12:22:00.405085       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0210 12:23:10.356287    5644 command_runner.go:130] ! I0210 12:22:00.405102       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0210 12:23:10.356287    5644 command_runner.go:130] ! I0210 12:22:00.405117       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0210 12:23:10.356373    5644 command_runner.go:130] ! I0210 12:22:00.405133       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0210 12:23:10.356373    5644 command_runner.go:130] ! I0210 12:22:00.405155       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0210 12:23:10.356373    5644 command_runner.go:130] ! I0210 12:22:00.407446       1 controllermanager.go:765] "Started controller" controller="resourcequota-controller"
	I0210 12:23:10.356373    5644 command_runner.go:130] ! I0210 12:22:00.407747       1 resource_quota_controller.go:300] "Starting resource quota controller" logger="resourcequota-controller"
	I0210 12:23:10.356373    5644 command_runner.go:130] ! I0210 12:22:00.407814       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0210 12:23:10.356373    5644 command_runner.go:130] ! I0210 12:22:00.408146       1 resource_quota_monitor.go:308] "QuotaMonitor running" logger="resourcequota-controller"
	I0210 12:23:10.356373    5644 command_runner.go:130] ! I0210 12:22:00.416214       1 controllermanager.go:765] "Started controller" controller="cronjob-controller"
	I0210 12:23:10.356373    5644 command_runner.go:130] ! I0210 12:22:00.416425       1 controllermanager.go:723] "Skipping a cloud provider controller" controller="node-route-controller"
	I0210 12:23:10.356373    5644 command_runner.go:130] ! I0210 12:22:00.417001       1 cronjob_controllerv2.go:145] "Starting cronjob controller v2" logger="cronjob-controller"
	I0210 12:23:10.356373    5644 command_runner.go:130] ! I0210 12:22:00.418614       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0210 12:23:10.356373    5644 command_runner.go:130] ! I0210 12:22:00.448143       1 controllermanager.go:765] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0210 12:23:10.356373    5644 command_runner.go:130] ! I0210 12:22:00.448205       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0210 12:23:10.356373    5644 command_runner.go:130] ! I0210 12:22:00.453507       1 pvc_protection_controller.go:168] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0210 12:23:10.356373    5644 command_runner.go:130] ! I0210 12:22:00.453526       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0210 12:23:10.356373    5644 command_runner.go:130] ! I0210 12:22:00.457427       1 controllermanager.go:765] "Started controller" controller="pod-garbage-collector-controller"
	I0210 12:23:10.356373    5644 command_runner.go:130] ! I0210 12:22:00.457525       1 gc_controller.go:99] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0210 12:23:10.356373    5644 command_runner.go:130] ! I0210 12:22:00.457536       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0210 12:23:10.356373    5644 command_runner.go:130] ! I0210 12:22:00.461217       1 controllermanager.go:765] "Started controller" controller="job-controller"
	I0210 12:23:10.356373    5644 command_runner.go:130] ! I0210 12:22:00.461528       1 job_controller.go:243] "Starting job controller" logger="job-controller"
	I0210 12:23:10.356373    5644 command_runner.go:130] ! I0210 12:22:00.461540       1 shared_informer.go:313] Waiting for caches to sync for job
	I0210 12:23:10.356373    5644 command_runner.go:130] ! I0210 12:22:00.473609       1 controllermanager.go:765] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0210 12:23:10.356373    5644 command_runner.go:130] ! I0210 12:22:00.473750       1 horizontal.go:201] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0210 12:23:10.356373    5644 command_runner.go:130] ! I0210 12:22:00.476529       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0210 12:23:10.356373    5644 command_runner.go:130] ! I0210 12:22:00.478245       1 controllermanager.go:765] "Started controller" controller="persistentvolume-expander-controller"
	I0210 12:23:10.356373    5644 command_runner.go:130] ! I0210 12:22:00.478384       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0210 12:23:10.356373    5644 command_runner.go:130] ! I0210 12:22:00.478413       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0210 12:23:10.356373    5644 command_runner.go:130] ! I0210 12:22:00.486564       1 controllermanager.go:765] "Started controller" controller="ttl-after-finished-controller"
	I0210 12:23:10.356373    5644 command_runner.go:130] ! I0210 12:22:00.490692       1 ttlafterfinished_controller.go:112] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0210 12:23:10.356373    5644 command_runner.go:130] ! I0210 12:22:00.490721       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0210 12:23:10.356373    5644 command_runner.go:130] ! I0210 12:22:00.491067       1 controllermanager.go:765] "Started controller" controller="endpointslice-controller"
	I0210 12:23:10.356373    5644 command_runner.go:130] ! I0210 12:22:00.491429       1 endpointslice_controller.go:281] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0210 12:23:10.356373    5644 command_runner.go:130] ! I0210 12:22:00.492232       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0210 12:23:10.356373    5644 command_runner.go:130] ! I0210 12:22:00.495646       1 controllermanager.go:765] "Started controller" controller="endpointslice-mirroring-controller"
	I0210 12:23:10.356373    5644 command_runner.go:130] ! I0210 12:22:00.500509       1 endpointslicemirroring_controller.go:227] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0210 12:23:10.356373    5644 command_runner.go:130] ! I0210 12:22:00.500524       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0210 12:23:10.356373    5644 command_runner.go:130] ! I0210 12:22:00.515593       1 controllermanager.go:765] "Started controller" controller="garbage-collector-controller"
	I0210 12:23:10.356373    5644 command_runner.go:130] ! I0210 12:22:00.515770       1 garbagecollector.go:144] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0210 12:23:10.356373    5644 command_runner.go:130] ! I0210 12:22:00.515782       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0210 12:23:10.356373    5644 command_runner.go:130] ! I0210 12:22:00.515950       1 graph_builder.go:351] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0210 12:23:10.356373    5644 command_runner.go:130] ! I0210 12:22:00.525570       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0210 12:23:10.356911    5644 command_runner.go:130] ! I0210 12:22:00.525594       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0210 12:23:10.357015    5644 command_runner.go:130] ! I0210 12:22:00.525618       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0210 12:23:10.357048    5644 command_runner.go:130] ! I0210 12:22:00.525997       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0210 12:23:10.357070    5644 command_runner.go:130] ! I0210 12:22:00.526011       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0210 12:23:10.357070    5644 command_runner.go:130] ! I0210 12:22:00.526038       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0210 12:23:10.357120    5644 command_runner.go:130] ! I0210 12:22:00.526889       1 controllermanager.go:765] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0210 12:23:10.357120    5644 command_runner.go:130] ! I0210 12:22:00.526935       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0210 12:23:10.357120    5644 command_runner.go:130] ! I0210 12:22:00.526945       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0210 12:23:10.357120    5644 command_runner.go:130] ! I0210 12:22:00.526972       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0210 12:23:10.357190    5644 command_runner.go:130] ! I0210 12:22:00.526980       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0210 12:23:10.357190    5644 command_runner.go:130] ! I0210 12:22:00.527008       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0210 12:23:10.357190    5644 command_runner.go:130] ! I0210 12:22:00.527135       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0210 12:23:10.357260    5644 command_runner.go:130] ! W0210 12:22:00.695736       1 type.go:183] The watchlist request for nodes ended with an error, falling back to the standard LIST semantics, err = nodes is forbidden: User "system:serviceaccount:kube-system:node-controller" cannot watch resource "nodes" in API group "" at the cluster scope
	I0210 12:23:10.357301    5644 command_runner.go:130] ! I0210 12:22:00.710455       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0210 12:23:10.357301    5644 command_runner.go:130] ! I0210 12:22:00.710510       1 controllermanager.go:765] "Started controller" controller="node-ipam-controller"
	I0210 12:23:10.357380    5644 command_runner.go:130] ! I0210 12:22:00.710723       1 node_ipam_controller.go:141] "Starting ipam controller" logger="node-ipam-controller"
	I0210 12:23:10.357380    5644 command_runner.go:130] ! I0210 12:22:00.710737       1 shared_informer.go:313] Waiting for caches to sync for node
	I0210 12:23:10.357411    5644 command_runner.go:130] ! I0210 12:22:00.739126       1 node_lifecycle_controller.go:432] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0210 12:23:10.357411    5644 command_runner.go:130] ! I0210 12:22:00.739307       1 controllermanager.go:765] "Started controller" controller="node-lifecycle-controller"
	I0210 12:23:10.357411    5644 command_runner.go:130] ! I0210 12:22:00.739552       1 node_lifecycle_controller.go:466] "Sending events to api server" logger="node-lifecycle-controller"
	I0210 12:23:10.357411    5644 command_runner.go:130] ! I0210 12:22:00.739769       1 node_lifecycle_controller.go:477] "Starting node controller" logger="node-lifecycle-controller"
	I0210 12:23:10.357411    5644 command_runner.go:130] ! I0210 12:22:00.739879       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0210 12:23:10.357482    5644 command_runner.go:130] ! I0210 12:22:00.790336       1 controllermanager.go:765] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0210 12:23:10.357482    5644 command_runner.go:130] ! I0210 12:22:00.790542       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0210 12:23:10.357482    5644 command_runner.go:130] ! I0210 12:22:00.790764       1 attach_detach_controller.go:338] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0210 12:23:10.357548    5644 command_runner.go:130] ! I0210 12:22:00.790827       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0210 12:23:10.357548    5644 command_runner.go:130] ! I0210 12:22:00.837132       1 controllermanager.go:765] "Started controller" controller="endpoints-controller"
	I0210 12:23:10.357548    5644 command_runner.go:130] ! I0210 12:22:00.837610       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="selinux-warning-controller" requiredFeatureGates=["SELinuxChangePolicy"]
	I0210 12:23:10.357548    5644 command_runner.go:130] ! I0210 12:22:00.838001       1 endpoints_controller.go:182] "Starting endpoint controller" logger="endpoints-controller"
	I0210 12:23:10.357548    5644 command_runner.go:130] ! I0210 12:22:00.838149       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0210 12:23:10.357626    5644 command_runner.go:130] ! I0210 12:22:00.889036       1 controllermanager.go:765] "Started controller" controller="daemonset-controller"
	I0210 12:23:10.357626    5644 command_runner.go:130] ! I0210 12:22:00.889446       1 daemon_controller.go:294] "Starting daemon sets controller" logger="daemonset-controller"
	I0210 12:23:10.357626    5644 command_runner.go:130] ! I0210 12:22:00.889702       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0210 12:23:10.357626    5644 command_runner.go:130] ! I0210 12:22:00.947566       1 controllermanager.go:765] "Started controller" controller="token-cleaner-controller"
	I0210 12:23:10.357689    5644 command_runner.go:130] ! I0210 12:22:00.947979       1 tokencleaner.go:117] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0210 12:23:10.357689    5644 command_runner.go:130] ! I0210 12:22:00.948130       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0210 12:23:10.357689    5644 command_runner.go:130] ! I0210 12:22:00.948247       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0210 12:23:10.357760    5644 command_runner.go:130] ! I0210 12:22:00.998978       1 controllermanager.go:765] "Started controller" controller="persistentvolume-binder-controller"
	I0210 12:23:10.357760    5644 command_runner.go:130] ! I0210 12:22:00.999002       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="kube-apiserver-serving-clustertrustbundle-publisher-controller" requiredFeatureGates=["ClusterTrustBundle"]
	I0210 12:23:10.357760    5644 command_runner.go:130] ! I0210 12:22:00.999105       1 pv_controller_base.go:308] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0210 12:23:10.357823    5644 command_runner.go:130] ! I0210 12:22:00.999117       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0210 12:23:10.357823    5644 command_runner.go:130] ! I0210 12:22:01.040388       1 controllermanager.go:765] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0210 12:23:10.357823    5644 command_runner.go:130] ! I0210 12:22:01.040661       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0210 12:23:10.357901    5644 command_runner.go:130] ! I0210 12:22:01.041004       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0210 12:23:10.357901    5644 command_runner.go:130] ! I0210 12:22:01.087635       1 controllermanager.go:765] "Started controller" controller="taint-eviction-controller"
	I0210 12:23:10.357901    5644 command_runner.go:130] ! I0210 12:22:01.088431       1 controllermanager.go:723] "Skipping a cloud provider controller" controller="cloud-node-lifecycle-controller"
	I0210 12:23:10.357963    5644 command_runner.go:130] ! I0210 12:22:01.088403       1 taint_eviction.go:281] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0210 12:23:10.357963    5644 command_runner.go:130] ! I0210 12:22:01.088651       1 taint_eviction.go:287] "Sending events to api server" logger="taint-eviction-controller"
	I0210 12:23:10.357963    5644 command_runner.go:130] ! I0210 12:22:01.088700       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0210 12:23:10.357963    5644 command_runner.go:130] ! I0210 12:22:01.140802       1 controllermanager.go:765] "Started controller" controller="clusterrole-aggregation-controller"
	I0210 12:23:10.358031    5644 command_runner.go:130] ! I0210 12:22:01.140881       1 clusterroleaggregation_controller.go:194] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0210 12:23:10.358031    5644 command_runner.go:130] ! I0210 12:22:01.140893       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0210 12:23:10.358093    5644 command_runner.go:130] ! I0210 12:22:01.188353       1 controllermanager.go:765] "Started controller" controller="persistentvolume-protection-controller"
	I0210 12:23:10.358093    5644 command_runner.go:130] ! I0210 12:22:01.188708       1 controllermanager.go:743] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0210 12:23:10.358093    5644 command_runner.go:130] ! I0210 12:22:01.188662       1 pv_protection_controller.go:81] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0210 12:23:10.358169    5644 command_runner.go:130] ! I0210 12:22:01.189570       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0210 12:23:10.358169    5644 command_runner.go:130] ! I0210 12:22:01.238308       1 controllermanager.go:765] "Started controller" controller="replicationcontroller-controller"
	I0210 12:23:10.358169    5644 command_runner.go:130] ! I0210 12:22:01.239287       1 replica_set.go:217] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0210 12:23:10.358228    5644 command_runner.go:130] ! I0210 12:22:01.239614       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0210 12:23:10.358228    5644 command_runner.go:130] ! I0210 12:22:01.290486       1 controllermanager.go:765] "Started controller" controller="statefulset-controller"
	I0210 12:23:10.358228    5644 command_runner.go:130] ! I0210 12:22:01.297980       1 stateful_set.go:166] "Starting stateful set controller" logger="statefulset-controller"
	I0210 12:23:10.358228    5644 command_runner.go:130] ! I0210 12:22:01.298004       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0210 12:23:10.358294    5644 command_runner.go:130] ! I0210 12:22:01.330472       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0210 12:23:10.358294    5644 command_runner.go:130] ! I0210 12:22:01.360391       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0210 12:23:10.358294    5644 command_runner.go:130] ! I0210 12:22:01.379524       1 shared_informer.go:320] Caches are synced for expand
	I0210 12:23:10.358356    5644 command_runner.go:130] ! I0210 12:22:01.412039       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0210 12:23:10.358356    5644 command_runner.go:130] ! I0210 12:22:01.427926       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-032400\" does not exist"
	I0210 12:23:10.358356    5644 command_runner.go:130] ! I0210 12:22:01.429792       1 shared_informer.go:320] Caches are synced for cronjob
	I0210 12:23:10.358430    5644 command_runner.go:130] ! I0210 12:22:01.431083       1 shared_informer.go:320] Caches are synced for namespace
	I0210 12:23:10.358430    5644 command_runner.go:130] ! I0210 12:22:01.433127       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-032400-m02\" does not exist"
	I0210 12:23:10.358430    5644 command_runner.go:130] ! I0210 12:22:01.438586       1 shared_informer.go:320] Caches are synced for endpoint
	I0210 12:23:10.358491    5644 command_runner.go:130] ! I0210 12:22:01.455792       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-032400-m03\" does not exist"
	I0210 12:23:10.358532    5644 command_runner.go:130] ! I0210 12:22:01.443963       1 shared_informer.go:320] Caches are synced for deployment
	I0210 12:23:10.358532    5644 command_runner.go:130] ! I0210 12:22:01.458494       1 shared_informer.go:320] Caches are synced for GC
	I0210 12:23:10.358581    5644 command_runner.go:130] ! I0210 12:22:01.458605       1 shared_informer.go:320] Caches are synced for crt configmap
	I0210 12:23:10.358581    5644 command_runner.go:130] ! I0210 12:22:01.462564       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0210 12:23:10.358581    5644 command_runner.go:130] ! I0210 12:22:01.463137       1 shared_informer.go:320] Caches are synced for job
	I0210 12:23:10.358581    5644 command_runner.go:130] ! I0210 12:22:01.470663       1 shared_informer.go:320] Caches are synced for garbage collector
	I0210 12:23:10.358646    5644 command_runner.go:130] ! I0210 12:22:01.454359       1 shared_informer.go:320] Caches are synced for PVC protection
	I0210 12:23:10.358646    5644 command_runner.go:130] ! I0210 12:22:01.454660       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0210 12:23:10.358646    5644 command_runner.go:130] ! I0210 12:22:01.454672       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0210 12:23:10.358646    5644 command_runner.go:130] ! I0210 12:22:01.454682       1 shared_informer.go:320] Caches are synced for disruption
	I0210 12:23:10.358646    5644 command_runner.go:130] ! I0210 12:22:01.455335       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0210 12:23:10.358734    5644 command_runner.go:130] ! I0210 12:22:01.455353       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0210 12:23:10.358734    5644 command_runner.go:130] ! I0210 12:22:01.455645       1 shared_informer.go:320] Caches are synced for TTL
	I0210 12:23:10.358764    5644 command_runner.go:130] ! I0210 12:22:01.455857       1 shared_informer.go:320] Caches are synced for taint
	I0210 12:23:10.358764    5644 command_runner.go:130] ! I0210 12:22:01.479260       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0210 12:23:10.358812    5644 command_runner.go:130] ! I0210 12:22:01.455957       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0210 12:23:10.358812    5644 command_runner.go:130] ! I0210 12:22:01.480860       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0210 12:23:10.358812    5644 command_runner.go:130] ! I0210 12:22:01.471787       1 shared_informer.go:320] Caches are synced for ephemeral
	I0210 12:23:10.358881    5644 command_runner.go:130] ! I0210 12:22:01.488921       1 shared_informer.go:320] Caches are synced for HPA
	I0210 12:23:10.358881    5644 command_runner.go:130] ! I0210 12:22:01.489141       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0210 12:23:10.358881    5644 command_runner.go:130] ! I0210 12:22:01.489425       1 shared_informer.go:320] Caches are synced for service account
	I0210 12:23:10.358946    5644 command_runner.go:130] ! I0210 12:22:01.489837       1 shared_informer.go:320] Caches are synced for PV protection
	I0210 12:23:10.358946    5644 command_runner.go:130] ! I0210 12:22:01.490060       1 shared_informer.go:320] Caches are synced for daemon sets
	I0210 12:23:10.358946    5644 command_runner.go:130] ! I0210 12:22:01.492366       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0210 12:23:10.358946    5644 command_runner.go:130] ! I0210 12:22:01.492536       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0210 12:23:10.359021    5644 command_runner.go:130] ! I0210 12:22:01.492675       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-032400-m02"
	I0210 12:23:10.359021    5644 command_runner.go:130] ! I0210 12:22:01.492787       1 shared_informer.go:320] Caches are synced for attach detach
	I0210 12:23:10.359021    5644 command_runner.go:130] ! I0210 12:22:01.498224       1 shared_informer.go:320] Caches are synced for stateful set
	I0210 12:23:10.359096    5644 command_runner.go:130] ! I0210 12:22:01.499494       1 shared_informer.go:320] Caches are synced for persistent volume
	I0210 12:23:10.359096    5644 command_runner.go:130] ! I0210 12:22:01.515907       1 shared_informer.go:320] Caches are synced for garbage collector
	I0210 12:23:10.359096    5644 command_runner.go:130] ! I0210 12:22:01.518475       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0210 12:23:10.359152    5644 command_runner.go:130] ! I0210 12:22:01.518619       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0210 12:23:10.359152    5644 command_runner.go:130] ! I0210 12:22:01.517754       1 shared_informer.go:320] Caches are synced for node
	I0210 12:23:10.359152    5644 command_runner.go:130] ! I0210 12:22:01.519209       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0210 12:23:10.359152    5644 command_runner.go:130] ! I0210 12:22:01.519352       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0210 12:23:10.359152    5644 command_runner.go:130] ! I0210 12:22:01.517867       1 shared_informer.go:320] Caches are synced for resource quota
	I0210 12:23:10.359234    5644 command_runner.go:130] ! I0210 12:22:01.521228       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0210 12:23:10.359234    5644 command_runner.go:130] ! I0210 12:22:01.521505       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0210 12:23:10.359234    5644 command_runner.go:130] ! I0210 12:22:01.521662       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400"
	I0210 12:23:10.359295    5644 command_runner.go:130] ! I0210 12:22:01.521756       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:23:10.359295    5644 command_runner.go:130] ! I0210 12:22:01.521924       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:10.359295    5644 command_runner.go:130] ! I0210 12:22:01.522649       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-032400-m02"
	I0210 12:23:10.359360    5644 command_runner.go:130] ! I0210 12:22:01.522926       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-032400-m03"
	I0210 12:23:10.359360    5644 command_runner.go:130] ! I0210 12:22:01.523055       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-032400"
	I0210 12:23:10.359425    5644 command_runner.go:130] ! I0210 12:22:01.522650       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:10.359425    5644 command_runner.go:130] ! I0210 12:22:01.523304       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0210 12:23:10.359425    5644 command_runner.go:130] ! I0210 12:22:01.526544       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0210 12:23:10.359501    5644 command_runner.go:130] ! I0210 12:22:01.526740       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0210 12:23:10.359501    5644 command_runner.go:130] ! I0210 12:22:01.527233       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0210 12:23:10.359501    5644 command_runner.go:130] ! I0210 12:22:01.527235       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0210 12:23:10.359561    5644 command_runner.go:130] ! I0210 12:22:01.531258       1 shared_informer.go:320] Caches are synced for resource quota
	I0210 12:23:10.359561    5644 command_runner.go:130] ! I0210 12:22:01.620608       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:10.359561    5644 command_runner.go:130] ! I0210 12:22:01.660535       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="183.150017ms"
	I0210 12:23:10.359561    5644 command_runner.go:130] ! I0210 12:22:01.660786       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="196.91µs"
	I0210 12:23:10.359637    5644 command_runner.go:130] ! I0210 12:22:01.669840       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="192.074947ms"
	I0210 12:23:10.359637    5644 command_runner.go:130] ! I0210 12:22:01.679112       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="47.103µs"
	I0210 12:23:10.359697    5644 command_runner.go:130] ! I0210 12:22:11.608842       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400"
	I0210 12:23:10.359697    5644 command_runner.go:130] ! I0210 12:22:49.026601       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-032400-m02"
	I0210 12:23:10.359697    5644 command_runner.go:130] ! I0210 12:22:49.027936       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400"
	I0210 12:23:10.359773    5644 command_runner.go:130] ! I0210 12:22:49.051398       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400"
	I0210 12:23:10.359773    5644 command_runner.go:130] ! I0210 12:22:51.552649       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:23:10.359773    5644 command_runner.go:130] ! I0210 12:22:51.561524       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400"
	I0210 12:23:10.359773    5644 command_runner.go:130] ! I0210 12:22:51.579437       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:23:10.359834    5644 command_runner.go:130] ! I0210 12:22:51.629083       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="17.615623ms"
	I0210 12:23:10.359899    5644 command_runner.go:130] ! I0210 12:22:51.629955       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="714.433µs"
	I0210 12:23:10.359899    5644 command_runner.go:130] ! I0210 12:22:56.656809       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:23:10.359899    5644 command_runner.go:130] ! I0210 12:23:04.379320       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="10.532877ms"
	I0210 12:23:10.359958    5644 command_runner.go:130] ! I0210 12:23:04.379580       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="104.602µs"
	I0210 12:23:10.359958    5644 command_runner.go:130] ! I0210 12:23:04.418725       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="58.001µs"
	I0210 12:23:10.359958    5644 command_runner.go:130] ! I0210 12:23:04.463938       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="16.341175ms"
	I0210 12:23:10.360038    5644 command_runner.go:130] ! I0210 12:23:04.464695       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="51.6µs"
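Every controller in the block above follows the same shared-informer lifecycle: "Started controller", then "Waiting for caches to sync" while the initial List runs, then "Caches are synced", and only after that do the reconcile loops (the replica_set "Finished syncing" lines) begin. A minimal client-go sketch of that handshake, assuming a kubeconfig at the default location:

	// informer_sync.go - illustrative sketch of the cache-sync pattern logged above.
	package main

	import (
		"log"
		"time"

		"k8s.io/client-go/informers"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/cache"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			log.Fatal(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)

		factory := informers.NewSharedInformerFactory(cs, 30*time.Second)
		pods := factory.Core().V1().Pods().Informer()

		stop := make(chan struct{})
		defer close(stop)
		factory.Start(stop) // kicks off List+Watch for every informer requested above

		// The "Waiting for caches to sync" phase: block until the initial List
		// has populated the local store, since reconciling from an empty cache
		// would delete or recreate objects spuriously.
		if !cache.WaitForCacheSync(stop, pods.HasSynced) {
			log.Fatal("timed out waiting for caches to sync")
		}
		log.Println("caches are synced; workers may start") // cf. shared_informer.go:320
	}

The resyncPeriod warnings at 12:22:00.404002 and 12:22:00.404375 come from the same machinery: a quota monitor asked for a resync shorter than the factory's resyncCheckPeriod after the informer had already started, so the period is clamped upward rather than the informer being restarted.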
	I0210 12:23:10.383067    5644 logs.go:123] Gathering logs for Docker ...
	I0210 12:23:10.383067    5644 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0210 12:23:10.406064    5644 command_runner.go:130] > Feb 10 12:20:33 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0210 12:23:10.406064    5644 command_runner.go:130] > Feb 10 12:20:33 minikube cri-dockerd[223]: time="2025-02-10T12:20:33Z" level=info msg="Starting cri-dockerd 0.3.15 (c1c566e)"
	I0210 12:23:10.406064    5644 command_runner.go:130] > Feb 10 12:20:33 minikube cri-dockerd[223]: time="2025-02-10T12:20:33Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0210 12:23:10.406064    5644 command_runner.go:130] > Feb 10 12:20:33 minikube cri-dockerd[223]: time="2025-02-10T12:20:33Z" level=info msg="Start docker client with request timeout 0s"
	I0210 12:23:10.406828    5644 command_runner.go:130] > Feb 10 12:20:33 minikube cri-dockerd[223]: time="2025-02-10T12:20:33Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0210 12:23:10.406828    5644 command_runner.go:130] > Feb 10 12:20:33 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0210 12:23:10.406828    5644 command_runner.go:130] > Feb 10 12:20:33 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0210 12:23:10.406828    5644 command_runner.go:130] > Feb 10 12:20:33 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0210 12:23:10.406828    5644 command_runner.go:130] > Feb 10 12:20:36 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 1.
	I0210 12:23:10.406933    5644 command_runner.go:130] > Feb 10 12:20:36 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0210 12:23:10.406933    5644 command_runner.go:130] > Feb 10 12:20:36 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0210 12:23:10.406960    5644 command_runner.go:130] > Feb 10 12:20:36 minikube cri-dockerd[417]: time="2025-02-10T12:20:36Z" level=info msg="Starting cri-dockerd 0.3.15 (c1c566e)"
	I0210 12:23:10.406960    5644 command_runner.go:130] > Feb 10 12:20:36 minikube cri-dockerd[417]: time="2025-02-10T12:20:36Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0210 12:23:10.407252    5644 command_runner.go:130] > Feb 10 12:20:36 minikube cri-dockerd[417]: time="2025-02-10T12:20:36Z" level=info msg="Start docker client with request timeout 0s"
	I0210 12:23:10.407308    5644 command_runner.go:130] > Feb 10 12:20:36 minikube cri-dockerd[417]: time="2025-02-10T12:20:36Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0210 12:23:10.407308    5644 command_runner.go:130] > Feb 10 12:20:36 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0210 12:23:10.407308    5644 command_runner.go:130] > Feb 10 12:20:36 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0210 12:23:10.407371    5644 command_runner.go:130] > Feb 10 12:20:36 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0210 12:23:10.407371    5644 command_runner.go:130] > Feb 10 12:20:38 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 2.
	I0210 12:23:10.407371    5644 command_runner.go:130] > Feb 10 12:20:38 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0210 12:23:10.407371    5644 command_runner.go:130] > Feb 10 12:20:38 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0210 12:23:10.407431    5644 command_runner.go:130] > Feb 10 12:20:38 minikube cri-dockerd[425]: time="2025-02-10T12:20:38Z" level=info msg="Starting cri-dockerd 0.3.15 (c1c566e)"
	I0210 12:23:10.407431    5644 command_runner.go:130] > Feb 10 12:20:38 minikube cri-dockerd[425]: time="2025-02-10T12:20:38Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0210 12:23:10.407480    5644 command_runner.go:130] > Feb 10 12:20:38 minikube cri-dockerd[425]: time="2025-02-10T12:20:38Z" level=info msg="Start docker client with request timeout 0s"
	I0210 12:23:10.407480    5644 command_runner.go:130] > Feb 10 12:20:38 minikube cri-dockerd[425]: time="2025-02-10T12:20:38Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0210 12:23:10.407513    5644 command_runner.go:130] > Feb 10 12:20:38 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0210 12:23:10.407550    5644 command_runner.go:130] > Feb 10 12:20:38 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0210 12:23:10.407550    5644 command_runner.go:130] > Feb 10 12:20:38 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0210 12:23:10.407550    5644 command_runner.go:130] > Feb 10 12:20:40 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 3.
	I0210 12:23:10.407550    5644 command_runner.go:130] > Feb 10 12:20:40 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0210 12:23:10.407550    5644 command_runner.go:130] > Feb 10 12:20:40 minikube systemd[1]: cri-docker.service: Start request repeated too quickly.
	I0210 12:23:10.407550    5644 command_runner.go:130] > Feb 10 12:20:40 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0210 12:23:10.407550    5644 command_runner.go:130] > Feb 10 12:20:40 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
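The three cri-docker.service failures above are an ordering race rather than a standalone fault: cri-dockerd cannot reach /var/run/docker.sock because dockerd itself only comes up at 12:21:19 below, and after the third failed start systemd's rate limiter trips ("Start request repeated too quickly") and stops retrying until the limit window passes or the unit is reset. A sketch of how that limiter state could be inspected from inside the VM (e.g. via minikube ssh); the property names are standard systemd ones, and the helper itself is hypothetical:

	// crid_limits.go - hypothetical probe of the systemd start-rate limiter.
	package main

	import (
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		// Result and NRestarts show what happened; StartLimitBurst and
		// StartLimitIntervalUSec are the knobs behind "repeated too quickly".
		out, err := exec.Command("systemctl", "show", "cri-docker.service",
			"-p", "Result", "-p", "NRestarts",
			"-p", "StartLimitBurst", "-p", "StartLimitIntervalUSec").CombinedOutput()
		if err != nil {
			log.Fatalf("systemctl show failed: %v\n%s", err, out)
		}
		fmt.Print(string(out))
		// A failed unit can be retried before the window expires with:
		//   systemctl reset-failed cri-docker.service && systemctl start cri-docker.service
	}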
	I0210 12:23:10.407550    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 systemd[1]: Starting Docker Application Container Engine...
	I0210 12:23:10.407550    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[652]: time="2025-02-10T12:21:19.226981799Z" level=info msg="Starting up"
	I0210 12:23:10.407550    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[652]: time="2025-02-10T12:21:19.228905904Z" level=info msg="containerd not running, starting managed containerd"
	I0210 12:23:10.407550    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[652]: time="2025-02-10T12:21:19.229983406Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=658
	I0210 12:23:10.407550    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.261668386Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	I0210 12:23:10.407550    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.289760856Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0210 12:23:10.407550    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.289873057Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0210 12:23:10.407550    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.289938357Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0210 12:23:10.407550    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.289955257Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0210 12:23:10.408090    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.290688059Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0210 12:23:10.408090    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.290855359Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0210 12:23:10.408179    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.291046360Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0210 12:23:10.408179    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.291150260Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0210 12:23:10.408243    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.291171360Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0210 12:23:10.408243    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.291182560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0210 12:23:10.408243    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.291676861Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0210 12:23:10.408307    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.292369263Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0210 12:23:10.408370    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.300517383Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0210 12:23:10.408370    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.300550484Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0210 12:23:10.408498    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.300790784Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0210 12:23:10.408498    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.300846284Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0210 12:23:10.408498    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.301486786Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0210 12:23:10.408498    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.301530786Z" level=info msg="metadata content store policy set" policy=shared
	I0210 12:23:10.408498    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.306800699Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0210 12:23:10.408498    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.306938800Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0210 12:23:10.409130    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.306962400Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0210 12:23:10.409130    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.306982400Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0210 12:23:10.409130    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.306998000Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0210 12:23:10.409130    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.307070900Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0210 12:23:10.409130    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.307354201Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0210 12:23:10.409130    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.307779102Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0210 12:23:10.409130    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.307803302Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0210 12:23:10.409130    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.307819902Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0210 12:23:10.409130    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.307835502Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0210 12:23:10.409130    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.307854902Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0210 12:23:10.409130    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.307868302Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0210 12:23:10.409130    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.307886902Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0210 12:23:10.409130    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.307903802Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0210 12:23:10.409130    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.307918302Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0210 12:23:10.409130    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.307933302Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0210 12:23:10.409130    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.307946902Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0210 12:23:10.409130    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.307973202Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0210 12:23:10.409130    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.307988502Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0210 12:23:10.409130    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308002102Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0210 12:23:10.409130    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308018302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0210 12:23:10.409130    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308031702Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0210 12:23:10.409130    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308046102Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0210 12:23:10.409130    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308058902Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0210 12:23:10.409130    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308073102Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0210 12:23:10.409130    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308088402Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0210 12:23:10.409130    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308111803Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0210 12:23:10.409130    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308139203Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0210 12:23:10.409130    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308154703Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0210 12:23:10.409130    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308168203Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0210 12:23:10.409130    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308185103Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0210 12:23:10.409130    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308206703Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0210 12:23:10.409130    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308220903Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0210 12:23:10.409130    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308233503Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0210 12:23:10.409130    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308287903Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0210 12:23:10.409130    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308326803Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0210 12:23:10.409130    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308340203Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0210 12:23:10.409130    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308354603Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0210 12:23:10.409130    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308366403Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0210 12:23:10.409130    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308381203Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0210 12:23:10.409130    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308392603Z" level=info msg="NRI interface is disabled by configuration."
	I0210 12:23:10.409130    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308672504Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0210 12:23:10.409130    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308811104Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0210 12:23:10.409130    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308872804Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0210 12:23:10.409130    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308911105Z" level=info msg="containerd successfully booted in 0.050730s"
	I0210 12:23:10.409130    5644 command_runner.go:130] > Feb 10 12:21:20 multinode-032400 dockerd[652]: time="2025-02-10T12:21:20.282476810Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0210 12:23:10.409130    5644 command_runner.go:130] > Feb 10 12:21:20 multinode-032400 dockerd[652]: time="2025-02-10T12:21:20.530993194Z" level=info msg="Loading containers: start."
	I0210 12:23:10.409130    5644 command_runner.go:130] > Feb 10 12:21:20 multinode-032400 dockerd[652]: time="2025-02-10T12:21:20.796529619Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0210 12:23:10.409130    5644 command_runner.go:130] > Feb 10 12:21:20 multinode-032400 dockerd[652]: time="2025-02-10T12:21:20.946848197Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0210 12:23:10.410122    5644 command_runner.go:130] > Feb 10 12:21:21 multinode-032400 dockerd[652]: time="2025-02-10T12:21:21.063713732Z" level=info msg="Loading containers: done."
	I0210 12:23:10.410122    5644 command_runner.go:130] > Feb 10 12:21:21 multinode-032400 dockerd[652]: time="2025-02-10T12:21:21.090121636Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	I0210 12:23:10.410122    5644 command_runner.go:130] > Feb 10 12:21:21 multinode-032400 dockerd[652]: time="2025-02-10T12:21:21.090236272Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	I0210 12:23:10.410122    5644 command_runner.go:130] > Feb 10 12:21:21 multinode-032400 dockerd[652]: time="2025-02-10T12:21:21.090266381Z" level=info msg="Docker daemon" commit=92a8393 containerd-snapshotter=false storage-driver=overlay2 version=27.4.0
	I0210 12:23:10.410122    5644 command_runner.go:130] > Feb 10 12:21:21 multinode-032400 dockerd[652]: time="2025-02-10T12:21:21.090811448Z" level=info msg="Daemon has completed initialization"
	I0210 12:23:10.410122    5644 command_runner.go:130] > Feb 10 12:21:21 multinode-032400 dockerd[652]: time="2025-02-10T12:21:21.131876651Z" level=info msg="API listen on /var/run/docker.sock"
	I0210 12:23:10.410122    5644 command_runner.go:130] > Feb 10 12:21:21 multinode-032400 dockerd[652]: time="2025-02-10T12:21:21.132103020Z" level=info msg="API listen on [::]:2376"
	I0210 12:23:10.410122    5644 command_runner.go:130] > Feb 10 12:21:21 multinode-032400 systemd[1]: Started Docker Application Container Engine.
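
The block above is one complete cold start: the managed containerd (dockerd[658]) probes its snapshotter plugins in order and skips all but overlayfs because /var/lib/docker sits on ext4, the ip6tables warning at 12:21:20 is expected on the minikube guest kernel (no ip6 nat table available to insmod), and dockerd then settles on the overlay2 graph driver before systemd marks the unit started. A quick way to confirm the selected storage driver from the Windows host, reusing the profile name from these logs (a hedged sketch, not part of the test run):

    out/minikube-windows-amd64.exe -p multinode-032400 ssh -- docker info --format "{{.Driver}}"
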
	I0210 12:23:10.410122    5644 command_runner.go:130] > Feb 10 12:21:45 multinode-032400 dockerd[652]: time="2025-02-10T12:21:45.024556788Z" level=info msg="Processing signal 'terminated'"
	I0210 12:23:10.410122    5644 command_runner.go:130] > Feb 10 12:21:45 multinode-032400 dockerd[652]: time="2025-02-10T12:21:45.027219616Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0210 12:23:10.410122    5644 command_runner.go:130] > Feb 10 12:21:45 multinode-032400 systemd[1]: Stopping Docker Application Container Engine...
	I0210 12:23:10.410122    5644 command_runner.go:130] > Feb 10 12:21:45 multinode-032400 dockerd[652]: time="2025-02-10T12:21:45.028493777Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0210 12:23:10.410122    5644 command_runner.go:130] > Feb 10 12:21:45 multinode-032400 dockerd[652]: time="2025-02-10T12:21:45.028923098Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0210 12:23:10.410122    5644 command_runner.go:130] > Feb 10 12:21:45 multinode-032400 dockerd[652]: time="2025-02-10T12:21:45.029499825Z" level=info msg="Daemon shutdown complete"
	I0210 12:23:10.410122    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 systemd[1]: docker.service: Deactivated successfully.
	I0210 12:23:10.410122    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 systemd[1]: Stopped Docker Application Container Engine.
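
The terminate/stop sequence at 12:21:45 is not an error: it is consistent with minikube's provisioner applying its generated Docker daemon configuration after first boot and bouncing the service, which is why an almost identical containerd/dockerd startup (pids 1101/1108) follows immediately below. If the restart ever needs auditing, the unit journal inside the VM shows both generations back to back (hypothetical invocation for this profile):

    out/minikube-windows-amd64.exe -p multinode-032400 ssh -- sudo journalctl -u docker --no-pager
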
	I0210 12:23:10.410122    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 systemd[1]: Starting Docker Application Container Engine...
	I0210 12:23:10.410122    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1101]: time="2025-02-10T12:21:46.084081094Z" level=info msg="Starting up"
	I0210 12:23:10.410122    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1101]: time="2025-02-10T12:21:46.084976538Z" level=info msg="containerd not running, starting managed containerd"
	I0210 12:23:10.410122    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1101]: time="2025-02-10T12:21:46.085890382Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1108
	I0210 12:23:10.410122    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.115367801Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	I0210 12:23:10.410122    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.141577962Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0210 12:23:10.410122    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.141694568Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0210 12:23:10.410122    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.141841575Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0210 12:23:10.410122    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.141861576Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0210 12:23:10.410122    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.141895578Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0210 12:23:10.410122    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.141908978Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0210 12:23:10.410122    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.142072686Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0210 12:23:10.410122    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.142222293Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0210 12:23:10.410122    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.142244195Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0210 12:23:10.410122    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.142261595Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0210 12:23:10.410122    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.142290097Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0210 12:23:10.410122    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.142407302Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0210 12:23:10.410122    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.145701161Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0210 12:23:10.410122    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.145822967Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0210 12:23:10.410122    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.145984775Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0210 12:23:10.410122    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.146081579Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0210 12:23:10.410122    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.146115481Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0210 12:23:10.410122    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.146134282Z" level=info msg="metadata content store policy set" policy=shared
	I0210 12:23:10.410122    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.146552002Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0210 12:23:10.410122    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.146601004Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0210 12:23:10.410122    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.146617705Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0210 12:23:10.411121    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.146633006Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0210 12:23:10.411121    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.146647807Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0210 12:23:10.411121    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.146697109Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0210 12:23:10.411121    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147110429Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0210 12:23:10.411121    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147324539Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0210 12:23:10.411121    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147423444Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0210 12:23:10.411121    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147441845Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0210 12:23:10.411121    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147456345Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0210 12:23:10.411121    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147470646Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0210 12:23:10.411121    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147499048Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0210 12:23:10.411121    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147516448Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0210 12:23:10.411121    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147532049Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0210 12:23:10.411121    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147546750Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0210 12:23:10.411121    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147559350Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0210 12:23:10.411121    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147573151Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0210 12:23:10.411121    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147593252Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0210 12:23:10.411121    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147608153Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0210 12:23:10.411121    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147634954Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0210 12:23:10.411121    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147654755Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0210 12:23:10.411121    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147668856Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0210 12:23:10.411121    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147683556Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0210 12:23:10.411121    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147697257Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0210 12:23:10.411121    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147710658Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0210 12:23:10.411121    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147724858Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0210 12:23:10.411121    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147802262Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0210 12:23:10.411121    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147821763Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0210 12:23:10.411121    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147834964Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0210 12:23:10.411121    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147859465Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0210 12:23:10.411121    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147878466Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0210 12:23:10.411121    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147900267Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0210 12:23:10.411121    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147914067Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0210 12:23:10.411121    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147927668Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0210 12:23:10.411121    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.148050374Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0210 12:23:10.411121    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.148087376Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0210 12:23:10.411121    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.148100476Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0210 12:23:10.412121    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.148113477Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0210 12:23:10.412121    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.148124578Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0210 12:23:10.412121    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.148138778Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0210 12:23:10.412121    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.148151679Z" level=info msg="NRI interface is disabled by configuration."
	I0210 12:23:10.412121    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.148991719Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0210 12:23:10.412121    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.149071923Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0210 12:23:10.412121    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.149146027Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0210 12:23:10.412121    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.149657651Z" level=info msg="containerd successfully booted in 0.035320s"
	I0210 12:23:10.412121    5644 command_runner.go:130] > Feb 10 12:21:47 multinode-032400 dockerd[1101]: time="2025-02-10T12:21:47.124814897Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0210 12:23:10.412121    5644 command_runner.go:130] > Feb 10 12:21:47 multinode-032400 dockerd[1101]: time="2025-02-10T12:21:47.155572178Z" level=info msg="Loading containers: start."
	I0210 12:23:10.412121    5644 command_runner.go:130] > Feb 10 12:21:47 multinode-032400 dockerd[1101]: time="2025-02-10T12:21:47.380096187Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0210 12:23:10.412121    5644 command_runner.go:130] > Feb 10 12:21:47 multinode-032400 dockerd[1101]: time="2025-02-10T12:21:47.494116276Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0210 12:23:10.412121    5644 command_runner.go:130] > Feb 10 12:21:47 multinode-032400 dockerd[1101]: time="2025-02-10T12:21:47.609502830Z" level=info msg="Loading containers: done."
	I0210 12:23:10.412121    5644 command_runner.go:130] > Feb 10 12:21:47 multinode-032400 dockerd[1101]: time="2025-02-10T12:21:47.634336526Z" level=info msg="Docker daemon" commit=92a8393 containerd-snapshotter=false storage-driver=overlay2 version=27.4.0
	I0210 12:23:10.412121    5644 command_runner.go:130] > Feb 10 12:21:47 multinode-032400 dockerd[1101]: time="2025-02-10T12:21:47.634493434Z" level=info msg="Daemon has completed initialization"
	I0210 12:23:10.412121    5644 command_runner.go:130] > Feb 10 12:21:47 multinode-032400 dockerd[1101]: time="2025-02-10T12:21:47.668508371Z" level=info msg="API listen on /var/run/docker.sock"
	I0210 12:23:10.412121    5644 command_runner.go:130] > Feb 10 12:21:47 multinode-032400 dockerd[1101]: time="2025-02-10T12:21:47.668715581Z" level=info msg="API listen on [::]:2376"
	I0210 12:23:10.412121    5644 command_runner.go:130] > Feb 10 12:21:47 multinode-032400 systemd[1]: Started Docker Application Container Engine.
	I0210 12:23:10.412121    5644 command_runner.go:130] > Feb 10 12:21:48 multinode-032400 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0210 12:23:10.412121    5644 command_runner.go:130] > Feb 10 12:21:48 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:21:48Z" level=info msg="Starting cri-dockerd 0.3.15 (c1c566e)"
	I0210 12:23:10.412121    5644 command_runner.go:130] > Feb 10 12:21:48 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:21:48Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0210 12:23:10.412121    5644 command_runner.go:130] > Feb 10 12:21:48 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:21:48Z" level=info msg="Start docker client with request timeout 0s"
	I0210 12:23:10.412121    5644 command_runner.go:130] > Feb 10 12:21:48 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:21:48Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I0210 12:23:10.412121    5644 command_runner.go:130] > Feb 10 12:21:48 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:21:48Z" level=info msg="Loaded network plugin cni"
	I0210 12:23:10.412121    5644 command_runner.go:130] > Feb 10 12:21:48 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:21:48Z" level=info msg="Docker cri networking managed by network plugin cni"
	I0210 12:23:10.412121    5644 command_runner.go:130] > Feb 10 12:21:48 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:21:48Z" level=info msg="Setting cgroupDriver cgroupfs"
	I0210 12:23:10.412121    5644 command_runner.go:130] > Feb 10 12:21:48 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:21:48Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I0210 12:23:10.412121    5644 command_runner.go:130] > Feb 10 12:21:48 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:21:48Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I0210 12:23:10.412121    5644 command_runner.go:130] > Feb 10 12:21:48 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:21:48Z" level=info msg="Start cri-dockerd grpc backend"
	I0210 12:23:10.412121    5644 command_runner.go:130] > Feb 10 12:21:48 multinode-032400 systemd[1]: Started CRI Interface for Docker Application Container Engine.
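
Here cri-dockerd comes up with the settings kubelet will rely on: the docker.sock endpoint, hairpin-veth mode, cgroupfs as the cgroup driver, and CNI as the network plugin; note the PodCidr in the received runtime config is still empty at this point and is only delivered later, at 12:21:58. To inspect the unit after the fact, assuming minikube's usual cri-docker unit name (illustrative only):

    out/minikube-windows-amd64.exe -p multinode-032400 ssh -- sudo systemctl status cri-docker --no-pager
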
	I0210 12:23:10.412121    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:21:53Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-58667487b6-8shfg_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"ed034f1578c961d966543fd6901ac487893b2d9c55235293f852b6eba2ffef59\""
	I0210 12:23:10.412121    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:21:53Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-668d6bf9bc-w8rr9_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"794995bca6b5b46f3dd1b76ae2e6fa45046bd172772508f61b870824d72e297b\""
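
These two "Failed to read pod IP" messages are a restart artifact rather than a failure: the status hook is running against container IDs (ed034f…, 794995…) that belonged to pre-restart sandboxes whose network namespaces no longer exist, and the affected pods are recreated moments later in this same log. One way to confirm the pods came back, using minikube's bundled kubectl (illustrative, not part of the test):

    out/minikube-windows-amd64.exe -p multinode-032400 kubectl -- get pods -A
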
	I0210 12:23:10.412121    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:53.688319673Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0210 12:23:10.412121    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:53.688604987Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0210 12:23:10.412121    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:53.688649189Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:10.412121    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:53.689336722Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:10.412121    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:53.785048930Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0210 12:23:10.412121    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:53.785211338Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0210 12:23:10.412121    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:53.785249040Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:10.412121    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:53.787201934Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:10.412121    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:21:53Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8059b20f65945591b4ecc2d3aa8b6e119909c5a5c01922ce471ced5e88f22c37/resolv.conf as [nameserver 172.29.128.1]"
	I0210 12:23:10.412121    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:53.859964137Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0210 12:23:10.412121    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:53.860819978Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0210 12:23:10.412121    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:53.861045089Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:10.413122    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:53.861827326Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:10.413122    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:53.866236838Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0210 12:23:10.413122    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:53.866716362Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0210 12:23:10.413122    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:53.867048178Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:10.413122    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:53.870617949Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:10.413122    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:21:53Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/016ad4d720680495a67c18e1390ee8683611cb3b95ee6ded4cb744a3ca3655d5/resolv.conf as [nameserver 172.29.128.1]"
	I0210 12:23:10.413122    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:21:54Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ae5696c38864ac99a03d829d566b6a832f69523032ff0af02300ad95789380ce/resolv.conf as [nameserver 172.29.128.1]"
	I0210 12:23:10.413122    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:21:54Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/9c3e574a334980f77de3f0fd8bd1af8a3597c32a3c5f9d94fec925b6f3c76d4e/resolv.conf as [nameserver 172.29.128.1]"
	I0210 12:23:10.413122    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:54.054858919Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0210 12:23:10.413122    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:54.055041728Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0210 12:23:10.413122    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:54.055266639Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:10.413122    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:54.055571653Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:10.413122    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:54.351555902Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0210 12:23:10.413122    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:54.351618605Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0210 12:23:10.413122    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:54.351631706Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:10.413122    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:54.351796314Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:10.413122    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:54.356626447Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0210 12:23:10.413122    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:54.356728951Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0210 12:23:10.413122    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:54.356756153Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:10.413122    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:54.357270278Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:10.413122    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:54.400696468Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0210 12:23:10.413122    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:54.400993282Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0210 12:23:10.413122    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:54.401148890Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:10.413122    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:54.401585911Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:10.413122    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:21:58Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
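
At 12:21:58 kubelet pushes the runtime config down to cri-dockerd and the PodCidr changes from empty to 10.244.0.0/24; only from this point can the CNI plugin hand out pod IPs on this node. The value should match what the API server recorded on the node object (a hedged check, not part of the test run):

    out/minikube-windows-amd64.exe -p multinode-032400 kubectl -- get node multinode-032400 -o jsonpath="{.spec.podCIDR}"
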
	I0210 12:23:10.413122    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:59.586724531Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0210 12:23:10.413122    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:59.586851637Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0210 12:23:10.413122    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:59.586897839Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:10.413122    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:59.587096549Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:10.413122    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:59.622779367Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0210 12:23:10.413122    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:59.622857870Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0210 12:23:10.413122    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:59.622884072Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:10.413122    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:59.623098482Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:10.413122    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:59.638867841Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0210 12:23:10.413122    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:59.639329463Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0210 12:23:10.413122    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:59.639489271Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:10.413122    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:59.639867989Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:10.414123    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:21:59Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b9afdceca416df5c16e84b3e0c78f25ca1fa77413c28fe48e1fe1aceabb91c44/resolv.conf as [nameserver 172.29.128.1]"
	I0210 12:23:10.414123    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:21:59Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/bd998e6ebeb39ea743489a0e4d48c282d6e8da289cd8341c4c5099d9836e6f73/resolv.conf as [nameserver 172.29.128.1]"
	I0210 12:23:10.414123    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:59.937150501Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0210 12:23:10.414123    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:59.937256006Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0210 12:23:10.414123    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:59.937275107Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:10.414123    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:59.937381912Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:10.414123    5644 command_runner.go:130] > Feb 10 12:22:00 multinode-032400 dockerd[1108]: time="2025-02-10T12:22:00.025525655Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0210 12:23:10.414123    5644 command_runner.go:130] > Feb 10 12:22:00 multinode-032400 dockerd[1108]: time="2025-02-10T12:22:00.025702264Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0210 12:23:10.414123    5644 command_runner.go:130] > Feb 10 12:22:00 multinode-032400 dockerd[1108]: time="2025-02-10T12:22:00.025767267Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:10.414123    5644 command_runner.go:130] > Feb 10 12:22:00 multinode-032400 dockerd[1108]: time="2025-02-10T12:22:00.026050381Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:10.414123    5644 command_runner.go:130] > Feb 10 12:22:00 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:22:00Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e5b54589cf0f1effd7254987c6ce12359f402596b5494fa5c8f0bf296c219b89/resolv.conf as [nameserver 172.29.128.1]"
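
Each "Will attempt to re-write config file … resolv.conf" line is cri-dockerd pointing a freshly created sandbox at nameserver 172.29.128.1, the host-side Hyper-V address this VM uses for DNS. If name resolution inside a pod were in doubt, the rewritten file can be read directly, e.g. for the busybox pod named earlier in this log (illustrative sketch):

    out/minikube-windows-amd64.exe -p multinode-032400 kubectl -- exec busybox-58667487b6-8shfg -- cat /etc/resolv.conf
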
	I0210 12:23:10.414123    5644 command_runner.go:130] > Feb 10 12:22:00 multinode-032400 dockerd[1108]: time="2025-02-10T12:22:00.385763898Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0210 12:23:10.414123    5644 command_runner.go:130] > Feb 10 12:22:00 multinode-032400 dockerd[1108]: time="2025-02-10T12:22:00.385836401Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0210 12:23:10.414123    5644 command_runner.go:130] > Feb 10 12:22:00 multinode-032400 dockerd[1108]: time="2025-02-10T12:22:00.385859502Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:10.414123    5644 command_runner.go:130] > Feb 10 12:22:00 multinode-032400 dockerd[1108]: time="2025-02-10T12:22:00.385961307Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:10.414123    5644 command_runner.go:130] > Feb 10 12:22:30 multinode-032400 dockerd[1101]: time="2025-02-10T12:22:30.686630853Z" level=info msg="ignoring event" container=e57ea4c7f300b864e801bda292b196e3a270d6bd3eb9cfac6bf5f66a858f9f7c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0210 12:23:10.414123    5644 command_runner.go:130] > Feb 10 12:22:30 multinode-032400 dockerd[1108]: time="2025-02-10T12:22:30.687686798Z" level=info msg="shim disconnected" id=e57ea4c7f300b864e801bda292b196e3a270d6bd3eb9cfac6bf5f66a858f9f7c namespace=moby
	I0210 12:23:10.414123    5644 command_runner.go:130] > Feb 10 12:22:30 multinode-032400 dockerd[1108]: time="2025-02-10T12:22:30.687806803Z" level=warning msg="cleaning up after shim disconnected" id=e57ea4c7f300b864e801bda292b196e3a270d6bd3eb9cfac6bf5f66a858f9f7c namespace=moby
	I0210 12:23:10.414123    5644 command_runner.go:130] > Feb 10 12:22:30 multinode-032400 dockerd[1108]: time="2025-02-10T12:22:30.687819404Z" level=info msg="cleaning up dead shim" namespace=moby
	I0210 12:23:10.414123    5644 command_runner.go:130] > Feb 10 12:22:44 multinode-032400 dockerd[1108]: time="2025-02-10T12:22:44.147917374Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0210 12:23:10.414123    5644 command_runner.go:130] > Feb 10 12:22:44 multinode-032400 dockerd[1108]: time="2025-02-10T12:22:44.148327693Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0210 12:23:10.414123    5644 command_runner.go:130] > Feb 10 12:22:44 multinode-032400 dockerd[1108]: time="2025-02-10T12:22:44.148398496Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:10.414123    5644 command_runner.go:130] > Feb 10 12:22:44 multinode-032400 dockerd[1108]: time="2025-02-10T12:22:44.150441890Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:10.414123    5644 command_runner.go:130] > Feb 10 12:23:03 multinode-032400 dockerd[1108]: time="2025-02-10T12:23:03.123856000Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0210 12:23:10.414123    5644 command_runner.go:130] > Feb 10 12:23:03 multinode-032400 dockerd[1108]: time="2025-02-10T12:23:03.127018550Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0210 12:23:10.414123    5644 command_runner.go:130] > Feb 10 12:23:03 multinode-032400 dockerd[1108]: time="2025-02-10T12:23:03.127103354Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:10.414123    5644 command_runner.go:130] > Feb 10 12:23:03 multinode-032400 dockerd[1108]: time="2025-02-10T12:23:03.127439770Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:10.414123    5644 command_runner.go:130] > Feb 10 12:23:03 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:23:03Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e58006549b603113870cd02660dd94559d10ecb0913a1bdd2c81910c3d60a706/resolv.conf as [nameserver 172.29.128.1]"
	I0210 12:23:10.414123    5644 command_runner.go:130] > Feb 10 12:23:03 multinode-032400 dockerd[1108]: time="2025-02-10T12:23:03.402609649Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0210 12:23:10.414123    5644 command_runner.go:130] > Feb 10 12:23:03 multinode-032400 dockerd[1108]: time="2025-02-10T12:23:03.402766755Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0210 12:23:10.414123    5644 command_runner.go:130] > Feb 10 12:23:03 multinode-032400 dockerd[1108]: time="2025-02-10T12:23:03.402783356Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:10.414123    5644 command_runner.go:130] > Feb 10 12:23:03 multinode-032400 dockerd[1108]: time="2025-02-10T12:23:03.402879760Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:10.414123    5644 command_runner.go:130] > Feb 10 12:23:03 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:23:03Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0f8bbdd2fed29af0f92968c554c574d411a6dcf8a8d801926012379ff2a258af/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	I0210 12:23:10.414123    5644 command_runner.go:130] > Feb 10 12:23:03 multinode-032400 dockerd[1108]: time="2025-02-10T12:23:03.617663803Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0210 12:23:10.414123    5644 command_runner.go:130] > Feb 10 12:23:03 multinode-032400 dockerd[1108]: time="2025-02-10T12:23:03.618733746Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0210 12:23:10.414123    5644 command_runner.go:130] > Feb 10 12:23:03 multinode-032400 dockerd[1108]: time="2025-02-10T12:23:03.619050158Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:10.414123    5644 command_runner.go:130] > Feb 10 12:23:03 multinode-032400 dockerd[1108]: time="2025-02-10T12:23:03.619308269Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:10.414123    5644 command_runner.go:130] > Feb 10 12:23:03 multinode-032400 dockerd[1108]: time="2025-02-10T12:23:03.840758177Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0210 12:23:10.414123    5644 command_runner.go:130] > Feb 10 12:23:03 multinode-032400 dockerd[1108]: time="2025-02-10T12:23:03.840943685Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0210 12:23:10.414123    5644 command_runner.go:130] > Feb 10 12:23:03 multinode-032400 dockerd[1108]: time="2025-02-10T12:23:03.840973286Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:10.414123    5644 command_runner.go:130] > Feb 10 12:23:03 multinode-032400 dockerd[1108]: time="2025-02-10T12:23:03.848920902Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:10.443126    5644 logs.go:123] Gathering logs for etcd [2c0b97381825] ...
	I0210 12:23:10.443126    5644 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c0b97381825"
	I0210 12:23:10.472132    5644 command_runner.go:130] ! {"level":"warn","ts":"2025-02-10T12:21:54.704341Z","caller":"embed/config.go:689","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0210 12:23:10.472344    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.704447Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://172.29.129.181:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://172.29.129.181:2380","--initial-cluster=multinode-032400=https://172.29.129.181:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://172.29.129.181:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://172.29.129.181:2380","--name=multinode-032400","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","-
-proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	I0210 12:23:10.472344    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.704520Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I0210 12:23:10.472344    5644 command_runner.go:130] ! {"level":"warn","ts":"2025-02-10T12:21:54.704892Z","caller":"embed/config.go:689","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0210 12:23:10.472344    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.704933Z","caller":"embed/etcd.go:128","msg":"configuring peer listeners","listen-peer-urls":["https://172.29.129.181:2380"]}
	I0210 12:23:10.472344    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.704972Z","caller":"embed/etcd.go:497","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0210 12:23:10.472344    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.708617Z","caller":"embed/etcd.go:136","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://172.29.129.181:2379"]}
	I0210 12:23:10.472344    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.709796Z","caller":"embed/etcd.go:311","msg":"starting an etcd server","etcd-version":"3.5.16","git-sha":"f20bbad","go-version":"go1.22.7","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"multinode-032400","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://172.29.129.181:2380"],"listen-peer-urls":["https://172.29.129.181:2380"],"advertise-client-urls":["https://172.29.129.181:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.29.129.181:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initi
al-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	I0210 12:23:10.472344    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.729354Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"16.974017ms"}
	I0210 12:23:10.472344    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.755049Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	I0210 12:23:10.472344    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.785036Z","caller":"etcdserver/raft.go:540","msg":"restarting local member","cluster-id":"939f48ba2d0ec869","local-member-id":"9ecc865dcee1fe8f","commit-index":2031}
	I0210 12:23:10.472344    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.786308Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9ecc865dcee1fe8f switched to configuration voters=()"}
	I0210 12:23:10.472344    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.786538Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9ecc865dcee1fe8f became follower at term 2"}
	I0210 12:23:10.472344    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.786684Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 9ecc865dcee1fe8f [peers: [], term: 2, commit: 2031, applied: 0, lastindex: 2031, lastterm: 2]"}
	I0210 12:23:10.472344    5644 command_runner.go:130] ! {"level":"warn","ts":"2025-02-10T12:21:54.799505Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	I0210 12:23:10.472344    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.805220Z","caller":"mvcc/kvstore.go:346","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":1385}
	I0210 12:23:10.472344    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.819723Z","caller":"mvcc/kvstore.go:423","msg":"kvstore restored","current-rev":1757}
	I0210 12:23:10.472344    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.831867Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I0210 12:23:10.472882    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.839898Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"9ecc865dcee1fe8f","timeout":"7s"}
	I0210 12:23:10.472882    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.841678Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"9ecc865dcee1fe8f"}
	I0210 12:23:10.472882    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.841933Z","caller":"etcdserver/server.go:873","msg":"starting etcd server","local-member-id":"9ecc865dcee1fe8f","local-server-version":"3.5.16","cluster-version":"to_be_decided"}
	I0210 12:23:10.472973    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.842749Z","caller":"etcdserver/server.go:773","msg":"starting initial election tick advance","election-ticks":10}
	I0210 12:23:10.473002    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.844230Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	I0210 12:23:10.473002    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.846545Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0210 12:23:10.473057    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.847568Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"9ecc865dcee1fe8f","initial-advertise-peer-urls":["https://172.29.129.181:2380"],"listen-peer-urls":["https://172.29.129.181:2380"],"advertise-client-urls":["https://172.29.129.181:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.29.129.181:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I0210 12:23:10.473121    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.848293Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I0210 12:23:10.473121    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.848857Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I0210 12:23:10.473121    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.848872Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I0210 12:23:10.473190    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.849451Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I0210 12:23:10.473190    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.848623Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9ecc865dcee1fe8f switched to configuration voters=(11442668490702585487)"}
	I0210 12:23:10.473253    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.849568Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"939f48ba2d0ec869","local-member-id":"9ecc865dcee1fe8f","added-peer-id":"9ecc865dcee1fe8f","added-peer-peer-urls":["https://172.29.136.201:2380"]}
	I0210 12:23:10.473253    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.849668Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"939f48ba2d0ec869","local-member-id":"9ecc865dcee1fe8f","cluster-version":"3.5"}
	I0210 12:23:10.473253    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.849697Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	I0210 12:23:10.473331    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.848659Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"172.29.129.181:2380"}
	I0210 12:23:10.473389    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.850838Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"172.29.129.181:2380"}
	I0210 12:23:10.473389    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:56.088224Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9ecc865dcee1fe8f is starting a new election at term 2"}
	I0210 12:23:10.473389    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:56.088502Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9ecc865dcee1fe8f became pre-candidate at term 2"}
	I0210 12:23:10.473389    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:56.088614Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9ecc865dcee1fe8f received MsgPreVoteResp from 9ecc865dcee1fe8f at term 2"}
	I0210 12:23:10.473470    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:56.088706Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9ecc865dcee1fe8f became candidate at term 3"}
	I0210 12:23:10.473506    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:56.088790Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9ecc865dcee1fe8f received MsgVoteResp from 9ecc865dcee1fe8f at term 3"}
	I0210 12:23:10.473506    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:56.088924Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9ecc865dcee1fe8f became leader at term 3"}
	I0210 12:23:10.473570    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:56.089002Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9ecc865dcee1fe8f elected leader 9ecc865dcee1fe8f at term 3"}
	I0210 12:23:10.473570    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:56.095452Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"9ecc865dcee1fe8f","local-member-attributes":"{Name:multinode-032400 ClientURLs:[https://172.29.129.181:2379]}","request-path":"/0/members/9ecc865dcee1fe8f/attributes","cluster-id":"939f48ba2d0ec869","publish-timeout":"7s"}
	I0210 12:23:10.473638    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:56.095630Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0210 12:23:10.473638    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:56.098215Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I0210 12:23:10.473700    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:56.098453Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I0210 12:23:10.473700    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:56.095689Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0210 12:23:10.473700    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:56.109624Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	I0210 12:23:10.473700    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:56.115942Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	I0210 12:23:10.473770    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:56.120127Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	I0210 12:23:10.473770    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:56.126374Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.29.129.181:2379"}
	I0210 12:23:10.481954    5644 logs.go:123] Gathering logs for coredns [9240ce80f94c] ...
	I0210 12:23:10.481954    5644 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9240ce80f94c"
	I0210 12:23:10.511959    5644 command_runner.go:130] > .:53
	I0210 12:23:10.511959    5644 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = fef4eccd4948b7d76fb3b866fead119cfbf3f792b566bc1c23dd6fceadf676c5da93bdd173179090b2c74bc875a74fc76ef5e51dcb6a910d6a8b189470a4fe6b
	I0210 12:23:10.511959    5644 command_runner.go:130] > CoreDNS-1.11.3
	I0210 12:23:10.511959    5644 command_runner.go:130] > linux/amd64, go1.21.11, a6338e9
	I0210 12:23:10.511959    5644 command_runner.go:130] > [INFO] 127.0.0.1:39941 - 15725 "HINFO IN 3724374125237206977.4573805755432620257. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.053165792s
	I0210 12:23:10.512310    5644 logs.go:123] Gathering logs for kube-proxy [6640b4e3d696] ...
	I0210 12:23:10.512381    5644 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6640b4e3d696"
	I0210 12:23:10.545893    5644 command_runner.go:130] ! I0210 12:22:00.934266       1 server_linux.go:66] "Using iptables proxy"
	I0210 12:23:10.546186    5644 command_runner.go:130] ! E0210 12:22:01.015806       1 proxier.go:733] "Error cleaning up nftables rules" err=<
	I0210 12:23:10.546186    5644 command_runner.go:130] ! 	could not run nftables command: /dev/stdin:1:1-24: Error: Could not process rule: Operation not supported
	I0210 12:23:10.546186    5644 command_runner.go:130] ! 	add table ip kube-proxy
	I0210 12:23:10.546186    5644 command_runner.go:130] ! 	^^^^^^^^^^^^^^^^^^^^^^^^
	I0210 12:23:10.546186    5644 command_runner.go:130] !  >
	I0210 12:23:10.546263    5644 command_runner.go:130] ! E0210 12:22:01.068390       1 proxier.go:733] "Error cleaning up nftables rules" err=<
	I0210 12:23:10.546263    5644 command_runner.go:130] ! 	could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
	I0210 12:23:10.546308    5644 command_runner.go:130] ! 	add table ip6 kube-proxy
	I0210 12:23:10.546308    5644 command_runner.go:130] ! 	^^^^^^^^^^^^^^^^^^^^^^^^^
	I0210 12:23:10.546308    5644 command_runner.go:130] !  >
	I0210 12:23:10.546308    5644 command_runner.go:130] ! I0210 12:22:01.115566       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["172.29.129.181"]
	I0210 12:23:10.546308    5644 command_runner.go:130] ! E0210 12:22:01.115738       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0210 12:23:10.546389    5644 command_runner.go:130] ! I0210 12:22:01.217636       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0210 12:23:10.546419    5644 command_runner.go:130] ! I0210 12:22:01.217732       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0210 12:23:10.546419    5644 command_runner.go:130] ! I0210 12:22:01.217759       1 server_linux.go:170] "Using iptables Proxier"
	I0210 12:23:10.546419    5644 command_runner.go:130] ! I0210 12:22:01.222523       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0210 12:23:10.546493    5644 command_runner.go:130] ! I0210 12:22:01.224100       1 server.go:497] "Version info" version="v1.32.1"
	I0210 12:23:10.546493    5644 command_runner.go:130] ! I0210 12:22:01.224411       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0210 12:23:10.546522    5644 command_runner.go:130] ! I0210 12:22:01.230640       1 config.go:199] "Starting service config controller"
	I0210 12:23:10.546522    5644 command_runner.go:130] ! I0210 12:22:01.232948       1 config.go:105] "Starting endpoint slice config controller"
	I0210 12:23:10.546522    5644 command_runner.go:130] ! I0210 12:22:01.233483       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0210 12:23:10.546592    5644 command_runner.go:130] ! I0210 12:22:01.233538       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0210 12:23:10.546623    5644 command_runner.go:130] ! I0210 12:22:01.243389       1 config.go:329] "Starting node config controller"
	I0210 12:23:10.546623    5644 command_runner.go:130] ! I0210 12:22:01.243415       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0210 12:23:10.546623    5644 command_runner.go:130] ! I0210 12:22:01.335845       1 shared_informer.go:320] Caches are synced for service config
	I0210 12:23:10.546686    5644 command_runner.go:130] ! I0210 12:22:01.335895       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0210 12:23:10.546686    5644 command_runner.go:130] ! I0210 12:22:01.345506       1 shared_informer.go:320] Caches are synced for node config
	I0210 12:23:10.549890    5644 logs.go:123] Gathering logs for kube-controller-manager [9408ce83d7d3] ...
	I0210 12:23:10.549890    5644 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9408ce83d7d3"
	I0210 12:23:10.577900    5644 command_runner.go:130] ! I0210 11:58:59.087911       1 serving.go:386] Generated self-signed cert in-memory
	I0210 12:23:10.578025    5644 command_runner.go:130] ! I0210 11:59:00.079684       1 controllermanager.go:185] "Starting" version="v1.32.1"
	I0210 12:23:10.578025    5644 command_runner.go:130] ! I0210 11:59:00.079828       1 controllermanager.go:187] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0210 12:23:10.578025    5644 command_runner.go:130] ! I0210 11:59:00.082257       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0210 12:23:10.578107    5644 command_runner.go:130] ! I0210 11:59:00.082445       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0210 12:23:10.578107    5644 command_runner.go:130] ! I0210 11:59:00.082714       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0210 12:23:10.578107    5644 command_runner.go:130] ! I0210 11:59:00.083168       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0210 12:23:10.578107    5644 command_runner.go:130] ! I0210 11:59:07.525093       1 controllermanager.go:765] "Started controller" controller="serviceaccount-token-controller"
	I0210 12:23:10.578179    5644 command_runner.go:130] ! I0210 11:59:07.525455       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0210 12:23:10.578179    5644 command_runner.go:130] ! I0210 11:59:07.550577       1 controllermanager.go:765] "Started controller" controller="ttl-controller"
	I0210 12:23:10.578239    5644 command_runner.go:130] ! I0210 11:59:07.550894       1 ttl_controller.go:127] "Starting TTL controller" logger="ttl-controller"
	I0210 12:23:10.578263    5644 command_runner.go:130] ! I0210 11:59:07.550923       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0210 12:23:10.578263    5644 command_runner.go:130] ! I0210 11:59:07.575286       1 controllermanager.go:765] "Started controller" controller="token-cleaner-controller"
	I0210 12:23:10.578263    5644 command_runner.go:130] ! I0210 11:59:07.575386       1 tokencleaner.go:117] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0210 12:23:10.578315    5644 command_runner.go:130] ! I0210 11:59:07.575519       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0210 12:23:10.578367    5644 command_runner.go:130] ! I0210 11:59:07.575529       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0210 12:23:10.578395    5644 command_runner.go:130] ! I0210 11:59:07.608411       1 controllermanager.go:765] "Started controller" controller="ephemeral-volume-controller"
	I0210 12:23:10.578395    5644 command_runner.go:130] ! I0210 11:59:07.608435       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0210 12:23:10.579100    5644 command_runner.go:130] ! I0210 11:59:07.608574       1 controller.go:173] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0210 12:23:10.579100    5644 command_runner.go:130] ! I0210 11:59:07.608594       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0210 12:23:10.579100    5644 command_runner.go:130] ! I0210 11:59:07.626624       1 shared_informer.go:320] Caches are synced for tokens
	I0210 12:23:10.579100    5644 command_runner.go:130] ! I0210 11:59:07.632106       1 controllermanager.go:765] "Started controller" controller="job-controller"
	I0210 12:23:10.579175    5644 command_runner.go:130] ! I0210 11:59:07.632319       1 job_controller.go:243] "Starting job controller" logger="job-controller"
	I0210 12:23:10.579175    5644 command_runner.go:130] ! I0210 11:59:07.632332       1 shared_informer.go:313] Waiting for caches to sync for job
	I0210 12:23:10.579230    5644 command_runner.go:130] ! I0210 11:59:07.694202       1 controllermanager.go:765] "Started controller" controller="cronjob-controller"
	I0210 12:23:10.579230    5644 command_runner.go:130] ! I0210 11:59:07.694994       1 cronjob_controllerv2.go:145] "Starting cronjob controller v2" logger="cronjob-controller"
	I0210 12:23:10.579257    5644 command_runner.go:130] ! I0210 11:59:07.697650       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0210 12:23:10.579257    5644 command_runner.go:130] ! I0210 11:59:07.765406       1 controllermanager.go:765] "Started controller" controller="deployment-controller"
	I0210 12:23:10.579257    5644 command_runner.go:130] ! I0210 11:59:07.765979       1 deployment_controller.go:173] "Starting controller" logger="deployment-controller" controller="deployment"
	I0210 12:23:10.579257    5644 command_runner.go:130] ! I0210 11:59:07.765997       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0210 12:23:10.579322    5644 command_runner.go:130] ! I0210 11:59:07.782342       1 controllermanager.go:765] "Started controller" controller="replicaset-controller"
	I0210 12:23:10.579346    5644 command_runner.go:130] ! I0210 11:59:07.782670       1 replica_set.go:217] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0210 12:23:10.579346    5644 command_runner.go:130] ! I0210 11:59:07.782685       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0210 12:23:10.579346    5644 command_runner.go:130] ! I0210 11:59:07.850466       1 controllermanager.go:765] "Started controller" controller="disruption-controller"
	I0210 12:23:10.579406    5644 command_runner.go:130] ! I0210 11:59:07.850651       1 controllermanager.go:723] "Skipping a cloud provider controller" controller="node-route-controller"
	I0210 12:23:10.579406    5644 command_runner.go:130] ! I0210 11:59:07.850629       1 disruption.go:452] "Sending events to api server." logger="disruption-controller"
	I0210 12:23:10.579406    5644 command_runner.go:130] ! I0210 11:59:07.850833       1 disruption.go:463] "Starting disruption controller" logger="disruption-controller"
	I0210 12:23:10.579406    5644 command_runner.go:130] ! I0210 11:59:07.850844       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0210 12:23:10.579473    5644 command_runner.go:130] ! I0210 11:59:07.880892       1 controllermanager.go:765] "Started controller" controller="persistentvolume-binder-controller"
	I0210 12:23:10.579473    5644 command_runner.go:130] ! I0210 11:59:07.881116       1 pv_controller_base.go:308] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0210 12:23:10.579473    5644 command_runner.go:130] ! I0210 11:59:07.881129       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0210 12:23:10.579473    5644 command_runner.go:130] ! I0210 11:59:07.930262       1 controllermanager.go:765] "Started controller" controller="clusterrole-aggregation-controller"
	I0210 12:23:10.579534    5644 command_runner.go:130] ! I0210 11:59:07.930372       1 clusterroleaggregation_controller.go:194] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0210 12:23:10.579534    5644 command_runner.go:130] ! I0210 11:59:07.930897       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0210 12:23:10.579534    5644 command_runner.go:130] ! I0210 11:59:07.945659       1 controllermanager.go:765] "Started controller" controller="pod-garbage-collector-controller"
	I0210 12:23:10.579593    5644 command_runner.go:130] ! I0210 11:59:07.946579       1 gc_controller.go:99] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0210 12:23:10.579593    5644 command_runner.go:130] ! I0210 11:59:07.946751       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0210 12:23:10.579593    5644 command_runner.go:130] ! I0210 11:59:07.997690       1 controllermanager.go:765] "Started controller" controller="namespace-controller"
	I0210 12:23:10.579653    5644 command_runner.go:130] ! I0210 11:59:07.998189       1 controllermanager.go:723] "Skipping a cloud provider controller" controller="cloud-node-lifecycle-controller"
	I0210 12:23:10.579653    5644 command_runner.go:130] ! I0210 11:59:07.997759       1 namespace_controller.go:202] "Starting namespace controller" logger="namespace-controller"
	I0210 12:23:10.579653    5644 command_runner.go:130] ! I0210 11:59:07.998323       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0210 12:23:10.579653    5644 command_runner.go:130] ! I0210 11:59:08.135040       1 controllermanager.go:765] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0210 12:23:10.579653    5644 command_runner.go:130] ! I0210 11:59:08.135118       1 pvc_protection_controller.go:168] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0210 12:23:10.579653    5644 command_runner.go:130] ! I0210 11:59:08.135130       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0210 12:23:10.579749    5644 command_runner.go:130] ! I0210 11:59:08.290937       1 controllermanager.go:765] "Started controller" controller="persistentvolume-protection-controller"
	I0210 12:23:10.579749    5644 command_runner.go:130] ! I0210 11:59:08.291080       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="kube-apiserver-serving-clustertrustbundle-publisher-controller" requiredFeatureGates=["ClusterTrustBundle"]
	I0210 12:23:10.579749    5644 command_runner.go:130] ! I0210 11:59:08.293569       1 pv_protection_controller.go:81] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0210 12:23:10.579815    5644 command_runner.go:130] ! I0210 11:59:08.293594       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0210 12:23:10.579815    5644 command_runner.go:130] ! I0210 11:59:08.435030       1 controllermanager.go:765] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0210 12:23:10.579815    5644 command_runner.go:130] ! I0210 11:59:08.435146       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0210 12:23:10.579815    5644 command_runner.go:130] ! I0210 11:59:08.435984       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0210 12:23:10.579875    5644 command_runner.go:130] ! I0210 11:59:08.742172       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0210 12:23:10.579875    5644 command_runner.go:130] ! I0210 11:59:08.742257       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0210 12:23:10.579875    5644 command_runner.go:130] ! I0210 11:59:08.742274       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0210 12:23:10.579935    5644 command_runner.go:130] ! I0210 11:59:08.742293       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0210 12:23:10.579935    5644 command_runner.go:130] ! I0210 11:59:08.742308       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0210 12:23:10.579935    5644 command_runner.go:130] ! I0210 11:59:08.742326       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0210 12:23:10.580000    5644 command_runner.go:130] ! I0210 11:59:08.742346       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0210 12:23:10.580032    5644 command_runner.go:130] ! I0210 11:59:08.742463       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0210 12:23:10.580032    5644 command_runner.go:130] ! I0210 11:59:08.742481       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0210 12:23:10.580102    5644 command_runner.go:130] ! I0210 11:59:08.742527       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0210 12:23:10.580102    5644 command_runner.go:130] ! I0210 11:59:08.742551       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0210 12:23:10.580159    5644 command_runner.go:130] ! I0210 11:59:08.742568       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0210 12:23:10.580192    5644 command_runner.go:130] ! I0210 11:59:08.742584       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0210 12:23:10.580213    5644 command_runner.go:130] ! W0210 11:59:08.742597       1 shared_informer.go:597] resyncPeriod 20h8m15.80202588s is smaller than resyncCheckPeriod 22h17m54.981661418s and the informer has already started. Changing it to 22h17m54.981661418s
	I0210 12:23:10.580213    5644 command_runner.go:130] ! I0210 11:59:08.742631       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0210 12:23:10.580213    5644 command_runner.go:130] ! I0210 11:59:08.742652       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0210 12:23:10.580277    5644 command_runner.go:130] ! I0210 11:59:08.742674       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0210 12:23:10.580334    5644 command_runner.go:130] ! W0210 11:59:08.742683       1 shared_informer.go:597] resyncPeriod 18h34m58.865598394s is smaller than resyncCheckPeriod 22h17m54.981661418s and the informer has already started. Changing it to 22h17m54.981661418s
	I0210 12:23:10.581072    5644 command_runner.go:130] ! I0210 11:59:08.742710       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0210 12:23:10.581131    5644 command_runner.go:130] ! I0210 11:59:08.742733       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0210 12:23:10.581131    5644 command_runner.go:130] ! I0210 11:59:08.742757       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0210 12:23:10.581131    5644 command_runner.go:130] ! I0210 11:59:08.742786       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0210 12:23:10.581205    5644 command_runner.go:130] ! I0210 11:59:08.742950       1 controllermanager.go:765] "Started controller" controller="resourcequota-controller"
	I0210 12:23:10.581205    5644 command_runner.go:130] ! I0210 11:59:08.743011       1 resource_quota_controller.go:300] "Starting resource quota controller" logger="resourcequota-controller"
	I0210 12:23:10.581243    5644 command_runner.go:130] ! I0210 11:59:08.743022       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0210 12:23:10.581283    5644 command_runner.go:130] ! I0210 11:59:08.743050       1 resource_quota_monitor.go:308] "QuotaMonitor running" logger="resourcequota-controller"
	I0210 12:23:10.581283    5644 command_runner.go:130] ! I0210 11:59:08.897782       1 controllermanager.go:765] "Started controller" controller="daemonset-controller"
	I0210 12:23:10.581327    5644 command_runner.go:130] ! I0210 11:59:08.898567       1 daemon_controller.go:294] "Starting daemon sets controller" logger="daemonset-controller"
	I0210 12:23:10.581327    5644 command_runner.go:130] ! I0210 11:59:08.898750       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0210 12:23:10.581327    5644 command_runner.go:130] ! W0210 11:59:09.538965       1 type.go:183] The watchlist request for nodes ended with an error, falling back to the standard LIST semantics, err = nodes is forbidden: User "system:serviceaccount:kube-system:node-controller" cannot watch resource "nodes" in API group "" at the cluster scope
	I0210 12:23:10.581393    5644 command_runner.go:130] ! I0210 11:59:09.557948       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0210 12:23:10.581393    5644 command_runner.go:130] ! I0210 11:59:09.558013       1 controllermanager.go:765] "Started controller" controller="node-ipam-controller"
	I0210 12:23:10.581393    5644 command_runner.go:130] ! I0210 11:59:09.558024       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0210 12:23:10.581462    5644 command_runner.go:130] ! I0210 11:59:09.558263       1 node_ipam_controller.go:141] "Starting ipam controller" logger="node-ipam-controller"
	I0210 12:23:10.581462    5644 command_runner.go:130] ! I0210 11:59:09.558274       1 shared_informer.go:313] Waiting for caches to sync for node
	I0210 12:23:10.581462    5644 command_runner.go:130] ! I0210 11:59:09.587543       1 controllermanager.go:765] "Started controller" controller="serviceaccount-controller"
	I0210 12:23:10.581462    5644 command_runner.go:130] ! I0210 11:59:09.587843       1 serviceaccounts_controller.go:114] "Starting service account controller" logger="serviceaccount-controller"
	I0210 12:23:10.581527    5644 command_runner.go:130] ! I0210 11:59:09.587861       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0210 12:23:10.581527    5644 command_runner.go:130] ! I0210 11:59:09.635254       1 garbagecollector.go:144] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0210 12:23:10.581527    5644 command_runner.go:130] ! I0210 11:59:09.635299       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0210 12:23:10.581527    5644 command_runner.go:130] ! I0210 11:59:09.635329       1 graph_builder.go:351] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0210 12:23:10.581592    5644 command_runner.go:130] ! I0210 11:59:09.636160       1 controllermanager.go:765] "Started controller" controller="garbage-collector-controller"
	I0210 12:23:10.581592    5644 command_runner.go:130] ! I0210 11:59:09.814593       1 controllermanager.go:765] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0210 12:23:10.581592    5644 command_runner.go:130] ! I0210 11:59:09.814752       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0210 12:23:10.581592    5644 command_runner.go:130] ! I0210 11:59:09.814770       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0210 12:23:10.581657    5644 command_runner.go:130] ! I0210 11:59:09.817088       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0210 12:23:10.581657    5644 command_runner.go:130] ! I0210 11:59:09.817114       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0210 12:23:10.581657    5644 command_runner.go:130] ! I0210 11:59:09.817159       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0210 12:23:10.581657    5644 command_runner.go:130] ! I0210 11:59:09.817166       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0210 12:23:10.581657    5644 command_runner.go:130] ! I0210 11:59:09.817276       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0210 12:23:10.581657    5644 command_runner.go:130] ! I0210 11:59:09.817288       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0210 12:23:10.581763    5644 command_runner.go:130] ! I0210 11:59:09.817325       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0210 12:23:10.581763    5644 command_runner.go:130] ! I0210 11:59:09.817457       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0210 12:23:10.581763    5644 command_runner.go:130] ! I0210 11:59:09.817598       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0210 12:23:10.581830    5644 command_runner.go:130] ! I0210 11:59:09.817777       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0210 12:23:10.581830    5644 command_runner.go:130] ! I0210 11:59:09.873976       1 controllermanager.go:765] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0210 12:23:10.581830    5644 command_runner.go:130] ! I0210 11:59:09.874097       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0210 12:23:10.581830    5644 command_runner.go:130] ! I0210 11:59:09.874114       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0210 12:23:10.581830    5644 command_runner.go:130] ! I0210 11:59:10.010350       1 controllermanager.go:765] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0210 12:23:10.581908    5644 command_runner.go:130] ! I0210 11:59:10.010713       1 controllermanager.go:743] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0210 12:23:10.581976    5644 command_runner.go:130] ! I0210 11:59:10.010555       1 attach_detach_controller.go:338] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0210 12:23:10.581976    5644 command_runner.go:130] ! I0210 11:59:10.010999       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0210 12:23:10.581976    5644 command_runner.go:130] ! I0210 11:59:10.148245       1 controllermanager.go:765] "Started controller" controller="endpoints-controller"
	I0210 12:23:10.582041    5644 command_runner.go:130] ! I0210 11:59:10.148336       1 endpoints_controller.go:182] "Starting endpoint controller" logger="endpoints-controller"
	I0210 12:23:10.582041    5644 command_runner.go:130] ! I0210 11:59:10.148619       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0210 12:23:10.582041    5644 command_runner.go:130] ! I0210 11:59:10.294135       1 controllermanager.go:765] "Started controller" controller="statefulset-controller"
	I0210 12:23:10.582041    5644 command_runner.go:130] ! I0210 11:59:10.294378       1 stateful_set.go:166] "Starting stateful set controller" logger="statefulset-controller"
	I0210 12:23:10.582106    5644 command_runner.go:130] ! I0210 11:59:10.294395       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0210 12:23:10.582106    5644 command_runner.go:130] ! I0210 11:59:10.455757       1 controllermanager.go:765] "Started controller" controller="persistentvolume-expander-controller"
	I0210 12:23:10.582106    5644 command_runner.go:130] ! I0210 11:59:10.456357       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0210 12:23:10.582106    5644 command_runner.go:130] ! I0210 11:59:10.456388       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0210 12:23:10.582106    5644 command_runner.go:130] ! I0210 11:59:10.617918       1 controllermanager.go:765] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0210 12:23:10.582171    5644 command_runner.go:130] ! I0210 11:59:10.618004       1 publisher.go:107] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0210 12:23:10.582171    5644 command_runner.go:130] ! I0210 11:59:10.618017       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0210 12:23:10.582171    5644 command_runner.go:130] ! I0210 11:59:10.630001       1 controllermanager.go:765] "Started controller" controller="taint-eviction-controller"
	I0210 12:23:10.582238    5644 command_runner.go:130] ! I0210 11:59:10.630344       1 taint_eviction.go:281] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0210 12:23:10.582238    5644 command_runner.go:130] ! I0210 11:59:10.630739       1 taint_eviction.go:287] "Sending events to api server" logger="taint-eviction-controller"
	I0210 12:23:10.582276    5644 command_runner.go:130] ! I0210 11:59:10.630915       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0210 12:23:10.582276    5644 command_runner.go:130] ! I0210 11:59:10.683156       1 node_lifecycle_controller.go:432] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0210 12:23:10.582317    5644 command_runner.go:130] ! I0210 11:59:10.683264       1 controllermanager.go:765] "Started controller" controller="node-lifecycle-controller"
	I0210 12:23:10.582317    5644 command_runner.go:130] ! I0210 11:59:10.683357       1 node_lifecycle_controller.go:466] "Sending events to api server" logger="node-lifecycle-controller"
	I0210 12:23:10.582342    5644 command_runner.go:130] ! I0210 11:59:10.683709       1 node_lifecycle_controller.go:477] "Starting node controller" logger="node-lifecycle-controller"
	I0210 12:23:10.582370    5644 command_runner.go:130] ! I0210 11:59:10.683833       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0210 12:23:10.582403    5644 command_runner.go:130] ! I0210 11:59:10.764503       1 controllermanager.go:765] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0210 12:23:10.582435    5644 command_runner.go:130] ! I0210 11:59:10.764626       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0210 12:23:10.582435    5644 command_runner.go:130] ! I0210 11:59:10.893425       1 controllermanager.go:765] "Started controller" controller="bootstrap-signer-controller"
	I0210 12:23:10.582463    5644 command_runner.go:130] ! I0210 11:59:10.893535       1 controllermanager.go:723] "Skipping a cloud provider controller" controller="service-lb-controller"
	I0210 12:23:10.582463    5644 command_runner.go:130] ! I0210 11:59:10.893547       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="volumeattributesclass-protection-controller" requiredFeatureGates=["VolumeAttributesClass"]
	I0210 12:23:10.582463    5644 command_runner.go:130] ! I0210 11:59:10.893637       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0210 12:23:10.582463    5644 command_runner.go:130] ! I0210 11:59:11.207689       1 controllermanager.go:765] "Started controller" controller="ttl-after-finished-controller"
	I0210 12:23:10.582463    5644 command_runner.go:130] ! I0210 11:59:11.207720       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0210 12:23:10.582463    5644 command_runner.go:130] ! I0210 11:59:11.208285       1 ttlafterfinished_controller.go:112] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0210 12:23:10.582463    5644 command_runner.go:130] ! I0210 11:59:11.208325       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0210 12:23:10.582463    5644 command_runner.go:130] ! I0210 11:59:11.268236       1 controllermanager.go:765] "Started controller" controller="endpointslice-mirroring-controller"
	I0210 12:23:10.582463    5644 command_runner.go:130] ! I0210 11:59:11.268441       1 endpointslicemirroring_controller.go:227] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0210 12:23:10.582463    5644 command_runner.go:130] ! I0210 11:59:11.268458       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0210 12:23:10.582463    5644 command_runner.go:130] ! I0210 11:59:11.834451       1 controllermanager.go:765] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0210 12:23:10.582463    5644 command_runner.go:130] ! I0210 11:59:11.839072       1 horizontal.go:201] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0210 12:23:10.582463    5644 command_runner.go:130] ! I0210 11:59:11.839109       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0210 12:23:10.582463    5644 command_runner.go:130] ! I0210 11:59:11.954065       1 controllermanager.go:765] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0210 12:23:10.582463    5644 command_runner.go:130] ! I0210 11:59:11.954564       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="selinux-warning-controller" requiredFeatureGates=["SELinuxChangePolicy"]
	I0210 12:23:10.582463    5644 command_runner.go:130] ! I0210 11:59:11.954191       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0210 12:23:10.582463    5644 command_runner.go:130] ! I0210 11:59:11.971728       1 controllermanager.go:765] "Started controller" controller="endpointslice-controller"
	I0210 12:23:10.582463    5644 command_runner.go:130] ! I0210 11:59:11.972266       1 endpointslice_controller.go:281] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0210 12:23:10.582463    5644 command_runner.go:130] ! I0210 11:59:11.972442       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0210 12:23:10.582463    5644 command_runner.go:130] ! I0210 11:59:11.988553       1 controllermanager.go:765] "Started controller" controller="replicationcontroller-controller"
	I0210 12:23:10.582463    5644 command_runner.go:130] ! I0210 11:59:11.989935       1 replica_set.go:217] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0210 12:23:10.582463    5644 command_runner.go:130] ! I0210 11:59:11.990037       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0210 12:23:10.582463    5644 command_runner.go:130] ! I0210 11:59:12.002658       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0210 12:23:10.582463    5644 command_runner.go:130] ! I0210 11:59:12.026212       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0210 12:23:10.582463    5644 command_runner.go:130] ! I0210 11:59:12.053411       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-032400\" does not exist"
	I0210 12:23:10.582463    5644 command_runner.go:130] ! I0210 11:59:12.059575       1 shared_informer.go:320] Caches are synced for node
	I0210 12:23:10.582463    5644 command_runner.go:130] ! I0210 11:59:12.059677       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0210 12:23:10.582463    5644 command_runner.go:130] ! I0210 11:59:12.060669       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0210 12:23:10.582463    5644 command_runner.go:130] ! I0210 11:59:12.060694       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0210 12:23:10.582463    5644 command_runner.go:130] ! I0210 11:59:12.060736       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0210 12:23:10.582463    5644 command_runner.go:130] ! I0210 11:59:12.075788       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0210 12:23:10.582463    5644 command_runner.go:130] ! I0210 11:59:12.090277       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0210 12:23:10.582463    5644 command_runner.go:130] ! I0210 11:59:12.093866       1 shared_informer.go:320] Caches are synced for PV protection
	I0210 12:23:10.582463    5644 command_runner.go:130] ! I0210 11:59:12.094251       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-032400" podCIDRs=["10.244.0.0/24"]
	I0210 12:23:10.582463    5644 command_runner.go:130] ! I0210 11:59:12.094298       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400"
	I0210 12:23:10.582463    5644 command_runner.go:130] ! I0210 11:59:12.094445       1 shared_informer.go:320] Caches are synced for stateful set
	I0210 12:23:10.582463    5644 command_runner.go:130] ! I0210 11:59:12.094647       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400"
	I0210 12:23:10.583000    5644 command_runner.go:130] ! I0210 11:59:12.094787       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0210 12:23:10.583000    5644 command_runner.go:130] ! I0210 11:59:12.098777       1 shared_informer.go:320] Caches are synced for cronjob
	I0210 12:23:10.583000    5644 command_runner.go:130] ! I0210 11:59:12.099001       1 shared_informer.go:320] Caches are synced for namespace
	I0210 12:23:10.583044    5644 command_runner.go:130] ! I0210 11:59:12.099016       1 shared_informer.go:320] Caches are synced for daemon sets
	I0210 12:23:10.583044    5644 command_runner.go:130] ! I0210 11:59:12.103407       1 shared_informer.go:320] Caches are synced for resource quota
	I0210 12:23:10.583044    5644 command_runner.go:130] ! I0210 11:59:12.108852       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0210 12:23:10.583044    5644 command_runner.go:130] ! I0210 11:59:12.108917       1 shared_informer.go:320] Caches are synced for ephemeral
	I0210 12:23:10.583044    5644 command_runner.go:130] ! I0210 11:59:12.111199       1 shared_informer.go:320] Caches are synced for attach detach
	I0210 12:23:10.583101    5644 command_runner.go:130] ! I0210 11:59:12.115876       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0210 12:23:10.583101    5644 command_runner.go:130] ! I0210 11:59:12.117732       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0210 12:23:10.583101    5644 command_runner.go:130] ! I0210 11:59:12.117858       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0210 12:23:10.583101    5644 command_runner.go:130] ! I0210 11:59:12.117925       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0210 12:23:10.583101    5644 command_runner.go:130] ! I0210 11:59:12.118059       1 shared_informer.go:320] Caches are synced for crt configmap
	I0210 12:23:10.583101    5644 command_runner.go:130] ! I0210 11:59:12.127026       1 shared_informer.go:320] Caches are synced for garbage collector
	I0210 12:23:10.583165    5644 command_runner.go:130] ! I0210 11:59:12.132202       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0210 12:23:10.583165    5644 command_runner.go:130] ! I0210 11:59:12.132293       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0210 12:23:10.583189    5644 command_runner.go:130] ! I0210 11:59:12.132357       1 shared_informer.go:320] Caches are synced for job
	I0210 12:23:10.583189    5644 command_runner.go:130] ! I0210 11:59:12.136457       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0210 12:23:10.583189    5644 command_runner.go:130] ! I0210 11:59:12.136477       1 shared_informer.go:320] Caches are synced for PVC protection
	I0210 12:23:10.583189    5644 command_runner.go:130] ! I0210 11:59:12.136864       1 shared_informer.go:320] Caches are synced for garbage collector
	I0210 12:23:10.583256    5644 command_runner.go:130] ! I0210 11:59:12.137022       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0210 12:23:10.583256    5644 command_runner.go:130] ! I0210 11:59:12.137034       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0210 12:23:10.583256    5644 command_runner.go:130] ! I0210 11:59:12.140123       1 shared_informer.go:320] Caches are synced for HPA
	I0210 12:23:10.583256    5644 command_runner.go:130] ! I0210 11:59:12.143611       1 shared_informer.go:320] Caches are synced for resource quota
	I0210 12:23:10.583256    5644 command_runner.go:130] ! I0210 11:59:12.146959       1 shared_informer.go:320] Caches are synced for GC
	I0210 12:23:10.583324    5644 command_runner.go:130] ! I0210 11:59:12.149917       1 shared_informer.go:320] Caches are synced for endpoint
	I0210 12:23:10.583324    5644 command_runner.go:130] ! I0210 11:59:12.151583       1 shared_informer.go:320] Caches are synced for TTL
	I0210 12:23:10.583324    5644 command_runner.go:130] ! I0210 11:59:12.151756       1 shared_informer.go:320] Caches are synced for disruption
	I0210 12:23:10.583324    5644 command_runner.go:130] ! I0210 11:59:12.155408       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0210 12:23:10.583389    5644 command_runner.go:130] ! I0210 11:59:12.156838       1 shared_informer.go:320] Caches are synced for expand
	I0210 12:23:10.583389    5644 command_runner.go:130] ! I0210 11:59:12.166263       1 shared_informer.go:320] Caches are synced for deployment
	I0210 12:23:10.583389    5644 command_runner.go:130] ! I0210 11:59:12.169607       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0210 12:23:10.583389    5644 command_runner.go:130] ! I0210 11:59:12.173266       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0210 12:23:10.583389    5644 command_runner.go:130] ! I0210 11:59:12.183228       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0210 12:23:10.583457    5644 command_runner.go:130] ! I0210 11:59:12.183461       1 shared_informer.go:320] Caches are synced for persistent volume
	I0210 12:23:10.583457    5644 command_runner.go:130] ! I0210 11:59:12.184165       1 shared_informer.go:320] Caches are synced for taint
	I0210 12:23:10.583457    5644 command_runner.go:130] ! I0210 11:59:12.184514       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0210 12:23:10.583457    5644 command_runner.go:130] ! I0210 11:59:12.185265       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-032400"
	I0210 12:23:10.583521    5644 command_runner.go:130] ! I0210 11:59:12.186883       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I0210 12:23:10.583521    5644 command_runner.go:130] ! I0210 11:59:12.189882       1 shared_informer.go:320] Caches are synced for service account
	I0210 12:23:10.583521    5644 command_runner.go:130] ! I0210 11:59:12.964659       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400"
	I0210 12:23:10.583583    5644 command_runner.go:130] ! I0210 11:59:14.306836       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="1.342470129s"
	I0210 12:23:10.583583    5644 command_runner.go:130] ! I0210 11:59:14.421918       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="114.771421ms"
	I0210 12:23:10.583583    5644 command_runner.go:130] ! I0210 11:59:14.422243       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="109.5µs"
	I0210 12:23:10.583648    5644 command_runner.go:130] ! I0210 11:59:14.423300       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="45.7µs"
	I0210 12:23:10.583648    5644 command_runner.go:130] ! I0210 11:59:15.150166       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="328.244339ms"
	I0210 12:23:10.583648    5644 command_runner.go:130] ! I0210 11:59:15.175057       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="24.827249ms"
	I0210 12:23:10.583708    5644 command_runner.go:130] ! I0210 11:59:15.175285       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="52.7µs"
	I0210 12:23:10.583730    5644 command_runner.go:130] ! I0210 11:59:38.469109       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400"
	I0210 12:23:10.583757    5644 command_runner.go:130] ! I0210 11:59:41.029106       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400"
	I0210 12:23:10.583757    5644 command_runner.go:130] ! I0210 11:59:41.056002       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400"
	I0210 12:23:10.583793    5644 command_runner.go:130] ! I0210 11:59:41.223446       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="83.5µs"
	I0210 12:23:10.583793    5644 command_runner.go:130] ! I0210 11:59:42.192695       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0210 12:23:10.583832    5644 command_runner.go:130] ! I0210 11:59:43.176439       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="220.4µs"
	I0210 12:23:10.583865    5644 command_runner.go:130] ! I0210 11:59:45.142362       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="156.401µs"
	I0210 12:23:10.583865    5644 command_runner.go:130] ! I0210 11:59:46.978311       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="109.784549ms"
	I0210 12:23:10.583903    5644 command_runner.go:130] ! I0210 11:59:46.978923       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="160.001µs"
	I0210 12:23:10.583903    5644 command_runner.go:130] ! I0210 12:02:24.351602       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-032400-m02\" does not exist"
	I0210 12:23:10.583959    5644 command_runner.go:130] ! I0210 12:02:24.372872       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-032400-m02" podCIDRs=["10.244.1.0/24"]
	I0210 12:23:10.583959    5644 command_runner.go:130] ! I0210 12:02:24.372982       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:23:10.583959    5644 command_runner.go:130] ! I0210 12:02:24.373016       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:23:10.583959    5644 command_runner.go:130] ! I0210 12:02:24.386686       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:23:10.584038    5644 command_runner.go:130] ! I0210 12:02:24.500791       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:23:10.584038    5644 command_runner.go:130] ! I0210 12:02:25.042334       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:23:10.584038    5644 command_runner.go:130] ! I0210 12:02:27.223123       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-032400-m02"
	I0210 12:23:10.584038    5644 command_runner.go:130] ! I0210 12:02:27.269202       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:23:10.584106    5644 command_runner.go:130] ! I0210 12:02:34.686149       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:23:10.584106    5644 command_runner.go:130] ! I0210 12:02:55.584256       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:23:10.584172    5644 command_runner.go:130] ! I0210 12:02:58.900478       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-032400-m02"
	I0210 12:23:10.584172    5644 command_runner.go:130] ! I0210 12:02:58.901463       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:23:10.584242    5644 command_runner.go:130] ! I0210 12:02:58.923096       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:23:10.584242    5644 command_runner.go:130] ! I0210 12:03:02.254436       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:23:10.584242    5644 command_runner.go:130] ! I0210 12:03:23.965178       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="118.072754ms"
	I0210 12:23:10.584242    5644 command_runner.go:130] ! I0210 12:03:23.989777       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="24.546974ms"
	I0210 12:23:10.584320    5644 command_runner.go:130] ! I0210 12:03:23.990705       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="86.6µs"
	I0210 12:23:10.584320    5644 command_runner.go:130] ! I0210 12:03:23.998308       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="47.1µs"
	I0210 12:23:10.584320    5644 command_runner.go:130] ! I0210 12:03:26.707538       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="12.348343ms"
	I0210 12:23:10.584320    5644 command_runner.go:130] ! I0210 12:03:26.708146       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="79.501µs"
	I0210 12:23:10.584390    5644 command_runner.go:130] ! I0210 12:03:27.077137       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="9.878814ms"
	I0210 12:23:10.584390    5644 command_runner.go:130] ! I0210 12:03:27.077509       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="53.8µs"
	I0210 12:23:10.584390    5644 command_runner.go:130] ! I0210 12:03:43.387780       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400"
	I0210 12:23:10.584462    5644 command_runner.go:130] ! I0210 12:03:56.589176       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:23:10.584462    5644 command_runner.go:130] ! I0210 12:07:05.733007       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-032400-m03\" does not exist"
	I0210 12:23:10.584462    5644 command_runner.go:130] ! I0210 12:07:05.733621       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-032400-m02"
	I0210 12:23:10.584462    5644 command_runner.go:130] ! I0210 12:07:05.776872       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-032400-m03" podCIDRs=["10.244.2.0/24"]
	I0210 12:23:10.584530    5644 command_runner.go:130] ! I0210 12:07:05.777009       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:10.584530    5644 command_runner.go:130] ! E0210 12:07:05.833973       1 range_allocator.go:433] "Failed to update node PodCIDR after multiple attempts" err="failed to patch node CIDR: Node \"multinode-032400-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.3.0/24\", \"10.244.2.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="multinode-032400-m03" podCIDRs=["10.244.3.0/24"]
	I0210 12:23:10.584596    5644 command_runner.go:130] ! E0210 12:07:05.834115       1 range_allocator.go:439] "CIDR assignment for node failed. Releasing allocated CIDR" err="failed to patch node CIDR: Node \"multinode-032400-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.3.0/24\", \"10.244.2.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="multinode-032400-m03"
	I0210 12:23:10.584664    5644 command_runner.go:130] ! E0210 12:07:05.834184       1 range_allocator.go:252] "Unhandled Error" err="error syncing 'multinode-032400-m03': failed to patch node CIDR: Node \"multinode-032400-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.3.0/24\", \"10.244.2.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid], requeuing" logger="UnhandledError"
	I0210 12:23:10.584664    5644 command_runner.go:130] ! I0210 12:07:05.834211       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:10.584664    5644 command_runner.go:130] ! I0210 12:07:05.839673       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:10.584664    5644 command_runner.go:130] ! I0210 12:07:06.048438       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:10.584664    5644 command_runner.go:130] ! I0210 12:07:06.603626       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:10.584737    5644 command_runner.go:130] ! I0210 12:07:07.285160       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-032400-m03"
	I0210 12:23:10.584737    5644 command_runner.go:130] ! I0210 12:07:07.401415       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:10.584737    5644 command_runner.go:130] ! I0210 12:07:15.795765       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:10.584801    5644 command_runner.go:130] ! I0210 12:07:34.465645       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:10.584801    5644 command_runner.go:130] ! I0210 12:07:34.466343       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-032400-m02"
	I0210 12:23:10.584801    5644 command_runner.go:130] ! I0210 12:07:34.484609       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:10.584801    5644 command_runner.go:130] ! I0210 12:07:36.177851       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:10.584865    5644 command_runner.go:130] ! I0210 12:07:37.325936       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:10.584897    5644 command_runner.go:130] ! I0210 12:08:11.294432       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:23:10.584897    5644 command_runner.go:130] ! I0210 12:09:09.390735       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400"
	I0210 12:23:10.584930    5644 command_runner.go:130] ! I0210 12:10:40.526492       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:10.584953    5644 command_runner.go:130] ! I0210 12:13:17.755688       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:23:10.584974    5644 command_runner.go:130] ! I0210 12:14:15.383603       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400"
	I0210 12:23:10.585007    5644 command_runner.go:130] ! I0210 12:15:17.429501       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:10.585045    5644 command_runner.go:130] ! I0210 12:15:17.429973       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-032400-m02"
	I0210 12:23:10.585077    5644 command_runner.go:130] ! I0210 12:15:17.456736       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:10.585077    5644 command_runner.go:130] ! I0210 12:15:22.504479       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:10.585107    5644 command_runner.go:130] ! I0210 12:17:20.800246       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:10.585134    5644 command_runner.go:130] ! I0210 12:17:20.822743       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:10.585134    5644 command_runner.go:130] ! I0210 12:17:25.699359       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-032400-m02"
	I0210 12:23:10.585134    5644 command_runner.go:130] ! I0210 12:17:31.139874       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-032400-m03\" does not exist"
	I0210 12:23:10.585134    5644 command_runner.go:130] ! I0210 12:17:31.140321       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-032400-m02"
	I0210 12:23:10.585134    5644 command_runner.go:130] ! I0210 12:17:31.172716       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-032400-m03" podCIDRs=["10.244.4.0/24"]
	I0210 12:23:10.585134    5644 command_runner.go:130] ! I0210 12:17:31.172758       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:10.585134    5644 command_runner.go:130] ! I0210 12:17:31.172780       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:10.585134    5644 command_runner.go:130] ! I0210 12:17:31.210555       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:10.585134    5644 command_runner.go:130] ! I0210 12:17:31.721125       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:10.585134    5644 command_runner.go:130] ! I0210 12:17:32.574669       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:10.585134    5644 command_runner.go:130] ! I0210 12:17:41.246655       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:10.585134    5644 command_runner.go:130] ! I0210 12:17:46.096472       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-032400-m02"
	I0210 12:23:10.585134    5644 command_runner.go:130] ! I0210 12:17:46.097093       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:10.585134    5644 command_runner.go:130] ! I0210 12:17:46.112921       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:10.585134    5644 command_runner.go:130] ! I0210 12:17:47.511367       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:10.585134    5644 command_runner.go:130] ! I0210 12:18:23.730002       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:23:10.585134    5644 command_runner.go:130] ! I0210 12:19:22.675608       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400"
	I0210 12:23:10.585134    5644 command_runner.go:130] ! I0210 12:19:27.542457       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-032400-m02"
	I0210 12:23:10.585134    5644 command_runner.go:130] ! I0210 12:19:27.543090       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:10.585134    5644 command_runner.go:130] ! I0210 12:19:27.562043       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:10.585134    5644 command_runner.go:130] ! I0210 12:19:32.704197       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
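(Annotation, not part of the captured log.) The E-level entries at 12:07:05 above show the node-ipam-controller allocating 10.244.3.0/24 for multinode-032400-m03 while the node still carried 10.244.2.0/24; the API server rejects the patch because spec.podCIDRs may hold at most one CIDR per IP family and may only change from "" to a valid value. A minimal client-go sketch for inspecting that field (an illustration under assumptions, not part of the test suite: it assumes client-go is available and a kubeconfig at the default path):

    // podcidrs.go: print each node's spec.podCIDRs to check the
    // one-CIDR-per-IP-family invariant the allocator errors refer to.
    package main

    import (
    	"context"
    	"fmt"
    	"os"
    	"path/filepath"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Assumption: kubeconfig at the conventional location.
    	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
    	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, n := range nodes.Items {
    		// A node may carry at most one PodCIDR per IP family.
    		fmt.Printf("%s: PodCIDRs=%v\n", n.Name, n.Spec.PodCIDRs)
    	}
    }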
	I0210 12:23:10.606799    5644 logs.go:123] Gathering logs for dmesg ...
	I0210 12:23:10.606799    5644 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 12:23:10.628398    5644 command_runner.go:130] > [Feb10 12:20] You have booted with nomodeset. This means your GPU drivers are DISABLED
	I0210 12:23:10.628398    5644 command_runner.go:130] > [  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	I0210 12:23:10.628398    5644 command_runner.go:130] > [  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	I0210 12:23:10.628398    5644 command_runner.go:130] > [  +0.108726] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	I0210 12:23:10.628918    5644 command_runner.go:130] > [  +0.024202] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	I0210 12:23:10.628949    5644 command_runner.go:130] > [  +0.000005] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	I0210 12:23:10.628949    5644 command_runner.go:130] > [  +0.000001] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	I0210 12:23:10.629009    5644 command_runner.go:130] > [  +0.062099] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	I0210 12:23:10.629009    5644 command_runner.go:130] > [  +0.027667] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
	I0210 12:23:10.629009    5644 command_runner.go:130] >               * this clock source is slow. Consider trying other clock sources
	I0210 12:23:10.629009    5644 command_runner.go:130] > [  +6.593821] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I0210 12:23:10.629009    5644 command_runner.go:130] > [  +1.312003] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	I0210 12:23:10.629009    5644 command_runner.go:130] > [  +1.333341] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	I0210 12:23:10.629009    5644 command_runner.go:130] > [  +7.447705] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	I0210 12:23:10.629009    5644 command_runner.go:130] > [  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	I0210 12:23:10.629009    5644 command_runner.go:130] > [  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	I0210 12:23:10.629009    5644 command_runner.go:130] > [Feb10 12:21] systemd-fstab-generator[632]: Ignoring "noauto" option for root device
	I0210 12:23:10.629009    5644 command_runner.go:130] > [  +0.179509] systemd-fstab-generator[644]: Ignoring "noauto" option for root device
	I0210 12:23:10.629009    5644 command_runner.go:130] > [ +24.780658] systemd-fstab-generator[1027]: Ignoring "noauto" option for root device
	I0210 12:23:10.629009    5644 command_runner.go:130] > [  +0.095136] kauditd_printk_skb: 75 callbacks suppressed
	I0210 12:23:10.629009    5644 command_runner.go:130] > [  +0.491986] systemd-fstab-generator[1067]: Ignoring "noauto" option for root device
	I0210 12:23:10.629009    5644 command_runner.go:130] > [  +0.201276] systemd-fstab-generator[1079]: Ignoring "noauto" option for root device
	I0210 12:23:10.629009    5644 command_runner.go:130] > [  +0.245181] systemd-fstab-generator[1093]: Ignoring "noauto" option for root device
	I0210 12:23:10.629009    5644 command_runner.go:130] > [  +2.938213] systemd-fstab-generator[1334]: Ignoring "noauto" option for root device
	I0210 12:23:10.629009    5644 command_runner.go:130] > [  +0.187151] systemd-fstab-generator[1346]: Ignoring "noauto" option for root device
	I0210 12:23:10.629009    5644 command_runner.go:130] > [  +0.177538] systemd-fstab-generator[1358]: Ignoring "noauto" option for root device
	I0210 12:23:10.629009    5644 command_runner.go:130] > [  +0.257474] systemd-fstab-generator[1373]: Ignoring "noauto" option for root device
	I0210 12:23:10.629009    5644 command_runner.go:130] > [  +0.873551] systemd-fstab-generator[1498]: Ignoring "noauto" option for root device
	I0210 12:23:10.629009    5644 command_runner.go:130] > [  +0.096158] kauditd_printk_skb: 206 callbacks suppressed
	I0210 12:23:10.629009    5644 command_runner.go:130] > [  +3.180590] systemd-fstab-generator[1641]: Ignoring "noauto" option for root device
	I0210 12:23:10.629009    5644 command_runner.go:130] > [  +1.822530] kauditd_printk_skb: 69 callbacks suppressed
	I0210 12:23:10.629009    5644 command_runner.go:130] > [  +5.251804] kauditd_printk_skb: 5 callbacks suppressed
	I0210 12:23:10.629009    5644 command_runner.go:130] > [Feb10 12:22] systemd-fstab-generator[2523]: Ignoring "noauto" option for root device
	I0210 12:23:10.629009    5644 command_runner.go:130] > [ +27.335254] kauditd_printk_skb: 72 callbacks suppressed
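(Annotation, not part of the captured log.) The dmesg output above was produced by the single pipeline shown before it: human-readable output, no color, warning level and up, last 400 lines. A hypothetical Go sketch that runs the same filter locally, outside minikube's ssh_runner (assumes a Linux host with sudo access to dmesg):

    // dmesg_gather.go: run the same dmesg filter the log gatherer uses
    // inside the VM, but against the local kernel ring buffer.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	out, err := exec.Command("/bin/bash", "-c",
    		"sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400").CombinedOutput()
    	if err != nil {
    		fmt.Printf("dmesg gather failed: %v\n", err)
    	}
    	fmt.Print(string(out))
    }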
	I0210 12:23:10.630799    5644 logs.go:123] Gathering logs for describe nodes ...
	I0210 12:23:10.630799    5644 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0210 12:23:10.813774    5644 command_runner.go:130] > Name:               multinode-032400
	I0210 12:23:10.813774    5644 command_runner.go:130] > Roles:              control-plane
	I0210 12:23:10.813848    5644 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0210 12:23:10.813848    5644 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0210 12:23:10.813848    5644 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0210 12:23:10.813848    5644 command_runner.go:130] >                     kubernetes.io/hostname=multinode-032400
	I0210 12:23:10.813848    5644 command_runner.go:130] >                     kubernetes.io/os=linux
	I0210 12:23:10.813848    5644 command_runner.go:130] >                     minikube.k8s.io/commit=a597502568cd649748018b4cfeb698a4b8b36160
	I0210 12:23:10.813848    5644 command_runner.go:130] >                     minikube.k8s.io/name=multinode-032400
	I0210 12:23:10.813910    5644 command_runner.go:130] >                     minikube.k8s.io/primary=true
	I0210 12:23:10.813910    5644 command_runner.go:130] >                     minikube.k8s.io/updated_at=2025_02_10T11_59_08_0700
	I0210 12:23:10.813910    5644 command_runner.go:130] >                     minikube.k8s.io/version=v1.35.0
	I0210 12:23:10.813910    5644 command_runner.go:130] >                     node-role.kubernetes.io/control-plane=
	I0210 12:23:10.813910    5644 command_runner.go:130] >                     node.kubernetes.io/exclude-from-external-load-balancers=
	I0210 12:23:10.813968    5644 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0210 12:23:10.813968    5644 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0210 12:23:10.813968    5644 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0210 12:23:10.814031    5644 command_runner.go:130] > CreationTimestamp:  Mon, 10 Feb 2025 11:59:02 +0000
	I0210 12:23:10.814031    5644 command_runner.go:130] > Taints:             <none>
	I0210 12:23:10.814031    5644 command_runner.go:130] > Unschedulable:      false
	I0210 12:23:10.814031    5644 command_runner.go:130] > Lease:
	I0210 12:23:10.814031    5644 command_runner.go:130] >   HolderIdentity:  multinode-032400
	I0210 12:23:10.814031    5644 command_runner.go:130] >   AcquireTime:     <unset>
	I0210 12:23:10.814031    5644 command_runner.go:130] >   RenewTime:       Mon, 10 Feb 2025 12:23:09 +0000
	I0210 12:23:10.814092    5644 command_runner.go:130] > Conditions:
	I0210 12:23:10.814092    5644 command_runner.go:130] >   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	I0210 12:23:10.814092    5644 command_runner.go:130] >   ----             ------  -----------------                 ------------------                ------                       -------
	I0210 12:23:10.814092    5644 command_runner.go:130] >   MemoryPressure   False   Mon, 10 Feb 2025 12:22:48 +0000   Mon, 10 Feb 2025 11:59:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	I0210 12:23:10.814151    5644 command_runner.go:130] >   DiskPressure     False   Mon, 10 Feb 2025 12:22:48 +0000   Mon, 10 Feb 2025 11:59:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	I0210 12:23:10.814188    5644 command_runner.go:130] >   PIDPressure      False   Mon, 10 Feb 2025 12:22:48 +0000   Mon, 10 Feb 2025 11:59:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	I0210 12:23:10.814202    5644 command_runner.go:130] >   Ready            True    Mon, 10 Feb 2025 12:22:48 +0000   Mon, 10 Feb 2025 12:22:48 +0000   KubeletReady                 kubelet is posting ready status
	I0210 12:23:10.814202    5644 command_runner.go:130] > Addresses:
	I0210 12:23:10.814202    5644 command_runner.go:130] >   InternalIP:  172.29.129.181
	I0210 12:23:10.814251    5644 command_runner.go:130] >   Hostname:    multinode-032400
	I0210 12:23:10.814284    5644 command_runner.go:130] > Capacity:
	I0210 12:23:10.814296    5644 command_runner.go:130] >   cpu:                2
	I0210 12:23:10.814296    5644 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0210 12:23:10.814296    5644 command_runner.go:130] >   hugepages-2Mi:      0
	I0210 12:23:10.814296    5644 command_runner.go:130] >   memory:             2164264Ki
	I0210 12:23:10.814296    5644 command_runner.go:130] >   pods:               110
	I0210 12:23:10.814296    5644 command_runner.go:130] > Allocatable:
	I0210 12:23:10.814296    5644 command_runner.go:130] >   cpu:                2
	I0210 12:23:10.814379    5644 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0210 12:23:10.814379    5644 command_runner.go:130] >   hugepages-2Mi:      0
	I0210 12:23:10.814379    5644 command_runner.go:130] >   memory:             2164264Ki
	I0210 12:23:10.814379    5644 command_runner.go:130] >   pods:               110
	I0210 12:23:10.814379    5644 command_runner.go:130] > System Info:
	I0210 12:23:10.814438    5644 command_runner.go:130] >   Machine ID:                 bd9d6a2450e6476eba748aa8fab044bb
	I0210 12:23:10.814438    5644 command_runner.go:130] >   System UUID:                43aa2284-9342-094e-a67f-da5d9d45fabd
	I0210 12:23:10.814438    5644 command_runner.go:130] >   Boot ID:                    f18b3f97-a44b-4ae9-ad96-af2bf29d6df2
	I0210 12:23:10.814438    5644 command_runner.go:130] >   Kernel Version:             5.10.207
	I0210 12:23:10.814438    5644 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0210 12:23:10.814438    5644 command_runner.go:130] >   Operating System:           linux
	I0210 12:23:10.814500    5644 command_runner.go:130] >   Architecture:               amd64
	I0210 12:23:10.814500    5644 command_runner.go:130] >   Container Runtime Version:  docker://27.4.0
	I0210 12:23:10.814500    5644 command_runner.go:130] >   Kubelet Version:            v1.32.1
	I0210 12:23:10.814500    5644 command_runner.go:130] >   Kube-Proxy Version:         v1.32.1
	I0210 12:23:10.814500    5644 command_runner.go:130] > PodCIDR:                      10.244.0.0/24
	I0210 12:23:10.814559    5644 command_runner.go:130] > PodCIDRs:                     10.244.0.0/24
	I0210 12:23:10.814580    5644 command_runner.go:130] > Non-terminated Pods:          (9 in total)
	I0210 12:23:10.814580    5644 command_runner.go:130] >   Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0210 12:23:10.814606    5644 command_runner.go:130] >   ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	I0210 12:23:10.814606    5644 command_runner.go:130] >   default                     busybox-58667487b6-8shfg                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	I0210 12:23:10.814606    5644 command_runner.go:130] >   kube-system                 coredns-668d6bf9bc-w8rr9                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     23m
	I0210 12:23:10.814606    5644 command_runner.go:130] >   kube-system                 etcd-multinode-032400                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         72s
	I0210 12:23:10.814606    5644 command_runner.go:130] >   kube-system                 kindnet-c2mb8                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      23m
	I0210 12:23:10.814606    5644 command_runner.go:130] >   kube-system                 kube-apiserver-multinode-032400             250m (12%)    0 (0%)      0 (0%)           0 (0%)         72s
	I0210 12:23:10.814606    5644 command_runner.go:130] >   kube-system                 kube-controller-manager-multinode-032400    200m (10%)    0 (0%)      0 (0%)           0 (0%)         24m
	I0210 12:23:10.814606    5644 command_runner.go:130] >   kube-system                 kube-proxy-rrh82                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	I0210 12:23:10.814606    5644 command_runner.go:130] >   kube-system                 kube-scheduler-multinode-032400             100m (5%)     0 (0%)      0 (0%)           0 (0%)         24m
	I0210 12:23:10.814606    5644 command_runner.go:130] >   kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	I0210 12:23:10.814606    5644 command_runner.go:130] > Allocated resources:
	I0210 12:23:10.814606    5644 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0210 12:23:10.814606    5644 command_runner.go:130] >   Resource           Requests     Limits
	I0210 12:23:10.814606    5644 command_runner.go:130] >   --------           --------     ------
	I0210 12:23:10.814606    5644 command_runner.go:130] >   cpu                850m (42%)   100m (5%)
	I0210 12:23:10.814606    5644 command_runner.go:130] >   memory             220Mi (10%)  220Mi (10%)
	I0210 12:23:10.814606    5644 command_runner.go:130] >   ephemeral-storage  0 (0%)       0 (0%)
	I0210 12:23:10.814606    5644 command_runner.go:130] >   hugepages-2Mi      0 (0%)       0 (0%)
	I0210 12:23:10.814606    5644 command_runner.go:130] > Events:
	I0210 12:23:10.814606    5644 command_runner.go:130] >   Type     Reason                   Age                From             Message
	I0210 12:23:10.814606    5644 command_runner.go:130] >   ----     ------                   ----               ----             -------
	I0210 12:23:10.814606    5644 command_runner.go:130] >   Normal   Starting                 23m                kube-proxy       
	I0210 12:23:10.814606    5644 command_runner.go:130] >   Normal   Starting                 69s                kube-proxy       
	I0210 12:23:10.814606    5644 command_runner.go:130] >   Normal   NodeHasSufficientMemory  24m (x8 over 24m)  kubelet          Node multinode-032400 status is now: NodeHasSufficientMemory
	I0210 12:23:10.814606    5644 command_runner.go:130] >   Normal   NodeHasNoDiskPressure    24m (x8 over 24m)  kubelet          Node multinode-032400 status is now: NodeHasNoDiskPressure
	I0210 12:23:10.814606    5644 command_runner.go:130] >   Normal   NodeHasSufficientPID     24m (x7 over 24m)  kubelet          Node multinode-032400 status is now: NodeHasSufficientPID
	I0210 12:23:10.814606    5644 command_runner.go:130] >   Normal   NodeHasSufficientPID     24m (x2 over 24m)  kubelet          Node multinode-032400 status is now: NodeHasSufficientPID
	I0210 12:23:10.814606    5644 command_runner.go:130] >   Normal   NodeHasSufficientMemory  24m (x2 over 24m)  kubelet          Node multinode-032400 status is now: NodeHasSufficientMemory
	I0210 12:23:10.814606    5644 command_runner.go:130] >   Normal   NodeHasNoDiskPressure    24m (x2 over 24m)  kubelet          Node multinode-032400 status is now: NodeHasNoDiskPressure
	I0210 12:23:10.814606    5644 command_runner.go:130] >   Normal   NodeAllocatableEnforced  24m                kubelet          Updated Node Allocatable limit across pods
	I0210 12:23:10.814606    5644 command_runner.go:130] >   Normal   Starting                 24m                kubelet          Starting kubelet.
	I0210 12:23:10.814606    5644 command_runner.go:130] >   Normal   RegisteredNode           23m                node-controller  Node multinode-032400 event: Registered Node multinode-032400 in Controller
	I0210 12:23:10.814606    5644 command_runner.go:130] >   Normal   NodeReady                23m                kubelet          Node multinode-032400 status is now: NodeReady
	I0210 12:23:10.814606    5644 command_runner.go:130] >   Normal   Starting                 78s                kubelet          Starting kubelet.
	I0210 12:23:10.814606    5644 command_runner.go:130] >   Normal   NodeAllocatableEnforced  78s                kubelet          Updated Node Allocatable limit across pods
	I0210 12:23:10.814606    5644 command_runner.go:130] >   Normal   NodeHasSufficientMemory  77s (x8 over 78s)  kubelet          Node multinode-032400 status is now: NodeHasSufficientMemory
	I0210 12:23:10.814606    5644 command_runner.go:130] >   Normal   NodeHasNoDiskPressure    77s (x8 over 78s)  kubelet          Node multinode-032400 status is now: NodeHasNoDiskPressure
	I0210 12:23:10.814606    5644 command_runner.go:130] >   Normal   NodeHasSufficientPID     77s (x7 over 78s)  kubelet          Node multinode-032400 status is now: NodeHasSufficientPID
	I0210 12:23:10.814606    5644 command_runner.go:130] >   Warning  Rebooted                 72s                kubelet          Node multinode-032400 has been rebooted, boot id: f18b3f97-a44b-4ae9-ad96-af2bf29d6df2
	I0210 12:23:10.814606    5644 command_runner.go:130] >   Normal   RegisteredNode           69s                node-controller  Node multinode-032400 event: Registered Node multinode-032400 in Controller
	I0210 12:23:10.814606    5644 command_runner.go:130] > Name:               multinode-032400-m02
	I0210 12:23:10.815133    5644 command_runner.go:130] > Roles:              <none>
	I0210 12:23:10.815133    5644 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0210 12:23:10.815133    5644 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0210 12:23:10.815133    5644 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0210 12:23:10.815133    5644 command_runner.go:130] >                     kubernetes.io/hostname=multinode-032400-m02
	I0210 12:23:10.815133    5644 command_runner.go:130] >                     kubernetes.io/os=linux
	I0210 12:23:10.815203    5644 command_runner.go:130] >                     minikube.k8s.io/commit=a597502568cd649748018b4cfeb698a4b8b36160
	I0210 12:23:10.815203    5644 command_runner.go:130] >                     minikube.k8s.io/name=multinode-032400
	I0210 12:23:10.815203    5644 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0210 12:23:10.815203    5644 command_runner.go:130] >                     minikube.k8s.io/updated_at=2025_02_10T12_02_24_0700
	I0210 12:23:10.815203    5644 command_runner.go:130] >                     minikube.k8s.io/version=v1.35.0
	I0210 12:23:10.815203    5644 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0210 12:23:10.815203    5644 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0210 12:23:10.815203    5644 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0210 12:23:10.815203    5644 command_runner.go:130] > CreationTimestamp:  Mon, 10 Feb 2025 12:02:24 +0000
	I0210 12:23:10.815203    5644 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0210 12:23:10.815203    5644 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0210 12:23:10.815203    5644 command_runner.go:130] > Unschedulable:      false
	I0210 12:23:10.815203    5644 command_runner.go:130] > Lease:
	I0210 12:23:10.815203    5644 command_runner.go:130] >   HolderIdentity:  multinode-032400-m02
	I0210 12:23:10.815203    5644 command_runner.go:130] >   AcquireTime:     <unset>
	I0210 12:23:10.815203    5644 command_runner.go:130] >   RenewTime:       Mon, 10 Feb 2025 12:18:56 +0000
	I0210 12:23:10.815203    5644 command_runner.go:130] > Conditions:
	I0210 12:23:10.815203    5644 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0210 12:23:10.815203    5644 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0210 12:23:10.815203    5644 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 10 Feb 2025 12:18:23 +0000   Mon, 10 Feb 2025 12:22:51 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0210 12:23:10.815203    5644 command_runner.go:130] >   DiskPressure     Unknown   Mon, 10 Feb 2025 12:18:23 +0000   Mon, 10 Feb 2025 12:22:51 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0210 12:23:10.815203    5644 command_runner.go:130] >   PIDPressure      Unknown   Mon, 10 Feb 2025 12:18:23 +0000   Mon, 10 Feb 2025 12:22:51 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0210 12:23:10.815203    5644 command_runner.go:130] >   Ready            Unknown   Mon, 10 Feb 2025 12:18:23 +0000   Mon, 10 Feb 2025 12:22:51 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0210 12:23:10.815203    5644 command_runner.go:130] > Addresses:
	I0210 12:23:10.815203    5644 command_runner.go:130] >   InternalIP:  172.29.143.51
	I0210 12:23:10.815203    5644 command_runner.go:130] >   Hostname:    multinode-032400-m02
	I0210 12:23:10.815203    5644 command_runner.go:130] > Capacity:
	I0210 12:23:10.815203    5644 command_runner.go:130] >   cpu:                2
	I0210 12:23:10.815203    5644 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0210 12:23:10.815203    5644 command_runner.go:130] >   hugepages-2Mi:      0
	I0210 12:23:10.815203    5644 command_runner.go:130] >   memory:             2164264Ki
	I0210 12:23:10.815203    5644 command_runner.go:130] >   pods:               110
	I0210 12:23:10.815203    5644 command_runner.go:130] > Allocatable:
	I0210 12:23:10.815203    5644 command_runner.go:130] >   cpu:                2
	I0210 12:23:10.815203    5644 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0210 12:23:10.815203    5644 command_runner.go:130] >   hugepages-2Mi:      0
	I0210 12:23:10.815203    5644 command_runner.go:130] >   memory:             2164264Ki
	I0210 12:23:10.815203    5644 command_runner.go:130] >   pods:               110
	I0210 12:23:10.815203    5644 command_runner.go:130] > System Info:
	I0210 12:23:10.815203    5644 command_runner.go:130] >   Machine ID:                 21b7ab6b12a24fe4a174e762e13ffd68
	I0210 12:23:10.815203    5644 command_runner.go:130] >   System UUID:                9f935ab2-d180-914c-86d6-dc00ab51b7e9
	I0210 12:23:10.815203    5644 command_runner.go:130] >   Boot ID:                    a25897cf-a5ca-424f-a707-4b03f1b1442d
	I0210 12:23:10.815203    5644 command_runner.go:130] >   Kernel Version:             5.10.207
	I0210 12:23:10.815203    5644 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0210 12:23:10.815203    5644 command_runner.go:130] >   Operating System:           linux
	I0210 12:23:10.815203    5644 command_runner.go:130] >   Architecture:               amd64
	I0210 12:23:10.815203    5644 command_runner.go:130] >   Container Runtime Version:  docker://27.4.0
	I0210 12:23:10.815203    5644 command_runner.go:130] >   Kubelet Version:            v1.32.1
	I0210 12:23:10.815203    5644 command_runner.go:130] >   Kube-Proxy Version:         v1.32.1
	I0210 12:23:10.815203    5644 command_runner.go:130] > PodCIDR:                      10.244.1.0/24
	I0210 12:23:10.815203    5644 command_runner.go:130] > PodCIDRs:                     10.244.1.0/24
	I0210 12:23:10.815203    5644 command_runner.go:130] > Non-terminated Pods:          (3 in total)
	I0210 12:23:10.815203    5644 command_runner.go:130] >   Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0210 12:23:10.815742    5644 command_runner.go:130] >   ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	I0210 12:23:10.815742    5644 command_runner.go:130] >   default                     busybox-58667487b6-4g8jw    0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	I0210 12:23:10.815742    5644 command_runner.go:130] >   kube-system                 kindnet-tv6gk               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      20m
	I0210 12:23:10.815742    5644 command_runner.go:130] >   kube-system                 kube-proxy-xltxj            0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0210 12:23:10.815742    5644 command_runner.go:130] > Allocated resources:
	I0210 12:23:10.815742    5644 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0210 12:23:10.815822    5644 command_runner.go:130] >   Resource           Requests   Limits
	I0210 12:23:10.815822    5644 command_runner.go:130] >   --------           --------   ------
	I0210 12:23:10.815822    5644 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0210 12:23:10.815822    5644 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0210 12:23:10.815822    5644 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0210 12:23:10.815822    5644 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0210 12:23:10.815880    5644 command_runner.go:130] > Events:
	I0210 12:23:10.815880    5644 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0210 12:23:10.815880    5644 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0210 12:23:10.815880    5644 command_runner.go:130] >   Normal  Starting                 20m                kube-proxy       
	I0210 12:23:10.815880    5644 command_runner.go:130] >   Normal  NodeHasSufficientMemory  20m (x2 over 20m)  kubelet          Node multinode-032400-m02 status is now: NodeHasSufficientMemory
	I0210 12:23:10.815937    5644 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    20m (x2 over 20m)  kubelet          Node multinode-032400-m02 status is now: NodeHasNoDiskPressure
	I0210 12:23:10.815937    5644 command_runner.go:130] >   Normal  NodeHasSufficientPID     20m (x2 over 20m)  kubelet          Node multinode-032400-m02 status is now: NodeHasSufficientPID
	I0210 12:23:10.815996    5644 command_runner.go:130] >   Normal  NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	I0210 12:23:10.816017    5644 command_runner.go:130] >   Normal  RegisteredNode           20m                node-controller  Node multinode-032400-m02 event: Registered Node multinode-032400-m02 in Controller
	I0210 12:23:10.816043    5644 command_runner.go:130] >   Normal  NodeReady                20m                kubelet          Node multinode-032400-m02 status is now: NodeReady
	I0210 12:23:10.816043    5644 command_runner.go:130] >   Normal  RegisteredNode           69s                node-controller  Node multinode-032400-m02 event: Registered Node multinode-032400-m02 in Controller
	I0210 12:23:10.816043    5644 command_runner.go:130] >   Normal  NodeNotReady             19s                node-controller  Node multinode-032400-m02 status is now: NodeNotReady
	I0210 12:23:10.816043    5644 command_runner.go:130] > Name:               multinode-032400-m03
	I0210 12:23:10.816043    5644 command_runner.go:130] > Roles:              <none>
	I0210 12:23:10.816043    5644 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0210 12:23:10.816043    5644 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0210 12:23:10.816043    5644 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0210 12:23:10.816043    5644 command_runner.go:130] >                     kubernetes.io/hostname=multinode-032400-m03
	I0210 12:23:10.816043    5644 command_runner.go:130] >                     kubernetes.io/os=linux
	I0210 12:23:10.816043    5644 command_runner.go:130] >                     minikube.k8s.io/commit=a597502568cd649748018b4cfeb698a4b8b36160
	I0210 12:23:10.816043    5644 command_runner.go:130] >                     minikube.k8s.io/name=multinode-032400
	I0210 12:23:10.816043    5644 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0210 12:23:10.816043    5644 command_runner.go:130] >                     minikube.k8s.io/updated_at=2025_02_10T12_17_31_0700
	I0210 12:23:10.816043    5644 command_runner.go:130] >                     minikube.k8s.io/version=v1.35.0
	I0210 12:23:10.816043    5644 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0210 12:23:10.816043    5644 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0210 12:23:10.816043    5644 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0210 12:23:10.816043    5644 command_runner.go:130] > CreationTimestamp:  Mon, 10 Feb 2025 12:17:31 +0000
	I0210 12:23:10.816043    5644 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0210 12:23:10.816043    5644 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0210 12:23:10.816043    5644 command_runner.go:130] > Unschedulable:      false
	I0210 12:23:10.816043    5644 command_runner.go:130] > Lease:
	I0210 12:23:10.816043    5644 command_runner.go:130] >   HolderIdentity:  multinode-032400-m03
	I0210 12:23:10.816043    5644 command_runner.go:130] >   AcquireTime:     <unset>
	I0210 12:23:10.816043    5644 command_runner.go:130] >   RenewTime:       Mon, 10 Feb 2025 12:18:32 +0000
	I0210 12:23:10.816043    5644 command_runner.go:130] > Conditions:
	I0210 12:23:10.816043    5644 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0210 12:23:10.816043    5644 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0210 12:23:10.816043    5644 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 10 Feb 2025 12:17:46 +0000   Mon, 10 Feb 2025 12:19:27 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0210 12:23:10.816043    5644 command_runner.go:130] >   DiskPressure     Unknown   Mon, 10 Feb 2025 12:17:46 +0000   Mon, 10 Feb 2025 12:19:27 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0210 12:23:10.816043    5644 command_runner.go:130] >   PIDPressure      Unknown   Mon, 10 Feb 2025 12:17:46 +0000   Mon, 10 Feb 2025 12:19:27 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0210 12:23:10.816043    5644 command_runner.go:130] >   Ready            Unknown   Mon, 10 Feb 2025 12:17:46 +0000   Mon, 10 Feb 2025 12:19:27 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0210 12:23:10.816043    5644 command_runner.go:130] > Addresses:
	I0210 12:23:10.816043    5644 command_runner.go:130] >   InternalIP:  172.29.129.10
	I0210 12:23:10.816043    5644 command_runner.go:130] >   Hostname:    multinode-032400-m03
	I0210 12:23:10.816043    5644 command_runner.go:130] > Capacity:
	I0210 12:23:10.816043    5644 command_runner.go:130] >   cpu:                2
	I0210 12:23:10.816043    5644 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0210 12:23:10.816043    5644 command_runner.go:130] >   hugepages-2Mi:      0
	I0210 12:23:10.816043    5644 command_runner.go:130] >   memory:             2164264Ki
	I0210 12:23:10.816043    5644 command_runner.go:130] >   pods:               110
	I0210 12:23:10.816043    5644 command_runner.go:130] > Allocatable:
	I0210 12:23:10.816043    5644 command_runner.go:130] >   cpu:                2
	I0210 12:23:10.816043    5644 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0210 12:23:10.816043    5644 command_runner.go:130] >   hugepages-2Mi:      0
	I0210 12:23:10.816043    5644 command_runner.go:130] >   memory:             2164264Ki
	I0210 12:23:10.816043    5644 command_runner.go:130] >   pods:               110
	I0210 12:23:10.816043    5644 command_runner.go:130] > System Info:
	I0210 12:23:10.816043    5644 command_runner.go:130] >   Machine ID:                 96be2fd73058434db5f7c29ebdc5358c
	I0210 12:23:10.816043    5644 command_runner.go:130] >   System UUID:                d4861806-0b5d-3746-b974-5781fc0e801c
	I0210 12:23:10.816043    5644 command_runner.go:130] >   Boot ID:                    e5a245dc-eb44-45ba-a280-4e410deb107b
	I0210 12:23:10.816564    5644 command_runner.go:130] >   Kernel Version:             5.10.207
	I0210 12:23:10.816564    5644 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0210 12:23:10.816564    5644 command_runner.go:130] >   Operating System:           linux
	I0210 12:23:10.816564    5644 command_runner.go:130] >   Architecture:               amd64
	I0210 12:23:10.816564    5644 command_runner.go:130] >   Container Runtime Version:  docker://27.4.0
	I0210 12:23:10.816564    5644 command_runner.go:130] >   Kubelet Version:            v1.32.1
	I0210 12:23:10.816564    5644 command_runner.go:130] >   Kube-Proxy Version:         v1.32.1
	I0210 12:23:10.816564    5644 command_runner.go:130] > PodCIDR:                      10.244.4.0/24
	I0210 12:23:10.816564    5644 command_runner.go:130] > PodCIDRs:                     10.244.4.0/24
	I0210 12:23:10.816564    5644 command_runner.go:130] > Non-terminated Pods:          (2 in total)
	I0210 12:23:10.816671    5644 command_runner.go:130] >   Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0210 12:23:10.816671    5644 command_runner.go:130] >   ---------                   ----                ------------  ----------  ---------------  -------------  ---
	I0210 12:23:10.816671    5644 command_runner.go:130] >   kube-system                 kindnet-jcmlf       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	I0210 12:23:10.816671    5644 command_runner.go:130] >   kube-system                 kube-proxy-tbtqd    0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	I0210 12:23:10.816745    5644 command_runner.go:130] > Allocated resources:
	I0210 12:23:10.816745    5644 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0210 12:23:10.816745    5644 command_runner.go:130] >   Resource           Requests   Limits
	I0210 12:23:10.816805    5644 command_runner.go:130] >   --------           --------   ------
	I0210 12:23:10.816827    5644 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0210 12:23:10.816827    5644 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0210 12:23:10.816854    5644 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0210 12:23:10.816854    5644 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0210 12:23:10.816854    5644 command_runner.go:130] > Events:
	I0210 12:23:10.816889    5644 command_runner.go:130] >   Type    Reason                   Age                    From             Message
	I0210 12:23:10.816889    5644 command_runner.go:130] >   ----    ------                   ----                   ----             -------
	I0210 12:23:10.816889    5644 command_runner.go:130] >   Normal  Starting                 15m                    kube-proxy       
	I0210 12:23:10.816889    5644 command_runner.go:130] >   Normal  Starting                 5m36s                  kube-proxy       
	I0210 12:23:10.816889    5644 command_runner.go:130] >   Normal  NodeAllocatableEnforced  16m                    kubelet          Updated Node Allocatable limit across pods
	I0210 12:23:10.816965    5644 command_runner.go:130] >   Normal  NodeHasSufficientMemory  16m (x2 over 16m)      kubelet          Node multinode-032400-m03 status is now: NodeHasSufficientMemory
	I0210 12:23:10.816965    5644 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    16m (x2 over 16m)      kubelet          Node multinode-032400-m03 status is now: NodeHasNoDiskPressure
	I0210 12:23:10.816965    5644 command_runner.go:130] >   Normal  NodeHasSufficientPID     16m (x2 over 16m)      kubelet          Node multinode-032400-m03 status is now: NodeHasSufficientPID
	I0210 12:23:10.816965    5644 command_runner.go:130] >   Normal  NodeReady                15m                    kubelet          Node multinode-032400-m03 status is now: NodeReady
	I0210 12:23:10.817025    5644 command_runner.go:130] >   Normal  NodeHasSufficientMemory  5m39s (x2 over 5m40s)  kubelet          Node multinode-032400-m03 status is now: NodeHasSufficientMemory
	I0210 12:23:10.817025    5644 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    5m39s (x2 over 5m40s)  kubelet          Node multinode-032400-m03 status is now: NodeHasNoDiskPressure
	I0210 12:23:10.817025    5644 command_runner.go:130] >   Normal  NodeHasSufficientPID     5m39s (x2 over 5m40s)  kubelet          Node multinode-032400-m03 status is now: NodeHasSufficientPID
	I0210 12:23:10.817084    5644 command_runner.go:130] >   Normal  NodeAllocatableEnforced  5m39s                  kubelet          Updated Node Allocatable limit across pods
	I0210 12:23:10.817132    5644 command_runner.go:130] >   Normal  RegisteredNode           5m38s                  node-controller  Node multinode-032400-m03 event: Registered Node multinode-032400-m03 in Controller
	I0210 12:23:10.817132    5644 command_runner.go:130] >   Normal  NodeReady                5m24s                  kubelet          Node multinode-032400-m03 status is now: NodeReady
	I0210 12:23:10.817132    5644 command_runner.go:130] >   Normal  NodeNotReady             3m43s                  node-controller  Node multinode-032400-m03 status is now: NodeNotReady
	I0210 12:23:10.817132    5644 command_runner.go:130] >   Normal  RegisteredNode           69s                    node-controller  Node multinode-032400-m03 event: Registered Node multinode-032400-m03 in Controller
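Both worker nodes above report Ready=Unknown with reason NodeStatusUnknown and carry the node.kubernetes.io/unreachable taints, i.e. the node controller marked them NotReady after their kubelets stopped posting status. The same view can be pulled by hand; the commands below are illustrative and assume kubectl is already pointed at the multinode-032400 cluster:

    kubectl get nodes -o wide                      # m02 and m03 show NotReady
    kubectl describe node multinode-032400-m02    # same conditions/taints as above
    kubectl describe node multinode-032400-m03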
	I0210 12:23:10.827101    5644 logs.go:123] Gathering logs for kube-apiserver [f368bd876774] ...
	I0210 12:23:10.827101    5644 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f368bd876774"
	I0210 12:23:10.857019    5644 command_runner.go:130] ! W0210 12:21:55.142359       1 registry.go:256] calling componentGlobalsRegistry.AddFlags more than once, the registry will be set by the latest flags
	I0210 12:23:10.857019    5644 command_runner.go:130] ! I0210 12:21:55.145301       1 options.go:238] external host was not specified, using 172.29.129.181
	I0210 12:23:10.857019    5644 command_runner.go:130] ! I0210 12:21:55.152669       1 server.go:143] Version: v1.32.1
	I0210 12:23:10.857019    5644 command_runner.go:130] ! I0210 12:21:55.155205       1 server.go:145] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0210 12:23:10.857101    5644 command_runner.go:130] ! I0210 12:21:56.105409       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0210 12:23:10.857101    5644 command_runner.go:130] ! I0210 12:21:56.132590       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0210 12:23:10.857101    5644 command_runner.go:130] ! I0210 12:21:56.143671       1 plugins.go:157] Loaded 13 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I0210 12:23:10.857188    5644 command_runner.go:130] ! I0210 12:21:56.143842       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0210 12:23:10.857188    5644 command_runner.go:130] ! I0210 12:21:56.149478       1 instance.go:233] Using reconciler: lease
	I0210 12:23:10.857276    5644 command_runner.go:130] ! I0210 12:21:56.242968       1 handler.go:286] Adding GroupVersion apiextensions.k8s.io v1 to ResourceManager
	I0210 12:23:10.857306    5644 command_runner.go:130] ! W0210 12:21:56.243233       1 genericapiserver.go:767] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
	I0210 12:23:10.857306    5644 command_runner.go:130] ! I0210 12:21:56.576352       1 handler.go:286] Adding GroupVersion  v1 to ResourceManager
	I0210 12:23:10.857306    5644 command_runner.go:130] ! I0210 12:21:56.576865       1 apis.go:106] API group "internal.apiserver.k8s.io" is not enabled, skipping.
	I0210 12:23:10.857395    5644 command_runner.go:130] ! I0210 12:21:56.980973       1 apis.go:106] API group "storagemigration.k8s.io" is not enabled, skipping.
	I0210 12:23:10.857395    5644 command_runner.go:130] ! I0210 12:21:57.288861       1 apis.go:106] API group "resource.k8s.io" is not enabled, skipping.
	I0210 12:23:10.857395    5644 command_runner.go:130] ! I0210 12:21:57.344145       1 handler.go:286] Adding GroupVersion authentication.k8s.io v1 to ResourceManager
	I0210 12:23:10.857395    5644 command_runner.go:130] ! W0210 12:21:57.344213       1 genericapiserver.go:767] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
	I0210 12:23:10.857395    5644 command_runner.go:130] ! W0210 12:21:57.344222       1 genericapiserver.go:767] Skipping API authentication.k8s.io/v1alpha1 because it has no resources.
	I0210 12:23:10.857467    5644 command_runner.go:130] ! I0210 12:21:57.345004       1 handler.go:286] Adding GroupVersion authorization.k8s.io v1 to ResourceManager
	I0210 12:23:10.857467    5644 command_runner.go:130] ! W0210 12:21:57.345107       1 genericapiserver.go:767] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
	I0210 12:23:10.857467    5644 command_runner.go:130] ! I0210 12:21:57.346842       1 handler.go:286] Adding GroupVersion autoscaling v2 to ResourceManager
	I0210 12:23:10.857611    5644 command_runner.go:130] ! I0210 12:21:57.348477       1 handler.go:286] Adding GroupVersion autoscaling v1 to ResourceManager
	I0210 12:23:10.857611    5644 command_runner.go:130] ! W0210 12:21:57.349989       1 genericapiserver.go:767] Skipping API autoscaling/v2beta1 because it has no resources.
	I0210 12:23:10.857611    5644 command_runner.go:130] ! W0210 12:21:57.349999       1 genericapiserver.go:767] Skipping API autoscaling/v2beta2 because it has no resources.
	I0210 12:23:10.857611    5644 command_runner.go:130] ! I0210 12:21:57.351719       1 handler.go:286] Adding GroupVersion batch v1 to ResourceManager
	I0210 12:23:10.857611    5644 command_runner.go:130] ! W0210 12:21:57.351750       1 genericapiserver.go:767] Skipping API batch/v1beta1 because it has no resources.
	I0210 12:23:10.858103    5644 command_runner.go:130] ! I0210 12:21:57.352799       1 handler.go:286] Adding GroupVersion certificates.k8s.io v1 to ResourceManager
	I0210 12:23:10.858103    5644 command_runner.go:130] ! W0210 12:21:57.352837       1 genericapiserver.go:767] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
	I0210 12:23:10.858103    5644 command_runner.go:130] ! W0210 12:21:57.352843       1 genericapiserver.go:767] Skipping API certificates.k8s.io/v1alpha1 because it has no resources.
	I0210 12:23:10.858103    5644 command_runner.go:130] ! I0210 12:21:57.353578       1 handler.go:286] Adding GroupVersion coordination.k8s.io v1 to ResourceManager
	I0210 12:23:10.858175    5644 command_runner.go:130] ! W0210 12:21:57.353613       1 genericapiserver.go:767] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
	I0210 12:23:10.858175    5644 command_runner.go:130] ! W0210 12:21:57.353620       1 genericapiserver.go:767] Skipping API coordination.k8s.io/v1alpha2 because it has no resources.
	I0210 12:23:10.858175    5644 command_runner.go:130] ! I0210 12:21:57.354314       1 handler.go:286] Adding GroupVersion discovery.k8s.io v1 to ResourceManager
	I0210 12:23:10.858249    5644 command_runner.go:130] ! W0210 12:21:57.354346       1 genericapiserver.go:767] Skipping API discovery.k8s.io/v1beta1 because it has no resources.
	I0210 12:23:10.858249    5644 command_runner.go:130] ! I0210 12:21:57.356000       1 handler.go:286] Adding GroupVersion networking.k8s.io v1 to ResourceManager
	I0210 12:23:10.858249    5644 command_runner.go:130] ! W0210 12:21:57.356105       1 genericapiserver.go:767] Skipping API networking.k8s.io/v1beta1 because it has no resources.
	I0210 12:23:10.858249    5644 command_runner.go:130] ! W0210 12:21:57.356115       1 genericapiserver.go:767] Skipping API networking.k8s.io/v1alpha1 because it has no resources.
	I0210 12:23:10.858249    5644 command_runner.go:130] ! I0210 12:21:57.356604       1 handler.go:286] Adding GroupVersion node.k8s.io v1 to ResourceManager
	I0210 12:23:10.858346    5644 command_runner.go:130] ! W0210 12:21:57.356637       1 genericapiserver.go:767] Skipping API node.k8s.io/v1beta1 because it has no resources.
	I0210 12:23:10.858346    5644 command_runner.go:130] ! W0210 12:21:57.356644       1 genericapiserver.go:767] Skipping API node.k8s.io/v1alpha1 because it has no resources.
	I0210 12:23:10.858346    5644 command_runner.go:130] ! I0210 12:21:57.357607       1 handler.go:286] Adding GroupVersion policy v1 to ResourceManager
	I0210 12:23:10.858346    5644 command_runner.go:130] ! W0210 12:21:57.357643       1 genericapiserver.go:767] Skipping API policy/v1beta1 because it has no resources.
	I0210 12:23:10.858346    5644 command_runner.go:130] ! I0210 12:21:57.359912       1 handler.go:286] Adding GroupVersion rbac.authorization.k8s.io v1 to ResourceManager
	I0210 12:23:10.858419    5644 command_runner.go:130] ! W0210 12:21:57.359944       1 genericapiserver.go:767] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
	I0210 12:23:10.858419    5644 command_runner.go:130] ! W0210 12:21:57.359952       1 genericapiserver.go:767] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
	I0210 12:23:10.858419    5644 command_runner.go:130] ! I0210 12:21:57.360554       1 handler.go:286] Adding GroupVersion scheduling.k8s.io v1 to ResourceManager
	I0210 12:23:10.858482    5644 command_runner.go:130] ! W0210 12:21:57.360628       1 genericapiserver.go:767] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
	I0210 12:23:10.858482    5644 command_runner.go:130] ! W0210 12:21:57.360635       1 genericapiserver.go:767] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
	I0210 12:23:10.858482    5644 command_runner.go:130] ! I0210 12:21:57.363612       1 handler.go:286] Adding GroupVersion storage.k8s.io v1 to ResourceManager
	I0210 12:23:10.858482    5644 command_runner.go:130] ! W0210 12:21:57.363646       1 genericapiserver.go:767] Skipping API storage.k8s.io/v1beta1 because it has no resources.
	I0210 12:23:10.858550    5644 command_runner.go:130] ! W0210 12:21:57.363653       1 genericapiserver.go:767] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
	I0210 12:23:10.858550    5644 command_runner.go:130] ! I0210 12:21:57.365567       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1 to ResourceManager
	I0210 12:23:10.858550    5644 command_runner.go:130] ! W0210 12:21:57.365626       1 genericapiserver.go:767] Skipping API flowcontrol.apiserver.k8s.io/v1beta3 because it has no resources.
	I0210 12:23:10.858550    5644 command_runner.go:130] ! W0210 12:21:57.365637       1 genericapiserver.go:767] Skipping API flowcontrol.apiserver.k8s.io/v1beta2 because it has no resources.
	I0210 12:23:10.858619    5644 command_runner.go:130] ! W0210 12:21:57.365642       1 genericapiserver.go:767] Skipping API flowcontrol.apiserver.k8s.io/v1beta1 because it has no resources.
	I0210 12:23:10.858619    5644 command_runner.go:130] ! I0210 12:21:57.371693       1 handler.go:286] Adding GroupVersion apps v1 to ResourceManager
	I0210 12:23:10.858619    5644 command_runner.go:130] ! W0210 12:21:57.371726       1 genericapiserver.go:767] Skipping API apps/v1beta2 because it has no resources.
	I0210 12:23:10.858619    5644 command_runner.go:130] ! W0210 12:21:57.371732       1 genericapiserver.go:767] Skipping API apps/v1beta1 because it has no resources.
	I0210 12:23:10.858687    5644 command_runner.go:130] ! I0210 12:21:57.374238       1 handler.go:286] Adding GroupVersion admissionregistration.k8s.io v1 to ResourceManager
	I0210 12:23:10.858687    5644 command_runner.go:130] ! W0210 12:21:57.374275       1 genericapiserver.go:767] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
	I0210 12:23:10.858687    5644 command_runner.go:130] ! W0210 12:21:57.374303       1 genericapiserver.go:767] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
	I0210 12:23:10.858687    5644 command_runner.go:130] ! I0210 12:21:57.375143       1 handler.go:286] Adding GroupVersion events.k8s.io v1 to ResourceManager
	I0210 12:23:10.858756    5644 command_runner.go:130] ! W0210 12:21:57.375210       1 genericapiserver.go:767] Skipping API events.k8s.io/v1beta1 because it has no resources.
	I0210 12:23:10.858756    5644 command_runner.go:130] ! I0210 12:21:57.389235       1 handler.go:286] Adding GroupVersion apiregistration.k8s.io v1 to ResourceManager
	I0210 12:23:10.858823    5644 command_runner.go:130] ! W0210 12:21:57.389296       1 genericapiserver.go:767] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
	I0210 12:23:10.858823    5644 command_runner.go:130] ! I0210 12:21:58.039635       1 secure_serving.go:213] Serving securely on [::]:8443
	I0210 12:23:10.858823    5644 command_runner.go:130] ! I0210 12:21:58.039773       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0210 12:23:10.858890    5644 command_runner.go:130] ! I0210 12:21:58.040121       1 dynamic_serving_content.go:135] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0210 12:23:10.858890    5644 command_runner.go:130] ! I0210 12:21:58.040710       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0210 12:23:10.858890    5644 command_runner.go:130] ! I0210 12:21:58.048362       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0210 12:23:10.858890    5644 command_runner.go:130] ! I0210 12:21:58.048918       1 customresource_discovery_controller.go:292] Starting DiscoveryController
	I0210 12:23:10.858957    5644 command_runner.go:130] ! I0210 12:21:58.049825       1 cluster_authentication_trust_controller.go:462] Starting cluster_authentication_trust_controller controller
	I0210 12:23:10.858957    5644 command_runner.go:130] ! I0210 12:21:58.049971       1 shared_informer.go:313] Waiting for caches to sync for cluster_authentication_trust_controller
	I0210 12:23:10.859021    5644 command_runner.go:130] ! I0210 12:21:58.052014       1 local_available_controller.go:156] Starting LocalAvailability controller
	I0210 12:23:10.859021    5644 command_runner.go:130] ! I0210 12:21:58.052237       1 cache.go:32] Waiting for caches to sync for LocalAvailability controller
	I0210 12:23:10.859021    5644 command_runner.go:130] ! I0210 12:21:58.052355       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0210 12:23:10.859021    5644 command_runner.go:130] ! I0210 12:21:58.052595       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0210 12:23:10.859092    5644 command_runner.go:130] ! I0210 12:21:58.052911       1 controller.go:78] Starting OpenAPI AggregationController
	I0210 12:23:10.859092    5644 command_runner.go:130] ! I0210 12:21:58.053131       1 controller.go:119] Starting legacy_token_tracking_controller
	I0210 12:23:10.859092    5644 command_runner.go:130] ! I0210 12:21:58.053221       1 shared_informer.go:313] Waiting for caches to sync for configmaps
	I0210 12:23:10.859092    5644 command_runner.go:130] ! I0210 12:21:58.053335       1 apf_controller.go:377] Starting API Priority and Fairness config controller
	I0210 12:23:10.859092    5644 command_runner.go:130] ! I0210 12:21:58.053483       1 remote_available_controller.go:411] Starting RemoteAvailability controller
	I0210 12:23:10.859157    5644 command_runner.go:130] ! I0210 12:21:58.053515       1 cache.go:32] Waiting for caches to sync for RemoteAvailability controller
	I0210 12:23:10.859157    5644 command_runner.go:130] ! I0210 12:21:58.053696       1 dynamic_serving_content.go:135] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0210 12:23:10.859223    5644 command_runner.go:130] ! I0210 12:21:58.054087       1 system_namespaces_controller.go:66] Starting system namespaces controller
	I0210 12:23:10.859223    5644 command_runner.go:130] ! I0210 12:21:58.054528       1 apiservice_controller.go:100] Starting APIServiceRegistrationController
	I0210 12:23:10.859223    5644 command_runner.go:130] ! I0210 12:21:58.054570       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0210 12:23:10.859223    5644 command_runner.go:130] ! I0210 12:21:58.054742       1 aggregator.go:169] waiting for initial CRD sync...
	I0210 12:23:10.859294    5644 command_runner.go:130] ! I0210 12:21:58.055217       1 controller.go:142] Starting OpenAPI controller
	I0210 12:23:10.859312    5644 command_runner.go:130] ! I0210 12:21:58.055546       1 controller.go:90] Starting OpenAPI V3 controller
	I0210 12:23:10.859312    5644 command_runner.go:130] ! I0210 12:21:58.055757       1 naming_controller.go:294] Starting NamingConditionController
	I0210 12:23:10.859312    5644 command_runner.go:130] ! I0210 12:21:58.056074       1 establishing_controller.go:81] Starting EstablishingController
	I0210 12:23:10.859312    5644 command_runner.go:130] ! I0210 12:21:58.056264       1 nonstructuralschema_controller.go:195] Starting NonStructuralSchemaConditionController
	I0210 12:23:10.859382    5644 command_runner.go:130] ! I0210 12:21:58.056315       1 apiapproval_controller.go:189] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0210 12:23:10.859382    5644 command_runner.go:130] ! I0210 12:21:58.056330       1 crd_finalizer.go:269] Starting CRDFinalizer
	I0210 12:23:10.859382    5644 command_runner.go:130] ! I0210 12:21:58.056364       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0210 12:23:10.859382    5644 command_runner.go:130] ! I0210 12:21:58.056531       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0210 12:23:10.859450    5644 command_runner.go:130] ! I0210 12:21:58.082011       1 crdregistration_controller.go:114] Starting crd-autoregister controller
	I0210 12:23:10.859450    5644 command_runner.go:130] ! I0210 12:21:58.082050       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0210 12:23:10.859450    5644 command_runner.go:130] ! I0210 12:21:58.191638       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0210 12:23:10.859513    5644 command_runner.go:130] ! I0210 12:21:58.191858       1 policy_source.go:240] refreshing policies
	I0210 12:23:10.859513    5644 command_runner.go:130] ! I0210 12:21:58.208845       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0210 12:23:10.859513    5644 command_runner.go:130] ! I0210 12:21:58.215564       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0210 12:23:10.859584    5644 command_runner.go:130] ! I0210 12:21:58.251691       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0210 12:23:10.859584    5644 command_runner.go:130] ! I0210 12:21:58.252469       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0210 12:23:10.859584    5644 command_runner.go:130] ! I0210 12:21:58.253733       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0210 12:23:10.859584    5644 command_runner.go:130] ! I0210 12:21:58.254978       1 shared_informer.go:320] Caches are synced for configmaps
	I0210 12:23:10.859648    5644 command_runner.go:130] ! I0210 12:21:58.255116       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0210 12:23:10.859648    5644 command_runner.go:130] ! I0210 12:21:58.255193       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0210 12:23:10.859648    5644 command_runner.go:130] ! I0210 12:21:58.258943       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0210 12:23:10.859648    5644 command_runner.go:130] ! I0210 12:21:58.265934       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0210 12:23:10.859713    5644 command_runner.go:130] ! I0210 12:21:58.282090       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0210 12:23:10.859713    5644 command_runner.go:130] ! I0210 12:21:58.282373       1 aggregator.go:171] initial CRD sync complete...
	I0210 12:23:10.859713    5644 command_runner.go:130] ! I0210 12:21:58.282566       1 autoregister_controller.go:144] Starting autoregister controller
	I0210 12:23:10.859713    5644 command_runner.go:130] ! I0210 12:21:58.282687       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0210 12:23:10.859713    5644 command_runner.go:130] ! I0210 12:21:58.282810       1 cache.go:39] Caches are synced for autoregister controller
	I0210 12:23:10.859786    5644 command_runner.go:130] ! I0210 12:21:59.040333       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0210 12:23:10.859786    5644 command_runner.go:130] ! I0210 12:21:59.110768       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0210 12:23:10.859786    5644 command_runner.go:130] ! W0210 12:21:59.681925       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.29.129.181]
	I0210 12:23:10.859854    5644 command_runner.go:130] ! I0210 12:21:59.683607       1 controller.go:615] quota admission added evaluator for: endpoints
	I0210 12:23:10.859854    5644 command_runner.go:130] ! I0210 12:21:59.696506       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0210 12:23:10.859854    5644 command_runner.go:130] ! I0210 12:22:01.024466       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0210 12:23:10.859918    5644 command_runner.go:130] ! I0210 12:22:01.275329       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0210 12:23:10.859918    5644 command_runner.go:130] ! I0210 12:22:01.581954       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0210 12:23:10.859918    5644 command_runner.go:130] ! I0210 12:22:01.624349       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0210 12:23:10.859918    5644 command_runner.go:130] ! I0210 12:22:01.647929       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0210 12:23:13.369944    5644 api_server.go:253] Checking apiserver healthz at https://172.29.129.181:8443/healthz ...
	I0210 12:23:13.381445    5644 api_server.go:279] https://172.29.129.181:8443/healthz returned 200:
	ok
	I0210 12:23:13.381779    5644 discovery_client.go:658] "Request Body" body=""
	I0210 12:23:13.381869    5644 round_trippers.go:470] GET https://172.29.129.181:8443/version
	I0210 12:23:13.381869    5644 round_trippers.go:476] Request Headers:
	I0210 12:23:13.381869    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:23:13.381869    5644 round_trippers.go:480]     Accept: application/json, */*
	I0210 12:23:13.383308    5644 round_trippers.go:581] Response Status: 200 OK in 1 milliseconds
	I0210 12:23:13.383348    5644 round_trippers.go:584] Response Headers:
	I0210 12:23:13.383370    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:23:13 GMT
	I0210 12:23:13.383370    5644 round_trippers.go:587]     Audit-Id: cab52939-882c-4f1b-a25c-e9ab6bc73e40
	I0210 12:23:13.383370    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:23:13.383370    5644 round_trippers.go:587]     Content-Type: application/json
	I0210 12:23:13.383370    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:23:13.383370    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:23:13.383370    5644 round_trippers.go:587]     Content-Length: 263
	I0210 12:23:13.383370    5644 discovery_client.go:658] "Response Body" body=<
		{
		  "major": "1",
		  "minor": "32",
		  "gitVersion": "v1.32.1",
		  "gitCommit": "e9c9be4007d1664e68796af02b8978640d2c1b26",
		  "gitTreeState": "clean",
		  "buildDate": "2025-01-15T14:31:55Z",
		  "goVersion": "go1.23.4",
		  "compiler": "gc",
		  "platform": "linux/amd64"
		}
	 >
	I0210 12:23:13.383370    5644 api_server.go:141] control plane version: v1.32.1
	I0210 12:23:13.383370    5644 api_server.go:131] duration metric: took 3.7154241s to wait for apiserver health ...
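The health wait above polls the apiserver's /healthz endpoint until it returns HTTP 200 with body "ok", then reads /version to confirm the control-plane version. Both endpoints are readable anonymously in a default cluster (via the system:public-info-viewer ClusterRole), so an illustrative manual check against the same VM IP would be:

    curl -k https://172.29.129.181:8443/healthz    # expect: ok
    curl -k https://172.29.129.181:8443/version    # expect the JSON payload shown above

(-k is needed because the minikube-generated CA is not in the host trust store.)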
	I0210 12:23:13.383370    5644 system_pods.go:43] waiting for kube-system pods to appear ...
	I0210 12:23:13.390216    5644 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0210 12:23:13.421102    5644 command_runner.go:130] > f368bd876774
	I0210 12:23:13.421102    5644 logs.go:282] 1 containers: [f368bd876774]
	I0210 12:23:13.428840    5644 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0210 12:23:13.451060    5644 command_runner.go:130] > 2c0b97381825
	I0210 12:23:13.452855    5644 logs.go:282] 1 containers: [2c0b97381825]
	I0210 12:23:13.460558    5644 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0210 12:23:13.491429    5644 command_runner.go:130] > 9240ce80f94c
	I0210 12:23:13.491512    5644 command_runner.go:130] > c5b854dbb912
	I0210 12:23:13.491551    5644 logs.go:282] 2 containers: [9240ce80f94c c5b854dbb912]
	I0210 12:23:13.498691    5644 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0210 12:23:13.525667    5644 command_runner.go:130] > 440b6adf4512
	I0210 12:23:13.525667    5644 command_runner.go:130] > adf520f9b9d7
	I0210 12:23:13.525667    5644 logs.go:282] 2 containers: [440b6adf4512 adf520f9b9d7]
	I0210 12:23:13.532985    5644 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0210 12:23:13.559926    5644 command_runner.go:130] > 6640b4e3d696
	I0210 12:23:13.559926    5644 command_runner.go:130] > 148309413de8
	I0210 12:23:13.560485    5644 logs.go:282] 2 containers: [6640b4e3d696 148309413de8]
	I0210 12:23:13.567497    5644 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0210 12:23:13.593289    5644 command_runner.go:130] > bd1666238ae6
	I0210 12:23:13.594118    5644 command_runner.go:130] > 9408ce83d7d3
	I0210 12:23:13.594186    5644 logs.go:282] 2 containers: [bd1666238ae6 9408ce83d7d3]
	I0210 12:23:13.601618    5644 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0210 12:23:13.629494    5644 command_runner.go:130] > efc2d4164d81
	I0210 12:23:13.629585    5644 command_runner.go:130] > 4439940fa5f4
	I0210 12:23:13.629585    5644 logs.go:282] 2 containers: [efc2d4164d81 4439940fa5f4]
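The seven docker ps runs above are the same discovery step repeated once per component: filter on the kubelet's k8s_<name> container-name prefix and print only the container ID. Condensed into one illustrative loop (assumes shell access to the node's Docker daemon):

    for c in kube-apiserver etcd coredns kube-scheduler \
             kube-proxy kube-controller-manager kindnet; do
      docker ps -a --filter=name=k8s_$c --format='{{.ID}}'
    done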
	I0210 12:23:13.629585    5644 logs.go:123] Gathering logs for kindnet [efc2d4164d81] ...
	I0210 12:23:13.629585    5644 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efc2d4164d81"
	I0210 12:23:13.658280    5644 command_runner.go:130] ! I0210 12:22:00.982083       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0210 12:23:13.658671    5644 command_runner.go:130] ! I0210 12:22:00.988632       1 main.go:139] hostIP = 172.29.129.181
	I0210 12:23:13.658671    5644 command_runner.go:130] ! podIP = 172.29.129.181
	I0210 12:23:13.658711    5644 command_runner.go:130] ! I0210 12:22:00.988765       1 main.go:148] setting mtu 1500 for CNI 
	I0210 12:23:13.658738    5644 command_runner.go:130] ! I0210 12:22:00.988782       1 main.go:178] kindnetd IP family: "ipv4"
	I0210 12:23:13.658738    5644 command_runner.go:130] ! I0210 12:22:00.988794       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I0210 12:23:13.658738    5644 command_runner.go:130] ! I0210 12:22:01.772362       1 main.go:239] Error creating network policy controller: could not run nftables command: /dev/stdin:1:1-40: Error: Could not process rule: Operation not supported
	I0210 12:23:13.658738    5644 command_runner.go:130] ! add table inet kindnet-network-policies
	I0210 12:23:13.658738    5644 command_runner.go:130] ! ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
	I0210 12:23:13.658738    5644 command_runner.go:130] ! , skipping network policies
	I0210 12:23:13.658738    5644 command_runner.go:130] ! W0210 12:22:31.784106       1 reflector.go:561] pkg/mod/k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0210 12:23:13.658738    5644 command_runner.go:130] ! E0210 12:22:31.784373       1 reflector.go:158] "Unhandled Error" err="pkg/mod/k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError"
	I0210 12:23:13.658738    5644 command_runner.go:130] ! I0210 12:22:41.780982       1 main.go:297] Handling node with IPs: map[172.29.129.181:{}]
	I0210 12:23:13.658738    5644 command_runner.go:130] ! I0210 12:22:41.781097       1 main.go:301] handling current node
	I0210 12:23:13.658738    5644 command_runner.go:130] ! I0210 12:22:41.782315       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.658738    5644 command_runner.go:130] ! I0210 12:22:41.782348       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.658738    5644 command_runner.go:130] ! I0210 12:22:41.782670       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 172.29.143.51 Flags: [] Table: 0 Realm: 0} 
	I0210 12:23:13.658738    5644 command_runner.go:130] ! I0210 12:22:41.783201       1 main.go:297] Handling node with IPs: map[172.29.129.10:{}]
	I0210 12:23:13.658738    5644 command_runner.go:130] ! I0210 12:22:41.783373       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.4.0/24] 
	I0210 12:23:13.658738    5644 command_runner.go:130] ! I0210 12:22:41.784331       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.4.0/24 Src: <nil> Gw: 172.29.129.10 Flags: [] Table: 0 Realm: 0} 
	I0210 12:23:13.658738    5644 command_runner.go:130] ! I0210 12:22:51.774234       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.658738    5644 command_runner.go:130] ! I0210 12:22:51.774354       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.658738    5644 command_runner.go:130] ! I0210 12:22:51.774813       1 main.go:297] Handling node with IPs: map[172.29.129.10:{}]
	I0210 12:23:13.658738    5644 command_runner.go:130] ! I0210 12:22:51.774839       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.4.0/24] 
	I0210 12:23:13.658738    5644 command_runner.go:130] ! I0210 12:22:51.775059       1 main.go:297] Handling node with IPs: map[172.29.129.181:{}]
	I0210 12:23:13.658738    5644 command_runner.go:130] ! I0210 12:22:51.775140       1 main.go:301] handling current node
	I0210 12:23:13.658738    5644 command_runner.go:130] ! I0210 12:23:01.774212       1 main.go:297] Handling node with IPs: map[172.29.129.181:{}]
	I0210 12:23:13.658738    5644 command_runner.go:130] ! I0210 12:23:01.774322       1 main.go:301] handling current node
	I0210 12:23:13.658738    5644 command_runner.go:130] ! I0210 12:23:01.774342       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.658738    5644 command_runner.go:130] ! I0210 12:23:01.774349       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.658738    5644 command_runner.go:130] ! I0210 12:23:01.774804       1 main.go:297] Handling node with IPs: map[172.29.129.10:{}]
	I0210 12:23:13.658738    5644 command_runner.go:130] ! I0210 12:23:01.774919       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.4.0/24] 
	I0210 12:23:13.658738    5644 command_runner.go:130] ! I0210 12:23:11.781644       1 main.go:297] Handling node with IPs: map[172.29.129.10:{}]
	I0210 12:23:13.658738    5644 command_runner.go:130] ! I0210 12:23:11.781815       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.4.0/24] 
	I0210 12:23:13.658738    5644 command_runner.go:130] ! I0210 12:23:11.782562       1 main.go:297] Handling node with IPs: map[172.29.129.181:{}]
	I0210 12:23:13.659268    5644 command_runner.go:130] ! I0210 12:23:11.782912       1 main.go:301] handling current node
	I0210 12:23:13.659268    5644 command_runner.go:130] ! I0210 12:23:11.783348       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.659313    5644 command_runner.go:130] ! I0210 12:23:11.783495       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
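The kindnet log above shows the CNI agent reconciling every ten seconds: for the current node it just logs "handling current node", and for each peer node it installs a route sending that node's PodCIDR via the node's InternalIP. On this host, the two "Adding route" entries are roughly equivalent to the following (illustrative, not kindnet's actual code path):

    ip route replace 10.244.1.0/24 via 172.29.143.51   # multinode-032400-m02
    ip route replace 10.244.4.0/24 via 172.29.129.10   # multinode-032400-m03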
	I0210 12:23:13.662159    5644 logs.go:123] Gathering logs for kubelet ...
	I0210 12:23:13.662734    5644 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 12:23:13.694653    5644 command_runner.go:130] > Feb 10 12:21:49 multinode-032400 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0210 12:23:13.694653    5644 command_runner.go:130] > Feb 10 12:21:49 multinode-032400 kubelet[1505]: I0210 12:21:49.803865    1505 server.go:520] "Kubelet version" kubeletVersion="v1.32.1"
	I0210 12:23:13.694757    5644 command_runner.go:130] > Feb 10 12:21:49 multinode-032400 kubelet[1505]: I0210 12:21:49.804150    1505 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0210 12:23:13.694757    5644 command_runner.go:130] > Feb 10 12:21:49 multinode-032400 kubelet[1505]: I0210 12:21:49.806616    1505 server.go:954] "Client rotation is on, will bootstrap in background"
	I0210 12:23:13.694757    5644 command_runner.go:130] > Feb 10 12:21:49 multinode-032400 kubelet[1505]: E0210 12:21:49.806785    1505 run.go:72] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0210 12:23:13.694757    5644 command_runner.go:130] > Feb 10 12:21:49 multinode-032400 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0210 12:23:13.694757    5644 command_runner.go:130] > Feb 10 12:21:49 multinode-032400 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0210 12:23:13.694757    5644 command_runner.go:130] > Feb 10 12:21:50 multinode-032400 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	I0210 12:23:13.694757    5644 command_runner.go:130] > Feb 10 12:21:50 multinode-032400 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0210 12:23:13.694879    5644 command_runner.go:130] > Feb 10 12:21:50 multinode-032400 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0210 12:23:13.694879    5644 command_runner.go:130] > Feb 10 12:21:50 multinode-032400 kubelet[1561]: I0210 12:21:50.532407    1561 server.go:520] "Kubelet version" kubeletVersion="v1.32.1"
	I0210 12:23:13.694946    5644 command_runner.go:130] > Feb 10 12:21:50 multinode-032400 kubelet[1561]: I0210 12:21:50.532561    1561 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0210 12:23:13.694946    5644 command_runner.go:130] > Feb 10 12:21:50 multinode-032400 kubelet[1561]: I0210 12:21:50.532946    1561 server.go:954] "Client rotation is on, will bootstrap in background"
	I0210 12:23:13.694986    5644 command_runner.go:130] > Feb 10 12:21:50 multinode-032400 kubelet[1561]: E0210 12:21:50.533006    1561 run.go:72] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0210 12:23:13.694986    5644 command_runner.go:130] > Feb 10 12:21:50 multinode-032400 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0210 12:23:13.694986    5644 command_runner.go:130] > Feb 10 12:21:50 multinode-032400 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0210 12:23:13.695054    5644 command_runner.go:130] > Feb 10 12:21:50 multinode-032400 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0210 12:23:13.695070    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0210 12:23:13.695070    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.804000    1648 server.go:520] "Kubelet version" kubeletVersion="v1.32.1"
	I0210 12:23:13.695070    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.804091    1648 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0210 12:23:13.695070    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.807532    1648 server.go:954] "Client rotation is on, will bootstrap in background"
	I0210 12:23:13.695162    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.810518    1648 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	I0210 12:23:13.695162    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.831401    1648 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0210 12:23:13.695162    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: E0210 12:21:52.849603    1648 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
	I0210 12:23:13.695162    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.849766    1648 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
	I0210 12:23:13.695268    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.855712    1648 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
	I0210 12:23:13.695268    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.855847    1648 server.go:841] "NoSwap is set due to memorySwapBehavior not specified" memorySwapBehavior="" FailSwapOn=false
	I0210 12:23:13.695343    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.857145    1648 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	I0210 12:23:13.695420    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.857321    1648 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"multinode-032400","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
	I0210 12:23:13.695420    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.857850    1648 topology_manager.go:138] "Creating topology manager with none policy"
	I0210 12:23:13.695420    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.857944    1648 container_manager_linux.go:304] "Creating device plugin manager"
	I0210 12:23:13.695420    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.858196    1648 state_mem.go:36] "Initialized new in-memory state store"
	I0210 12:23:13.695494    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.860593    1648 kubelet.go:446] "Attempting to sync node with API server"
	I0210 12:23:13.695494    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.860751    1648 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
	I0210 12:23:13.695494    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.860860    1648 kubelet.go:352] "Adding apiserver pod source"
	I0210 12:23:13.695494    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.860954    1648 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	I0210 12:23:13.695567    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.866997    1648 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="docker" version="27.4.0" apiVersion="v1"
	I0210 12:23:13.695567    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: W0210 12:21:52.869638    1648 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-032400&limit=500&resourceVersion=0": dial tcp 172.29.129.181:8443: connect: connection refused
	I0210 12:23:13.695642    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: E0210 12:21:52.869825    1648 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-032400&limit=500&resourceVersion=0\": dial tcp 172.29.129.181:8443: connect: connection refused" logger="UnhandledError"
	I0210 12:23:13.695642    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.872904    1648 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
	I0210 12:23:13.695642    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: W0210 12:21:52.873510    1648 probe.go:272] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
	I0210 12:23:13.695715    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: W0210 12:21:52.885546    1648 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.29.129.181:8443: connect: connection refused
	I0210 12:23:13.695715    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: E0210 12:21:52.885641    1648 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.29.129.181:8443: connect: connection refused" logger="UnhandledError"
	I0210 12:23:13.695789    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.886839    1648 watchdog_linux.go:99] "Systemd watchdog is not enabled"
	I0210 12:23:13.695789    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.886957    1648 server.go:1287] "Started kubelet"
	I0210 12:23:13.695830    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.895251    1648 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
	I0210 12:23:13.695875    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.897245    1648 server.go:490] "Adding debug handlers to kubelet server"
	I0210 12:23:13.695875    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.899864    1648 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
	I0210 12:23:13.695875    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.900113    1648 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
	I0210 12:23:13.695875    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.900986    1648 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	I0210 12:23:13.695939    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.901519    1648 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
	I0210 12:23:13.696013    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: E0210 12:21:52.904529    1648 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 172.29.129.181:8443: connect: connection refused" event="&Event{ObjectMeta:{multinode-032400.1822d8316b7ef394  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:multinode-032400,UID:multinode-032400,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:multinode-032400,},FirstTimestamp:2025-02-10 12:21:52.886911892 +0000 UTC m=+0.168917533,LastTimestamp:2025-02-10 12:21:52.886911892 +0000 UTC m=+0.168917533,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:multinode-032400,}"
	I0210 12:23:13.696047    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.918528    1648 volume_manager.go:297] "Starting Kubelet Volume Manager"
	I0210 12:23:13.696075    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: E0210 12:21:52.918989    1648 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"multinode-032400\" not found"
	I0210 12:23:13.696075    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.920907    1648 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
	I0210 12:23:13.696075    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.932441    1648 reconciler.go:26] "Reconciler: start to sync state"
	I0210 12:23:13.696075    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: E0210 12:21:52.940004    1648 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-032400?timeout=10s\": dial tcp 172.29.129.181:8443: connect: connection refused" interval="200ms"
	I0210 12:23:13.696075    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.943065    1648 factory.go:221] Registration of the systemd container factory successfully
	I0210 12:23:13.696075    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.943251    1648 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
	I0210 12:23:13.696075    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.943289    1648 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
	I0210 12:23:13.696075    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: W0210 12:21:52.954939    1648 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.29.129.181:8443: connect: connection refused
	I0210 12:23:13.696075    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: E0210 12:21:52.956281    1648 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.29.129.181:8443: connect: connection refused" logger="UnhandledError"
	I0210 12:23:13.696075    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.962018    1648 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
	I0210 12:23:13.696075    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.981120    1648 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
	I0210 12:23:13.696075    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.981191    1648 status_manager.go:227] "Starting to sync pod status with apiserver"
	I0210 12:23:13.696075    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.981212    1648 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
	I0210 12:23:13.696075    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.981234    1648 kubelet.go:2388] "Starting kubelet main sync loop"
	I0210 12:23:13.696075    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: E0210 12:21:52.981274    1648 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
	I0210 12:23:13.696075    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: W0210 12:21:52.985240    1648 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.29.129.181:8443: connect: connection refused
	I0210 12:23:13.696075    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: E0210 12:21:52.985423    1648 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.29.129.181:8443: connect: connection refused" logger="UnhandledError"
	I0210 12:23:13.696075    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.986221    1648 cpu_manager.go:221] "Starting CPU manager" policy="none"
	I0210 12:23:13.696075    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.986328    1648 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
	I0210 12:23:13.696075    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.986418    1648 state_mem.go:36] "Initialized new in-memory state store"
	I0210 12:23:13.696075    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.988035    1648 state_mem.go:88] "Updated default CPUSet" cpuSet=""
	I0210 12:23:13.696075    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.988140    1648 state_mem.go:96] "Updated CPUSet assignments" assignments={}
	I0210 12:23:13.696075    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.988290    1648 policy_none.go:49] "None policy: Start"
	I0210 12:23:13.696075    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.988339    1648 memory_manager.go:186] "Starting memorymanager" policy="None"
	I0210 12:23:13.696075    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.988429    1648 state_mem.go:35] "Initializing new in-memory state store"
	I0210 12:23:13.696075    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.989333    1648 state_mem.go:75] "Updated machine memory state"
	I0210 12:23:13.696603    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.996399    1648 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
	I0210 12:23:13.696603    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.996729    1648 eviction_manager.go:189] "Eviction manager: starting control loop"
	I0210 12:23:13.696683    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.996761    1648 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
	I0210 12:23:13.696683    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.999441    1648 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
	I0210 12:23:13.696683    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: E0210 12:21:53.001480    1648 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
	I0210 12:23:13.696757    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: E0210 12:21:53.001594    1648 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"multinode-032400\" not found"
	I0210 12:23:13.696757    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: E0210 12:21:53.010100    1648 iptables.go:577] "Could not set up iptables canary" err=<
	I0210 12:23:13.696757    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0210 12:23:13.696757    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0210 12:23:13.696757    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0210 12:23:13.696831    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0210 12:23:13.696831    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.082130    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b2de8e426f22f9496390d2d8a09910a842da6580933349d6688cd4b1320ea550"
	I0210 12:23:13.696831    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.082209    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed034f1578c961d966543fd6901ac487893b2d9c55235293f852b6eba2ffef59"
	I0210 12:23:13.696907    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.082229    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26d9e119a02c5d37077ce2b8aaf0eaf39a16e310dfa75b55d4072355af0799f3"
	I0210 12:23:13.696907    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.085961    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a70f430921ec259ed18ded033aa4e0f2018d948e5ebeaaecbd04d96a1cf7a198"
	I0210 12:23:13.696907    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.092339    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d33433fbce4800c4588851f91b9c8bbf2f6cb1549a9a6e7003bd3ad9ab95e6c9"
	I0210 12:23:13.696981    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: E0210 12:21:53.095136    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-032400\" not found" node="multinode-032400"
	I0210 12:23:13.696981    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.097863    1648 kubelet_node_status.go:76] "Attempting to register node" node="multinode-032400"
	I0210 12:23:13.696981    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: E0210 12:21:53.099090    1648 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.29.129.181:8443: connect: connection refused" node="multinode-032400"
	I0210 12:23:13.697055    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.108335    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ccc0a4e7b5c734e34ed5ec3983417c1afdf297c58cd82400d7f9d24b8f82cac"
	I0210 12:23:13.697055    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.127358    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="794995bca6b5b46f3dd1b76ae2e6fa45046bd172772508f61b870824d72e297b"
	I0210 12:23:13.697055    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: E0210 12:21:53.141735    1648 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-032400?timeout=10s\": dial tcp 172.29.129.181:8443: connect: connection refused" interval="400ms"
	I0210 12:23:13.697129    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.142956    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c55184f16ccb79ec11ca696b1c88e9db9a9568bbeeccb401543d2aabab9daa4"
	I0210 12:23:13.697129    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.145714    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a8fa4178fdde9f0146fd2a294125bbe5-usr-share-ca-certificates\") pod \"kube-apiserver-multinode-032400\" (UID: \"a8fa4178fdde9f0146fd2a294125bbe5\") " pod="kube-system/kube-apiserver-multinode-032400"
	I0210 12:23:13.697203    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.145888    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0b073beca2a0e25ad8459b3107e863e7-flexvolume-dir\") pod \"kube-controller-manager-multinode-032400\" (UID: \"0b073beca2a0e25ad8459b3107e863e7\") " pod="kube-system/kube-controller-manager-multinode-032400"
	I0210 12:23:13.697276    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.145935    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b073beca2a0e25ad8459b3107e863e7-kubeconfig\") pod \"kube-controller-manager-multinode-032400\" (UID: \"0b073beca2a0e25ad8459b3107e863e7\") " pod="kube-system/kube-controller-manager-multinode-032400"
	I0210 12:23:13.697276    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.146017    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0b073beca2a0e25ad8459b3107e863e7-usr-share-ca-certificates\") pod \"kube-controller-manager-multinode-032400\" (UID: \"0b073beca2a0e25ad8459b3107e863e7\") " pod="kube-system/kube-controller-manager-multinode-032400"
	I0210 12:23:13.697349    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.146081    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/a56aca3861e4dd5038c600d32d99becd-etcd-certs\") pod \"etcd-multinode-032400\" (UID: \"a56aca3861e4dd5038c600d32d99becd\") " pod="kube-system/etcd-multinode-032400"
	I0210 12:23:13.697349    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.146213    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/a56aca3861e4dd5038c600d32d99becd-etcd-data\") pod \"etcd-multinode-032400\" (UID: \"a56aca3861e4dd5038c600d32d99becd\") " pod="kube-system/etcd-multinode-032400"
	I0210 12:23:13.697422    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.146299    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a8fa4178fdde9f0146fd2a294125bbe5-ca-certs\") pod \"kube-apiserver-multinode-032400\" (UID: \"a8fa4178fdde9f0146fd2a294125bbe5\") " pod="kube-system/kube-apiserver-multinode-032400"
	I0210 12:23:13.697422    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.146332    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a8fa4178fdde9f0146fd2a294125bbe5-k8s-certs\") pod \"kube-apiserver-multinode-032400\" (UID: \"a8fa4178fdde9f0146fd2a294125bbe5\") " pod="kube-system/kube-apiserver-multinode-032400"
	I0210 12:23:13.697495    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.146395    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e23fa9a4a53da4e595583d7b35b39311-kubeconfig\") pod \"kube-scheduler-multinode-032400\" (UID: \"e23fa9a4a53da4e595583d7b35b39311\") " pod="kube-system/kube-scheduler-multinode-032400"
	I0210 12:23:13.697495    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.146480    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0b073beca2a0e25ad8459b3107e863e7-ca-certs\") pod \"kube-controller-manager-multinode-032400\" (UID: \"0b073beca2a0e25ad8459b3107e863e7\") " pod="kube-system/kube-controller-manager-multinode-032400"
	I0210 12:23:13.697568    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.146687    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0b073beca2a0e25ad8459b3107e863e7-k8s-certs\") pod \"kube-controller-manager-multinode-032400\" (UID: \"0b073beca2a0e25ad8459b3107e863e7\") " pod="kube-system/kube-controller-manager-multinode-032400"
	I0210 12:23:13.697568    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.162937    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee16b295f58db486a506e81b42b011f8d6d50d2a52f1bea55481552cfb51c94e"
	I0210 12:23:13.697568    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: E0210 12:21:53.165529    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-032400\" not found" node="multinode-032400"
	I0210 12:23:13.697642    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: E0210 12:21:53.167432    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-032400\" not found" node="multinode-032400"
	I0210 12:23:13.697642    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: E0210 12:21:53.168502    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-032400\" not found" node="multinode-032400"
	I0210 12:23:13.697715    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.301329    1648 kubelet_node_status.go:76] "Attempting to register node" node="multinode-032400"
	I0210 12:23:13.697715    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: E0210 12:21:53.303037    1648 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.29.129.181:8443: connect: connection refused" node="multinode-032400"
	I0210 12:23:13.697789    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: E0210 12:21:53.544572    1648 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-032400?timeout=10s\": dial tcp 172.29.129.181:8443: connect: connection refused" interval="800ms"
	I0210 12:23:13.697789    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.704678    1648 kubelet_node_status.go:76] "Attempting to register node" node="multinode-032400"
	I0210 12:23:13.697789    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: E0210 12:21:53.705877    1648 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.29.129.181:8443: connect: connection refused" node="multinode-032400"
	I0210 12:23:13.697862    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: W0210 12:21:53.746812    1648 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.29.129.181:8443: connect: connection refused
	I0210 12:23:13.697862    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: E0210 12:21:53.747029    1648 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.29.129.181:8443: connect: connection refused" logger="UnhandledError"
	I0210 12:23:13.697935    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: W0210 12:21:53.867058    1648 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-032400&limit=500&resourceVersion=0": dial tcp 172.29.129.181:8443: connect: connection refused
	I0210 12:23:13.697935    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: E0210 12:21:53.867234    1648 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-032400&limit=500&resourceVersion=0\": dial tcp 172.29.129.181:8443: connect: connection refused" logger="UnhandledError"
	I0210 12:23:13.698008    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 kubelet[1648]: W0210 12:21:54.165583    1648 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.29.129.181:8443: connect: connection refused
	I0210 12:23:13.698008    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 kubelet[1648]: E0210 12:21:54.165709    1648 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.29.129.181:8443: connect: connection refused" logger="UnhandledError"
	I0210 12:23:13.698082    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 kubelet[1648]: E0210 12:21:54.346089    1648 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-032400?timeout=10s\": dial tcp 172.29.129.181:8443: connect: connection refused" interval="1.6s"
	I0210 12:23:13.698082    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 kubelet[1648]: I0210 12:21:54.507569    1648 kubelet_node_status.go:76] "Attempting to register node" node="multinode-032400"
	I0210 12:23:13.698155    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 kubelet[1648]: E0210 12:21:54.509216    1648 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.29.129.181:8443: connect: connection refused" node="multinode-032400"
	I0210 12:23:13.698155    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 kubelet[1648]: W0210 12:21:54.509373    1648 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.29.129.181:8443: connect: connection refused
	I0210 12:23:13.698155    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 kubelet[1648]: E0210 12:21:54.509471    1648 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.29.129.181:8443: connect: connection refused" logger="UnhandledError"
	I0210 12:23:13.698240    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 kubelet[1648]: E0210 12:21:54.618443    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-032400\" not found" node="multinode-032400"
	I0210 12:23:13.698314    5644 command_runner.go:130] > Feb 10 12:21:55 multinode-032400 kubelet[1648]: E0210 12:21:55.643834    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-032400\" not found" node="multinode-032400"
	I0210 12:23:13.698346    5644 command_runner.go:130] > Feb 10 12:21:55 multinode-032400 kubelet[1648]: E0210 12:21:55.653673    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-032400\" not found" node="multinode-032400"
	I0210 12:23:13.698375    5644 command_runner.go:130] > Feb 10 12:21:55 multinode-032400 kubelet[1648]: E0210 12:21:55.663228    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-032400\" not found" node="multinode-032400"
	I0210 12:23:13.698375    5644 command_runner.go:130] > Feb 10 12:21:55 multinode-032400 kubelet[1648]: E0210 12:21:55.676257    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-032400\" not found" node="multinode-032400"
	I0210 12:23:13.698375    5644 command_runner.go:130] > Feb 10 12:21:56 multinode-032400 kubelet[1648]: I0210 12:21:56.111234    1648 kubelet_node_status.go:76] "Attempting to register node" node="multinode-032400"
	I0210 12:23:13.698375    5644 command_runner.go:130] > Feb 10 12:21:56 multinode-032400 kubelet[1648]: E0210 12:21:56.686207    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-032400\" not found" node="multinode-032400"
	I0210 12:23:13.698375    5644 command_runner.go:130] > Feb 10 12:21:56 multinode-032400 kubelet[1648]: E0210 12:21:56.686620    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-032400\" not found" node="multinode-032400"
	I0210 12:23:13.698375    5644 command_runner.go:130] > Feb 10 12:21:56 multinode-032400 kubelet[1648]: E0210 12:21:56.689831    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-032400\" not found" node="multinode-032400"
	I0210 12:23:13.698375    5644 command_runner.go:130] > Feb 10 12:21:56 multinode-032400 kubelet[1648]: E0210 12:21:56.690227    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-032400\" not found" node="multinode-032400"
	I0210 12:23:13.698375    5644 command_runner.go:130] > Feb 10 12:21:57 multinode-032400 kubelet[1648]: E0210 12:21:57.703954    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-032400\" not found" node="multinode-032400"
	I0210 12:23:13.698375    5644 command_runner.go:130] > Feb 10 12:21:57 multinode-032400 kubelet[1648]: E0210 12:21:57.704934    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-032400\" not found" node="multinode-032400"
	I0210 12:23:13.698375    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: I0210 12:21:58.221288    1648 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-multinode-032400"
	I0210 12:23:13.698375    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: E0210 12:21:58.248691    1648 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-multinode-032400\" already exists" pod="kube-system/kube-scheduler-multinode-032400"
	I0210 12:23:13.698375    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: I0210 12:21:58.248734    1648 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-multinode-032400"
	I0210 12:23:13.698375    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: E0210 12:21:58.268853    1648 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"etcd-multinode-032400\" already exists" pod="kube-system/etcd-multinode-032400"
	I0210 12:23:13.698375    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: I0210 12:21:58.268905    1648 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-multinode-032400"
	I0210 12:23:13.698375    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: E0210 12:21:58.294680    1648 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-multinode-032400\" already exists" pod="kube-system/kube-apiserver-multinode-032400"
	I0210 12:23:13.698375    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: I0210 12:21:58.294713    1648 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-multinode-032400"
	I0210 12:23:13.698375    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: E0210 12:21:58.310526    1648 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-multinode-032400\" already exists" pod="kube-system/kube-controller-manager-multinode-032400"
	I0210 12:23:13.698375    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: I0210 12:21:58.310792    1648 kubelet_node_status.go:125] "Node was previously registered" node="multinode-032400"
	I0210 12:23:13.698375    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: I0210 12:21:58.310970    1648 kubelet_node_status.go:79] "Successfully registered node" node="multinode-032400"
	I0210 12:23:13.698375    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: I0210 12:21:58.311192    1648 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	I0210 12:23:13.698375    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: I0210 12:21:58.312560    1648 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	I0210 12:23:13.698375    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: I0210 12:21:58.314869    1648 setters.go:602] "Node became not ready" node="multinode-032400" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-02-10T12:21:58Z","lastTransitionTime":"2025-02-10T12:21:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"}
	I0210 12:23:13.698375    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: I0210 12:21:58.886082    1648 apiserver.go:52] "Watching apiserver"
	I0210 12:23:13.698903    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: I0210 12:21:58.891928    1648 kubelet.go:3183] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-032400" podUID="7a35472d-d7c0-4c7d-a5b1-e094370af1c2"
	I0210 12:23:13.698903    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: I0210 12:21:58.892432    1648 kubelet.go:3183] "Trying to delete pod" pod="kube-system/etcd-multinode-032400" podUID="34db146c-e09d-4959-8325-d4453dfcfd62"
	I0210 12:23:13.698948    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: E0210 12:21:58.894995    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:13.698984    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: E0210 12:21:58.896093    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:13.699023    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: I0210 12:21:58.922102    1648 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	I0210 12:23:13.699057    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: I0210 12:21:58.923504    1648 kubelet.go:3189] "Deleted mirror pod as it didn't match the static Pod" pod="kube-system/kube-apiserver-multinode-032400"
	I0210 12:23:13.699094    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: I0210 12:21:58.923547    1648 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-multinode-032400"
	I0210 12:23:13.699094    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: I0210 12:21:58.964092    1648 kubelet.go:3189] "Deleted mirror pod as it didn't match the static Pod" pod="kube-system/etcd-multinode-032400"
	I0210 12:23:13.699129    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: I0210 12:21:58.964319    1648 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-multinode-032400"
	I0210 12:23:13.699167    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: I0210 12:21:58.992108    1648 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d9460e1ac793566f90a359ec3476894" path="/var/lib/kubelet/pods/3d9460e1ac793566f90a359ec3476894/volumes"
	I0210 12:23:13.699200    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: I0210 12:21:58.994546    1648 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77dd7f51968a92a0d804d49c0a3127ad" path="/var/lib/kubelet/pods/77dd7f51968a92a0d804d49c0a3127ad/volumes"
	I0210 12:23:13.699238    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: I0210 12:21:59.015977    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/c5a7f602-d41a-4d9c-8fb3-5cf1bb41aca0-tmp\") pod \"storage-provisioner\" (UID: \"c5a7f602-d41a-4d9c-8fb3-5cf1bb41aca0\") " pod="kube-system/storage-provisioner"
	I0210 12:23:13.699271    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: I0210 12:21:59.016010    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9ad7f281-f022-4f3b-b206-39ce42713cf9-lib-modules\") pod \"kube-proxy-rrh82\" (UID: \"9ad7f281-f022-4f3b-b206-39ce42713cf9\") " pod="kube-system/kube-proxy-rrh82"
	I0210 12:23:13.699309    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: I0210 12:21:59.016032    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/09de881b-fbc4-4a8f-b8d7-c46dd3f010ad-cni-cfg\") pod \"kindnet-c2mb8\" (UID: \"09de881b-fbc4-4a8f-b8d7-c46dd3f010ad\") " pod="kube-system/kindnet-c2mb8"
	I0210 12:23:13.699350    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: I0210 12:21:59.016093    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9ad7f281-f022-4f3b-b206-39ce42713cf9-xtables-lock\") pod \"kube-proxy-rrh82\" (UID: \"9ad7f281-f022-4f3b-b206-39ce42713cf9\") " pod="kube-system/kube-proxy-rrh82"
	I0210 12:23:13.699388    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: I0210 12:21:59.016112    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/09de881b-fbc4-4a8f-b8d7-c46dd3f010ad-xtables-lock\") pod \"kindnet-c2mb8\" (UID: \"09de881b-fbc4-4a8f-b8d7-c46dd3f010ad\") " pod="kube-system/kindnet-c2mb8"
	I0210 12:23:13.699422    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: I0210 12:21:59.016275    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/09de881b-fbc4-4a8f-b8d7-c46dd3f010ad-lib-modules\") pod \"kindnet-c2mb8\" (UID: \"09de881b-fbc4-4a8f-b8d7-c46dd3f010ad\") " pod="kube-system/kindnet-c2mb8"
	I0210 12:23:13.699460    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: E0210 12:21:59.016537    1648 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0210 12:23:13.699537    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: E0210 12:21:59.016667    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e45a37bf-e7da-4129-bb7e-8be7dbe93e09-config-volume podName:e45a37bf-e7da-4129-bb7e-8be7dbe93e09 nodeName:}" failed. No retries permitted until 2025-02-10 12:21:59.516646386 +0000 UTC m=+6.798651927 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e45a37bf-e7da-4129-bb7e-8be7dbe93e09-config-volume") pod "coredns-668d6bf9bc-w8rr9" (UID: "e45a37bf-e7da-4129-bb7e-8be7dbe93e09") : object "kube-system"/"coredns" not registered
	I0210 12:23:13.699610    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: I0210 12:21:59.031609    1648 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-multinode-032400" podStartSLOduration=1.031591606 podStartE2EDuration="1.031591606s" podCreationTimestamp="2025-02-10 12:21:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-10 12:21:59.030067233 +0000 UTC m=+6.312072774" watchObservedRunningTime="2025-02-10 12:21:59.031591606 +0000 UTC m=+6.313597247"
	I0210 12:23:13.699610    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: I0210 12:21:59.032295    1648 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-multinode-032400" podStartSLOduration=1.032275839 podStartE2EDuration="1.032275839s" podCreationTimestamp="2025-02-10 12:21:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-10 12:21:59.012105568 +0000 UTC m=+6.294111109" watchObservedRunningTime="2025-02-10 12:21:59.032275839 +0000 UTC m=+6.314281380"
	I0210 12:23:13.699693    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: I0210 12:21:59.063318    1648 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	I0210 12:23:13.699693    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: E0210 12:21:59.095091    1648 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:13.699748    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: E0210 12:21:59.095402    1648 projected.go:194] Error preparing data for projected volume kube-api-access-76fn6 for pod default/busybox-58667487b6-8shfg: object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:13.699789    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: E0210 12:21:59.095525    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a3e86dc5-0523-4852-af77-3145d44eaa15-kube-api-access-76fn6 podName:a3e86dc5-0523-4852-af77-3145d44eaa15 nodeName:}" failed. No retries permitted until 2025-02-10 12:21:59.595504083 +0000 UTC m=+6.877509724 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-76fn6" (UniqueName: "kubernetes.io/projected/a3e86dc5-0523-4852-af77-3145d44eaa15-kube-api-access-76fn6") pod "busybox-58667487b6-8shfg" (UID: "a3e86dc5-0523-4852-af77-3145d44eaa15") : object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:13.699822    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: E0210 12:21:59.520926    1648 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0210 12:23:13.699822    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: E0210 12:21:59.521021    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e45a37bf-e7da-4129-bb7e-8be7dbe93e09-config-volume podName:e45a37bf-e7da-4129-bb7e-8be7dbe93e09 nodeName:}" failed. No retries permitted until 2025-02-10 12:22:00.521001667 +0000 UTC m=+7.803007208 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e45a37bf-e7da-4129-bb7e-8be7dbe93e09-config-volume") pod "coredns-668d6bf9bc-w8rr9" (UID: "e45a37bf-e7da-4129-bb7e-8be7dbe93e09") : object "kube-system"/"coredns" not registered
	I0210 12:23:13.699822    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: E0210 12:21:59.622412    1648 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:13.699822    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: E0210 12:21:59.622461    1648 projected.go:194] Error preparing data for projected volume kube-api-access-76fn6 for pod default/busybox-58667487b6-8shfg: object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:13.699822    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: E0210 12:21:59.622532    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a3e86dc5-0523-4852-af77-3145d44eaa15-kube-api-access-76fn6 podName:a3e86dc5-0523-4852-af77-3145d44eaa15 nodeName:}" failed. No retries permitted until 2025-02-10 12:22:00.622511154 +0000 UTC m=+7.904516695 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-76fn6" (UniqueName: "kubernetes.io/projected/a3e86dc5-0523-4852-af77-3145d44eaa15-kube-api-access-76fn6") pod "busybox-58667487b6-8shfg" (UID: "a3e86dc5-0523-4852-af77-3145d44eaa15") : object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:13.699822    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: I0210 12:21:59.790385    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd998e6ebeb39ea743489a0e4d48c282d6e8da289cd8341c4c5099d9836e6f73"
	I0210 12:23:13.699822    5644 command_runner.go:130] > Feb 10 12:22:00 multinode-032400 kubelet[1648]: I0210 12:22:00.168710    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e5b54589cf0f1effd7254987c6ce12359f402596b5494fa5c8f0bf296c219b89"
	I0210 12:23:13.699822    5644 command_runner.go:130] > Feb 10 12:22:00 multinode-032400 kubelet[1648]: I0210 12:22:00.246436    1648 kubelet.go:3183] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-032400" podUID="7a35472d-d7c0-4c7d-a5b1-e094370af1c2"
	I0210 12:23:13.699822    5644 command_runner.go:130] > Feb 10 12:22:00 multinode-032400 kubelet[1648]: I0210 12:22:00.246743    1648 kubelet.go:3183] "Trying to delete pod" pod="kube-system/etcd-multinode-032400" podUID="34db146c-e09d-4959-8325-d4453dfcfd62"
	I0210 12:23:13.699822    5644 command_runner.go:130] > Feb 10 12:22:00 multinode-032400 kubelet[1648]: E0210 12:22:00.528505    1648 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0210 12:23:13.699822    5644 command_runner.go:130] > Feb 10 12:22:00 multinode-032400 kubelet[1648]: E0210 12:22:00.528588    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e45a37bf-e7da-4129-bb7e-8be7dbe93e09-config-volume podName:e45a37bf-e7da-4129-bb7e-8be7dbe93e09 nodeName:}" failed. No retries permitted until 2025-02-10 12:22:02.528571773 +0000 UTC m=+9.810577314 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e45a37bf-e7da-4129-bb7e-8be7dbe93e09-config-volume") pod "coredns-668d6bf9bc-w8rr9" (UID: "e45a37bf-e7da-4129-bb7e-8be7dbe93e09") : object "kube-system"/"coredns" not registered
	I0210 12:23:13.699822    5644 command_runner.go:130] > Feb 10 12:22:00 multinode-032400 kubelet[1648]: E0210 12:22:00.629777    1648 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:13.699822    5644 command_runner.go:130] > Feb 10 12:22:00 multinode-032400 kubelet[1648]: E0210 12:22:00.629830    1648 projected.go:194] Error preparing data for projected volume kube-api-access-76fn6 for pod default/busybox-58667487b6-8shfg: object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:13.699822    5644 command_runner.go:130] > Feb 10 12:22:00 multinode-032400 kubelet[1648]: E0210 12:22:00.629883    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a3e86dc5-0523-4852-af77-3145d44eaa15-kube-api-access-76fn6 podName:a3e86dc5-0523-4852-af77-3145d44eaa15 nodeName:}" failed. No retries permitted until 2025-02-10 12:22:02.629867049 +0000 UTC m=+9.911872690 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-76fn6" (UniqueName: "kubernetes.io/projected/a3e86dc5-0523-4852-af77-3145d44eaa15-kube-api-access-76fn6") pod "busybox-58667487b6-8shfg" (UID: "a3e86dc5-0523-4852-af77-3145d44eaa15") : object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:13.699822    5644 command_runner.go:130] > Feb 10 12:22:00 multinode-032400 kubelet[1648]: E0210 12:22:00.983374    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:13.699822    5644 command_runner.go:130] > Feb 10 12:22:00 multinode-032400 kubelet[1648]: E0210 12:22:00.983940    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:13.699822    5644 command_runner.go:130] > Feb 10 12:22:02 multinode-032400 kubelet[1648]: E0210 12:22:02.548061    1648 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0210 12:23:13.700352    5644 command_runner.go:130] > Feb 10 12:22:02 multinode-032400 kubelet[1648]: E0210 12:22:02.548594    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e45a37bf-e7da-4129-bb7e-8be7dbe93e09-config-volume podName:e45a37bf-e7da-4129-bb7e-8be7dbe93e09 nodeName:}" failed. No retries permitted until 2025-02-10 12:22:06.548573918 +0000 UTC m=+13.830579559 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e45a37bf-e7da-4129-bb7e-8be7dbe93e09-config-volume") pod "coredns-668d6bf9bc-w8rr9" (UID: "e45a37bf-e7da-4129-bb7e-8be7dbe93e09") : object "kube-system"/"coredns" not registered
	I0210 12:23:13.700396    5644 command_runner.go:130] > Feb 10 12:22:02 multinode-032400 kubelet[1648]: E0210 12:22:02.648988    1648 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:13.700439    5644 command_runner.go:130] > Feb 10 12:22:02 multinode-032400 kubelet[1648]: E0210 12:22:02.649225    1648 projected.go:194] Error preparing data for projected volume kube-api-access-76fn6 for pod default/busybox-58667487b6-8shfg: object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:13.700471    5644 command_runner.go:130] > Feb 10 12:22:02 multinode-032400 kubelet[1648]: E0210 12:22:02.649292    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a3e86dc5-0523-4852-af77-3145d44eaa15-kube-api-access-76fn6 podName:a3e86dc5-0523-4852-af77-3145d44eaa15 nodeName:}" failed. No retries permitted until 2025-02-10 12:22:06.649274266 +0000 UTC m=+13.931279907 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-76fn6" (UniqueName: "kubernetes.io/projected/a3e86dc5-0523-4852-af77-3145d44eaa15-kube-api-access-76fn6") pod "busybox-58667487b6-8shfg" (UID: "a3e86dc5-0523-4852-af77-3145d44eaa15") : object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:13.700471    5644 command_runner.go:130] > Feb 10 12:22:02 multinode-032400 kubelet[1648]: E0210 12:22:02.982600    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:13.700471    5644 command_runner.go:130] > Feb 10 12:22:02 multinode-032400 kubelet[1648]: E0210 12:22:02.985279    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:13.700471    5644 command_runner.go:130] > Feb 10 12:22:03 multinode-032400 kubelet[1648]: E0210 12:22:03.006185    1648 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0210 12:23:13.700471    5644 command_runner.go:130] > Feb 10 12:22:04 multinode-032400 kubelet[1648]: E0210 12:22:04.982807    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:13.700471    5644 command_runner.go:130] > Feb 10 12:22:04 multinode-032400 kubelet[1648]: E0210 12:22:04.983881    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:13.700471    5644 command_runner.go:130] > Feb 10 12:22:06 multinode-032400 kubelet[1648]: E0210 12:22:06.583411    1648 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0210 12:23:13.700471    5644 command_runner.go:130] > Feb 10 12:22:06 multinode-032400 kubelet[1648]: E0210 12:22:06.583571    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e45a37bf-e7da-4129-bb7e-8be7dbe93e09-config-volume podName:e45a37bf-e7da-4129-bb7e-8be7dbe93e09 nodeName:}" failed. No retries permitted until 2025-02-10 12:22:14.583553968 +0000 UTC m=+21.865559509 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e45a37bf-e7da-4129-bb7e-8be7dbe93e09-config-volume") pod "coredns-668d6bf9bc-w8rr9" (UID: "e45a37bf-e7da-4129-bb7e-8be7dbe93e09") : object "kube-system"/"coredns" not registered
	I0210 12:23:13.700471    5644 command_runner.go:130] > Feb 10 12:22:06 multinode-032400 kubelet[1648]: E0210 12:22:06.684079    1648 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:13.700471    5644 command_runner.go:130] > Feb 10 12:22:06 multinode-032400 kubelet[1648]: E0210 12:22:06.684426    1648 projected.go:194] Error preparing data for projected volume kube-api-access-76fn6 for pod default/busybox-58667487b6-8shfg: object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:13.700471    5644 command_runner.go:130] > Feb 10 12:22:06 multinode-032400 kubelet[1648]: E0210 12:22:06.684521    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a3e86dc5-0523-4852-af77-3145d44eaa15-kube-api-access-76fn6 podName:a3e86dc5-0523-4852-af77-3145d44eaa15 nodeName:}" failed. No retries permitted until 2025-02-10 12:22:14.684501328 +0000 UTC m=+21.966506969 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-76fn6" (UniqueName: "kubernetes.io/projected/a3e86dc5-0523-4852-af77-3145d44eaa15-kube-api-access-76fn6") pod "busybox-58667487b6-8shfg" (UID: "a3e86dc5-0523-4852-af77-3145d44eaa15") : object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:13.700471    5644 command_runner.go:130] > Feb 10 12:22:06 multinode-032400 kubelet[1648]: E0210 12:22:06.982543    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:13.700471    5644 command_runner.go:130] > Feb 10 12:22:06 multinode-032400 kubelet[1648]: E0210 12:22:06.982901    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:13.700471    5644 command_runner.go:130] > Feb 10 12:22:08 multinode-032400 kubelet[1648]: E0210 12:22:08.007915    1648 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0210 12:23:13.700471    5644 command_runner.go:130] > Feb 10 12:22:08 multinode-032400 kubelet[1648]: E0210 12:22:08.983481    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:13.700996    5644 command_runner.go:130] > Feb 10 12:22:08 multinode-032400 kubelet[1648]: E0210 12:22:08.987585    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:13.701035    5644 command_runner.go:130] > Feb 10 12:22:10 multinode-032400 kubelet[1648]: E0210 12:22:10.981696    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:13.701071    5644 command_runner.go:130] > Feb 10 12:22:10 multinode-032400 kubelet[1648]: E0210 12:22:10.982314    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:13.701071    5644 command_runner.go:130] > Feb 10 12:22:12 multinode-032400 kubelet[1648]: E0210 12:22:12.982627    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:13.701071    5644 command_runner.go:130] > Feb 10 12:22:12 multinode-032400 kubelet[1648]: E0210 12:22:12.983351    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:13.701071    5644 command_runner.go:130] > Feb 10 12:22:13 multinode-032400 kubelet[1648]: E0210 12:22:13.008828    1648 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0210 12:23:13.701071    5644 command_runner.go:130] > Feb 10 12:22:14 multinode-032400 kubelet[1648]: E0210 12:22:14.650628    1648 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0210 12:23:13.701071    5644 command_runner.go:130] > Feb 10 12:22:14 multinode-032400 kubelet[1648]: E0210 12:22:14.650742    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e45a37bf-e7da-4129-bb7e-8be7dbe93e09-config-volume podName:e45a37bf-e7da-4129-bb7e-8be7dbe93e09 nodeName:}" failed. No retries permitted until 2025-02-10 12:22:30.650723092 +0000 UTC m=+37.932728733 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e45a37bf-e7da-4129-bb7e-8be7dbe93e09-config-volume") pod "coredns-668d6bf9bc-w8rr9" (UID: "e45a37bf-e7da-4129-bb7e-8be7dbe93e09") : object "kube-system"/"coredns" not registered
	I0210 12:23:13.701071    5644 command_runner.go:130] > Feb 10 12:22:14 multinode-032400 kubelet[1648]: E0210 12:22:14.751367    1648 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:13.701071    5644 command_runner.go:130] > Feb 10 12:22:14 multinode-032400 kubelet[1648]: E0210 12:22:14.751417    1648 projected.go:194] Error preparing data for projected volume kube-api-access-76fn6 for pod default/busybox-58667487b6-8shfg: object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:13.701071    5644 command_runner.go:130] > Feb 10 12:22:14 multinode-032400 kubelet[1648]: E0210 12:22:14.751468    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a3e86dc5-0523-4852-af77-3145d44eaa15-kube-api-access-76fn6 podName:a3e86dc5-0523-4852-af77-3145d44eaa15 nodeName:}" failed. No retries permitted until 2025-02-10 12:22:30.751452188 +0000 UTC m=+38.033457729 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-76fn6" (UniqueName: "kubernetes.io/projected/a3e86dc5-0523-4852-af77-3145d44eaa15-kube-api-access-76fn6") pod "busybox-58667487b6-8shfg" (UID: "a3e86dc5-0523-4852-af77-3145d44eaa15") : object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:13.701071    5644 command_runner.go:130] > Feb 10 12:22:14 multinode-032400 kubelet[1648]: E0210 12:22:14.983588    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:13.701071    5644 command_runner.go:130] > Feb 10 12:22:14 multinode-032400 kubelet[1648]: E0210 12:22:14.984681    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:13.701071    5644 command_runner.go:130] > Feb 10 12:22:16 multinode-032400 kubelet[1648]: E0210 12:22:16.982654    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:13.701071    5644 command_runner.go:130] > Feb 10 12:22:16 multinode-032400 kubelet[1648]: E0210 12:22:16.983601    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:13.701071    5644 command_runner.go:130] > Feb 10 12:22:18 multinode-032400 kubelet[1648]: E0210 12:22:18.010464    1648 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0210 12:23:13.701071    5644 command_runner.go:130] > Feb 10 12:22:18 multinode-032400 kubelet[1648]: E0210 12:22:18.983251    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:13.701670    5644 command_runner.go:130] > Feb 10 12:22:18 multinode-032400 kubelet[1648]: E0210 12:22:18.983452    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:13.701670    5644 command_runner.go:130] > Feb 10 12:22:20 multinode-032400 kubelet[1648]: E0210 12:22:20.982442    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:13.701670    5644 command_runner.go:130] > Feb 10 12:22:20 multinode-032400 kubelet[1648]: E0210 12:22:20.982861    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:13.701670    5644 command_runner.go:130] > Feb 10 12:22:22 multinode-032400 kubelet[1648]: E0210 12:22:22.981966    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:13.701670    5644 command_runner.go:130] > Feb 10 12:22:22 multinode-032400 kubelet[1648]: E0210 12:22:22.982555    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:13.701670    5644 command_runner.go:130] > Feb 10 12:22:23 multinode-032400 kubelet[1648]: E0210 12:22:23.011880    1648 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0210 12:23:13.701670    5644 command_runner.go:130] > Feb 10 12:22:24 multinode-032400 kubelet[1648]: E0210 12:22:24.982707    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:13.701670    5644 command_runner.go:130] > Feb 10 12:22:24 multinode-032400 kubelet[1648]: E0210 12:22:24.983675    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:13.701670    5644 command_runner.go:130] > Feb 10 12:22:26 multinode-032400 kubelet[1648]: E0210 12:22:26.983236    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:13.701670    5644 command_runner.go:130] > Feb 10 12:22:26 multinode-032400 kubelet[1648]: E0210 12:22:26.984691    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:13.701670    5644 command_runner.go:130] > Feb 10 12:22:28 multinode-032400 kubelet[1648]: E0210 12:22:28.013741    1648 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0210 12:23:13.701670    5644 command_runner.go:130] > Feb 10 12:22:28 multinode-032400 kubelet[1648]: E0210 12:22:28.989948    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:13.701670    5644 command_runner.go:130] > Feb 10 12:22:28 multinode-032400 kubelet[1648]: E0210 12:22:28.990610    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:13.701670    5644 command_runner.go:130] > Feb 10 12:22:30 multinode-032400 kubelet[1648]: E0210 12:22:30.698791    1648 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0210 12:23:13.701670    5644 command_runner.go:130] > Feb 10 12:22:30 multinode-032400 kubelet[1648]: E0210 12:22:30.698861    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e45a37bf-e7da-4129-bb7e-8be7dbe93e09-config-volume podName:e45a37bf-e7da-4129-bb7e-8be7dbe93e09 nodeName:}" failed. No retries permitted until 2025-02-10 12:23:02.698844474 +0000 UTC m=+69.980850115 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e45a37bf-e7da-4129-bb7e-8be7dbe93e09-config-volume") pod "coredns-668d6bf9bc-w8rr9" (UID: "e45a37bf-e7da-4129-bb7e-8be7dbe93e09") : object "kube-system"/"coredns" not registered
	I0210 12:23:13.702194    5644 command_runner.go:130] > Feb 10 12:22:30 multinode-032400 kubelet[1648]: E0210 12:22:30.799260    1648 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:13.702194    5644 command_runner.go:130] > Feb 10 12:22:30 multinode-032400 kubelet[1648]: E0210 12:22:30.799302    1648 projected.go:194] Error preparing data for projected volume kube-api-access-76fn6 for pod default/busybox-58667487b6-8shfg: object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:13.702225    5644 command_runner.go:130] > Feb 10 12:22:30 multinode-032400 kubelet[1648]: E0210 12:22:30.799372    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a3e86dc5-0523-4852-af77-3145d44eaa15-kube-api-access-76fn6 podName:a3e86dc5-0523-4852-af77-3145d44eaa15 nodeName:}" failed. No retries permitted until 2025-02-10 12:23:02.799354561 +0000 UTC m=+70.081360102 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-76fn6" (UniqueName: "kubernetes.io/projected/a3e86dc5-0523-4852-af77-3145d44eaa15-kube-api-access-76fn6") pod "busybox-58667487b6-8shfg" (UID: "a3e86dc5-0523-4852-af77-3145d44eaa15") : object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:13.702225    5644 command_runner.go:130] > Feb 10 12:22:30 multinode-032400 kubelet[1648]: E0210 12:22:30.983005    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:13.702225    5644 command_runner.go:130] > Feb 10 12:22:30 multinode-032400 kubelet[1648]: E0210 12:22:30.983695    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:13.702225    5644 command_runner.go:130] > Feb 10 12:22:31 multinode-032400 kubelet[1648]: I0210 12:22:31.703771    1648 scope.go:117] "RemoveContainer" containerID="182c8395f5e1754689bcf73e94e561717c684af55894a2bd4cbd9d5e8d3dff12"
	I0210 12:23:13.702225    5644 command_runner.go:130] > Feb 10 12:22:31 multinode-032400 kubelet[1648]: I0210 12:22:31.704207    1648 scope.go:117] "RemoveContainer" containerID="e57ea4c7f300b864e801bda292b196e3a270d6bd3eb9cfac6bf5f66a858f9f7c"
	I0210 12:23:13.702225    5644 command_runner.go:130] > Feb 10 12:22:31 multinode-032400 kubelet[1648]: E0210 12:22:31.704351    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(c5a7f602-d41a-4d9c-8fb3-5cf1bb41aca0)\"" pod="kube-system/storage-provisioner" podUID="c5a7f602-d41a-4d9c-8fb3-5cf1bb41aca0"
	I0210 12:23:13.702225    5644 command_runner.go:130] > Feb 10 12:22:32 multinode-032400 kubelet[1648]: E0210 12:22:32.981673    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:13.702225    5644 command_runner.go:130] > Feb 10 12:22:32 multinode-032400 kubelet[1648]: E0210 12:22:32.982991    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:13.702225    5644 command_runner.go:130] > Feb 10 12:22:33 multinode-032400 kubelet[1648]: E0210 12:22:33.015385    1648 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0210 12:23:13.702225    5644 command_runner.go:130] > Feb 10 12:22:34 multinode-032400 kubelet[1648]: E0210 12:22:34.989854    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:13.702225    5644 command_runner.go:130] > Feb 10 12:22:34 multinode-032400 kubelet[1648]: E0210 12:22:34.989994    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:13.702225    5644 command_runner.go:130] > Feb 10 12:22:36 multinode-032400 kubelet[1648]: E0210 12:22:36.982057    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:13.702225    5644 command_runner.go:130] > Feb 10 12:22:36 multinode-032400 kubelet[1648]: E0210 12:22:36.982423    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:13.702225    5644 command_runner.go:130] > Feb 10 12:22:38 multinode-032400 kubelet[1648]: E0210 12:22:38.016614    1648 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0210 12:23:13.702225    5644 command_runner.go:130] > Feb 10 12:22:38 multinode-032400 kubelet[1648]: E0210 12:22:38.982466    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:13.702225    5644 command_runner.go:130] > Feb 10 12:22:38 multinode-032400 kubelet[1648]: E0210 12:22:38.982828    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:13.702789    5644 command_runner.go:130] > Feb 10 12:22:40 multinode-032400 kubelet[1648]: E0210 12:22:40.981790    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:13.702806    5644 command_runner.go:130] > Feb 10 12:22:40 multinode-032400 kubelet[1648]: E0210 12:22:40.986032    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:13.702806    5644 command_runner.go:130] > Feb 10 12:22:42 multinode-032400 kubelet[1648]: E0210 12:22:42.983120    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:13.702806    5644 command_runner.go:130] > Feb 10 12:22:42 multinode-032400 kubelet[1648]: E0210 12:22:42.983608    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:13.702806    5644 command_runner.go:130] > Feb 10 12:22:43 multinode-032400 kubelet[1648]: E0210 12:22:43.017646    1648 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0210 12:23:13.702806    5644 command_runner.go:130] > Feb 10 12:22:43 multinode-032400 kubelet[1648]: I0210 12:22:43.982665    1648 scope.go:117] "RemoveContainer" containerID="e57ea4c7f300b864e801bda292b196e3a270d6bd3eb9cfac6bf5f66a858f9f7c"
	I0210 12:23:13.702806    5644 command_runner.go:130] > Feb 10 12:22:44 multinode-032400 kubelet[1648]: E0210 12:22:44.981714    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:13.702806    5644 command_runner.go:130] > Feb 10 12:22:44 multinode-032400 kubelet[1648]: E0210 12:22:44.982071    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:13.702806    5644 command_runner.go:130] > Feb 10 12:22:46 multinode-032400 kubelet[1648]: E0210 12:22:46.982545    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:13.702806    5644 command_runner.go:130] > Feb 10 12:22:46 multinode-032400 kubelet[1648]: E0210 12:22:46.982810    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:13.702806    5644 command_runner.go:130] > Feb 10 12:22:52 multinode-032400 kubelet[1648]: I0210 12:22:52.997714    1648 scope.go:117] "RemoveContainer" containerID="3ae31c3c37c9f6044b62cd6f5c446e15340e314275faee0b1de4d66baa716012"
	I0210 12:23:13.702806    5644 command_runner.go:130] > Feb 10 12:22:53 multinode-032400 kubelet[1648]: E0210 12:22:53.021472    1648 iptables.go:577] "Could not set up iptables canary" err=<
	I0210 12:23:13.702806    5644 command_runner.go:130] > Feb 10 12:22:53 multinode-032400 kubelet[1648]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0210 12:23:13.702806    5644 command_runner.go:130] > Feb 10 12:22:53 multinode-032400 kubelet[1648]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0210 12:23:13.702806    5644 command_runner.go:130] > Feb 10 12:22:53 multinode-032400 kubelet[1648]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0210 12:23:13.702806    5644 command_runner.go:130] > Feb 10 12:22:53 multinode-032400 kubelet[1648]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0210 12:23:13.702806    5644 command_runner.go:130] > Feb 10 12:22:53 multinode-032400 kubelet[1648]: I0210 12:22:53.042255    1648 scope.go:117] "RemoveContainer" containerID="9f1c4e9b3353b37f1f4c40563f1e312a49acd43d398e81cab61930d654fbb0d9"
	I0210 12:23:13.702806    5644 command_runner.go:130] > Feb 10 12:23:03 multinode-032400 kubelet[1648]: I0210 12:23:03.312056    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e58006549b603113870cd02660dd94559d10ecb0913a1bdd2c81910c3d60a706"
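The kubelet entries above describe recoverable startup conditions rather than hard failures: volume mounts for coredns-668d6bf9bc-w8rr9 and busybox-58667487b6-8shfg are retried with an exponential backoff (durationBeforeRetry doubling 4s -> 8s -> 16s -> 32s) until the "coredns" and "kube-root-ca.crt" objects register, pod sync is skipped while the CNI config is uninitialized, and the iptables canary fails only because the guest kernel has no ip6tables nat table. A sketch of how to watch the same signals interactively, assuming the profile name multinode-032400 taken from the log (the journalctl filter is illustrative, not part of the captured output):

    # tail kubelet logs on the node, keeping only volume-mount and network-readiness errors
    minikube -p multinode-032400 ssh -- "sudo journalctl -u kubelet --no-pager -n 200 | grep -E 'MountVolume.SetUp|NetworkReady'"
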
	I0210 12:23:13.751690    5644 logs.go:123] Gathering logs for dmesg ...
	I0210 12:23:13.752317    5644 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 12:23:13.776812    5644 command_runner.go:130] > [Feb10 12:20] You have booted with nomodeset. This means your GPU drivers are DISABLED
	I0210 12:23:13.777045    5644 command_runner.go:130] > [  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	I0210 12:23:13.777045    5644 command_runner.go:130] > [  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	I0210 12:23:13.777045    5644 command_runner.go:130] > [  +0.108726] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	I0210 12:23:13.777045    5644 command_runner.go:130] > [  +0.024202] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	I0210 12:23:13.777168    5644 command_runner.go:130] > [  +0.000005] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	I0210 12:23:13.777209    5644 command_runner.go:130] > [  +0.000001] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	I0210 12:23:13.777209    5644 command_runner.go:130] > [  +0.062099] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	I0210 12:23:13.777245    5644 command_runner.go:130] > [  +0.027667] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
	I0210 12:23:13.777245    5644 command_runner.go:130] >               * this clock source is slow. Consider trying other clock sources
	I0210 12:23:13.777245    5644 command_runner.go:130] > [  +6.593821] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I0210 12:23:13.777245    5644 command_runner.go:130] > [  +1.312003] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	I0210 12:23:13.777245    5644 command_runner.go:130] > [  +1.333341] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	I0210 12:23:13.777245    5644 command_runner.go:130] > [  +7.447705] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	I0210 12:23:13.777245    5644 command_runner.go:130] > [  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	I0210 12:23:13.777245    5644 command_runner.go:130] > [  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	I0210 12:23:13.777245    5644 command_runner.go:130] > [Feb10 12:21] systemd-fstab-generator[632]: Ignoring "noauto" option for root device
	I0210 12:23:13.777245    5644 command_runner.go:130] > [  +0.179509] systemd-fstab-generator[644]: Ignoring "noauto" option for root device
	I0210 12:23:13.777245    5644 command_runner.go:130] > [ +24.780658] systemd-fstab-generator[1027]: Ignoring "noauto" option for root device
	I0210 12:23:13.777245    5644 command_runner.go:130] > [  +0.095136] kauditd_printk_skb: 75 callbacks suppressed
	I0210 12:23:13.777245    5644 command_runner.go:130] > [  +0.491986] systemd-fstab-generator[1067]: Ignoring "noauto" option for root device
	I0210 12:23:13.777245    5644 command_runner.go:130] > [  +0.201276] systemd-fstab-generator[1079]: Ignoring "noauto" option for root device
	I0210 12:23:13.777245    5644 command_runner.go:130] > [  +0.245181] systemd-fstab-generator[1093]: Ignoring "noauto" option for root device
	I0210 12:23:13.777245    5644 command_runner.go:130] > [  +2.938213] systemd-fstab-generator[1334]: Ignoring "noauto" option for root device
	I0210 12:23:13.777245    5644 command_runner.go:130] > [  +0.187151] systemd-fstab-generator[1346]: Ignoring "noauto" option for root device
	I0210 12:23:13.777245    5644 command_runner.go:130] > [  +0.177538] systemd-fstab-generator[1358]: Ignoring "noauto" option for root device
	I0210 12:23:13.777245    5644 command_runner.go:130] > [  +0.257474] systemd-fstab-generator[1373]: Ignoring "noauto" option for root device
	I0210 12:23:13.777245    5644 command_runner.go:130] > [  +0.873551] systemd-fstab-generator[1498]: Ignoring "noauto" option for root device
	I0210 12:23:13.777245    5644 command_runner.go:130] > [  +0.096158] kauditd_printk_skb: 206 callbacks suppressed
	I0210 12:23:13.777245    5644 command_runner.go:130] > [  +3.180590] systemd-fstab-generator[1641]: Ignoring "noauto" option for root device
	I0210 12:23:13.777245    5644 command_runner.go:130] > [  +1.822530] kauditd_printk_skb: 69 callbacks suppressed
	I0210 12:23:13.777245    5644 command_runner.go:130] > [  +5.251804] kauditd_printk_skb: 5 callbacks suppressed
	I0210 12:23:13.777245    5644 command_runner.go:130] > [Feb10 12:22] systemd-fstab-generator[2523]: Ignoring "noauto" option for root device
	I0210 12:23:13.777245    5644 command_runner.go:130] > [ +27.335254] kauditd_printk_skb: 72 callbacks suppressed
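The dmesg pass shells out to the one-liner shown at the top of this chunk; its flags keep the capture compact: -H prints human-readable timestamps (hence the [Feb10 12:20] stamps and [  +0.000001] deltas), -P suppresses the pager that -H would otherwise invoke, -L=never disables color, and --level keeps only warn and worse. An equivalent manual invocation, assuming the same profile:

    # the same filter minikube applied when it collected the dmesg section above
    minikube -p multinode-032400 ssh -- "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
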
	I0210 12:23:13.779709    5644 logs.go:123] Gathering logs for kube-apiserver [f368bd876774] ...
	I0210 12:23:13.779709    5644 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f368bd876774"
	I0210 12:23:13.813408    5644 command_runner.go:130] ! W0210 12:21:55.142359       1 registry.go:256] calling componentGlobalsRegistry.AddFlags more than once, the registry will be set by the latest flags
	I0210 12:23:13.813481    5644 command_runner.go:130] ! I0210 12:21:55.145301       1 options.go:238] external host was not specified, using 172.29.129.181
	I0210 12:23:13.813481    5644 command_runner.go:130] ! I0210 12:21:55.152669       1 server.go:143] Version: v1.32.1
	I0210 12:23:13.813517    5644 command_runner.go:130] ! I0210 12:21:55.155205       1 server.go:145] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0210 12:23:13.813517    5644 command_runner.go:130] ! I0210 12:21:56.105409       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0210 12:23:13.813517    5644 command_runner.go:130] ! I0210 12:21:56.132590       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0210 12:23:13.813564    5644 command_runner.go:130] ! I0210 12:21:56.143671       1 plugins.go:157] Loaded 13 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I0210 12:23:13.813606    5644 command_runner.go:130] ! I0210 12:21:56.143842       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0210 12:23:13.813606    5644 command_runner.go:130] ! I0210 12:21:56.149478       1 instance.go:233] Using reconciler: lease
	I0210 12:23:13.813606    5644 command_runner.go:130] ! I0210 12:21:56.242968       1 handler.go:286] Adding GroupVersion apiextensions.k8s.io v1 to ResourceManager
	I0210 12:23:13.813679    5644 command_runner.go:130] ! W0210 12:21:56.243233       1 genericapiserver.go:767] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
	I0210 12:23:13.813679    5644 command_runner.go:130] ! I0210 12:21:56.576352       1 handler.go:286] Adding GroupVersion  v1 to ResourceManager
	I0210 12:23:13.813679    5644 command_runner.go:130] ! I0210 12:21:56.576865       1 apis.go:106] API group "internal.apiserver.k8s.io" is not enabled, skipping.
	I0210 12:23:13.813723    5644 command_runner.go:130] ! I0210 12:21:56.980973       1 apis.go:106] API group "storagemigration.k8s.io" is not enabled, skipping.
	I0210 12:23:13.813723    5644 command_runner.go:130] ! I0210 12:21:57.288861       1 apis.go:106] API group "resource.k8s.io" is not enabled, skipping.
	I0210 12:23:13.813723    5644 command_runner.go:130] ! I0210 12:21:57.344145       1 handler.go:286] Adding GroupVersion authentication.k8s.io v1 to ResourceManager
	I0210 12:23:13.813723    5644 command_runner.go:130] ! W0210 12:21:57.344213       1 genericapiserver.go:767] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
	I0210 12:23:13.813787    5644 command_runner.go:130] ! W0210 12:21:57.344222       1 genericapiserver.go:767] Skipping API authentication.k8s.io/v1alpha1 because it has no resources.
	I0210 12:23:13.813787    5644 command_runner.go:130] ! I0210 12:21:57.345004       1 handler.go:286] Adding GroupVersion authorization.k8s.io v1 to ResourceManager
	I0210 12:23:13.813787    5644 command_runner.go:130] ! W0210 12:21:57.345107       1 genericapiserver.go:767] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
	I0210 12:23:13.813833    5644 command_runner.go:130] ! I0210 12:21:57.346842       1 handler.go:286] Adding GroupVersion autoscaling v2 to ResourceManager
	I0210 12:23:13.813833    5644 command_runner.go:130] ! I0210 12:21:57.348477       1 handler.go:286] Adding GroupVersion autoscaling v1 to ResourceManager
	I0210 12:23:13.813833    5644 command_runner.go:130] ! W0210 12:21:57.349989       1 genericapiserver.go:767] Skipping API autoscaling/v2beta1 because it has no resources.
	I0210 12:23:13.813833    5644 command_runner.go:130] ! W0210 12:21:57.349999       1 genericapiserver.go:767] Skipping API autoscaling/v2beta2 because it has no resources.
	I0210 12:23:13.813833    5644 command_runner.go:130] ! I0210 12:21:57.351719       1 handler.go:286] Adding GroupVersion batch v1 to ResourceManager
	I0210 12:23:13.813901    5644 command_runner.go:130] ! W0210 12:21:57.351750       1 genericapiserver.go:767] Skipping API batch/v1beta1 because it has no resources.
	I0210 12:23:13.813901    5644 command_runner.go:130] ! I0210 12:21:57.352799       1 handler.go:286] Adding GroupVersion certificates.k8s.io v1 to ResourceManager
	I0210 12:23:13.813946    5644 command_runner.go:130] ! W0210 12:21:57.352837       1 genericapiserver.go:767] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
	I0210 12:23:13.813946    5644 command_runner.go:130] ! W0210 12:21:57.352843       1 genericapiserver.go:767] Skipping API certificates.k8s.io/v1alpha1 because it has no resources.
	I0210 12:23:13.813946    5644 command_runner.go:130] ! I0210 12:21:57.353578       1 handler.go:286] Adding GroupVersion coordination.k8s.io v1 to ResourceManager
	I0210 12:23:13.814009    5644 command_runner.go:130] ! W0210 12:21:57.353613       1 genericapiserver.go:767] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
	I0210 12:23:13.814009    5644 command_runner.go:130] ! W0210 12:21:57.353620       1 genericapiserver.go:767] Skipping API coordination.k8s.io/v1alpha2 because it has no resources.
	I0210 12:23:13.814054    5644 command_runner.go:130] ! I0210 12:21:57.354314       1 handler.go:286] Adding GroupVersion discovery.k8s.io v1 to ResourceManager
	I0210 12:23:13.814054    5644 command_runner.go:130] ! W0210 12:21:57.354346       1 genericapiserver.go:767] Skipping API discovery.k8s.io/v1beta1 because it has no resources.
	I0210 12:23:13.814054    5644 command_runner.go:130] ! I0210 12:21:57.356000       1 handler.go:286] Adding GroupVersion networking.k8s.io v1 to ResourceManager
	I0210 12:23:13.814109    5644 command_runner.go:130] ! W0210 12:21:57.356105       1 genericapiserver.go:767] Skipping API networking.k8s.io/v1beta1 because it has no resources.
	I0210 12:23:13.814109    5644 command_runner.go:130] ! W0210 12:21:57.356115       1 genericapiserver.go:767] Skipping API networking.k8s.io/v1alpha1 because it has no resources.
	I0210 12:23:13.814144    5644 command_runner.go:130] ! I0210 12:21:57.356604       1 handler.go:286] Adding GroupVersion node.k8s.io v1 to ResourceManager
	I0210 12:23:13.814144    5644 command_runner.go:130] ! W0210 12:21:57.356637       1 genericapiserver.go:767] Skipping API node.k8s.io/v1beta1 because it has no resources.
	I0210 12:23:13.814179    5644 command_runner.go:130] ! W0210 12:21:57.356644       1 genericapiserver.go:767] Skipping API node.k8s.io/v1alpha1 because it has no resources.
	I0210 12:23:13.814179    5644 command_runner.go:130] ! I0210 12:21:57.357607       1 handler.go:286] Adding GroupVersion policy v1 to ResourceManager
	I0210 12:23:13.814232    5644 command_runner.go:130] ! W0210 12:21:57.357643       1 genericapiserver.go:767] Skipping API policy/v1beta1 because it has no resources.
	I0210 12:23:13.814232    5644 command_runner.go:130] ! I0210 12:21:57.359912       1 handler.go:286] Adding GroupVersion rbac.authorization.k8s.io v1 to ResourceManager
	I0210 12:23:13.814263    5644 command_runner.go:130] ! W0210 12:21:57.359944       1 genericapiserver.go:767] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
	I0210 12:23:13.814263    5644 command_runner.go:130] ! W0210 12:21:57.359952       1 genericapiserver.go:767] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
	I0210 12:23:13.814263    5644 command_runner.go:130] ! I0210 12:21:57.360554       1 handler.go:286] Adding GroupVersion scheduling.k8s.io v1 to ResourceManager
	I0210 12:23:13.814263    5644 command_runner.go:130] ! W0210 12:21:57.360628       1 genericapiserver.go:767] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
	I0210 12:23:13.814324    5644 command_runner.go:130] ! W0210 12:21:57.360635       1 genericapiserver.go:767] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
	I0210 12:23:13.814324    5644 command_runner.go:130] ! I0210 12:21:57.363612       1 handler.go:286] Adding GroupVersion storage.k8s.io v1 to ResourceManager
	I0210 12:23:13.814324    5644 command_runner.go:130] ! W0210 12:21:57.363646       1 genericapiserver.go:767] Skipping API storage.k8s.io/v1beta1 because it has no resources.
	I0210 12:23:13.814369    5644 command_runner.go:130] ! W0210 12:21:57.363653       1 genericapiserver.go:767] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
	I0210 12:23:13.814369    5644 command_runner.go:130] ! I0210 12:21:57.365567       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1 to ResourceManager
	I0210 12:23:13.814369    5644 command_runner.go:130] ! W0210 12:21:57.365626       1 genericapiserver.go:767] Skipping API flowcontrol.apiserver.k8s.io/v1beta3 because it has no resources.
	I0210 12:23:13.814431    5644 command_runner.go:130] ! W0210 12:21:57.365637       1 genericapiserver.go:767] Skipping API flowcontrol.apiserver.k8s.io/v1beta2 because it has no resources.
	I0210 12:23:13.814431    5644 command_runner.go:130] ! W0210 12:21:57.365642       1 genericapiserver.go:767] Skipping API flowcontrol.apiserver.k8s.io/v1beta1 because it has no resources.
	I0210 12:23:13.814475    5644 command_runner.go:130] ! I0210 12:21:57.371693       1 handler.go:286] Adding GroupVersion apps v1 to ResourceManager
	I0210 12:23:13.814475    5644 command_runner.go:130] ! W0210 12:21:57.371726       1 genericapiserver.go:767] Skipping API apps/v1beta2 because it has no resources.
	I0210 12:23:13.814475    5644 command_runner.go:130] ! W0210 12:21:57.371732       1 genericapiserver.go:767] Skipping API apps/v1beta1 because it has no resources.
	I0210 12:23:13.814475    5644 command_runner.go:130] ! I0210 12:21:57.374238       1 handler.go:286] Adding GroupVersion admissionregistration.k8s.io v1 to ResourceManager
	I0210 12:23:13.814475    5644 command_runner.go:130] ! W0210 12:21:57.374275       1 genericapiserver.go:767] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
	I0210 12:23:13.814542    5644 command_runner.go:130] ! W0210 12:21:57.374303       1 genericapiserver.go:767] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
	I0210 12:23:13.814542    5644 command_runner.go:130] ! I0210 12:21:57.375143       1 handler.go:286] Adding GroupVersion events.k8s.io v1 to ResourceManager
	I0210 12:23:13.814542    5644 command_runner.go:130] ! W0210 12:21:57.375210       1 genericapiserver.go:767] Skipping API events.k8s.io/v1beta1 because it has no resources.
	I0210 12:23:13.814591    5644 command_runner.go:130] ! I0210 12:21:57.389235       1 handler.go:286] Adding GroupVersion apiregistration.k8s.io v1 to ResourceManager
	I0210 12:23:13.814591    5644 command_runner.go:130] ! W0210 12:21:57.389296       1 genericapiserver.go:767] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
	I0210 12:23:13.814634    5644 command_runner.go:130] ! I0210 12:21:58.039635       1 secure_serving.go:213] Serving securely on [::]:8443
	I0210 12:23:13.814634    5644 command_runner.go:130] ! I0210 12:21:58.039773       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0210 12:23:13.814670    5644 command_runner.go:130] ! I0210 12:21:58.040121       1 dynamic_serving_content.go:135] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0210 12:23:13.814670    5644 command_runner.go:130] ! I0210 12:21:58.040710       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0210 12:23:13.814670    5644 command_runner.go:130] ! I0210 12:21:58.048362       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0210 12:23:13.814739    5644 command_runner.go:130] ! I0210 12:21:58.048918       1 customresource_discovery_controller.go:292] Starting DiscoveryController
	I0210 12:23:13.814739    5644 command_runner.go:130] ! I0210 12:21:58.049825       1 cluster_authentication_trust_controller.go:462] Starting cluster_authentication_trust_controller controller
	I0210 12:23:13.814782    5644 command_runner.go:130] ! I0210 12:21:58.049971       1 shared_informer.go:313] Waiting for caches to sync for cluster_authentication_trust_controller
	I0210 12:23:13.814782    5644 command_runner.go:130] ! I0210 12:21:58.052014       1 local_available_controller.go:156] Starting LocalAvailability controller
	I0210 12:23:13.814782    5644 command_runner.go:130] ! I0210 12:21:58.052237       1 cache.go:32] Waiting for caches to sync for LocalAvailability controller
	I0210 12:23:13.814782    5644 command_runner.go:130] ! I0210 12:21:58.052355       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0210 12:23:13.814782    5644 command_runner.go:130] ! I0210 12:21:58.052595       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0210 12:23:13.814849    5644 command_runner.go:130] ! I0210 12:21:58.052911       1 controller.go:78] Starting OpenAPI AggregationController
	I0210 12:23:13.814849    5644 command_runner.go:130] ! I0210 12:21:58.053131       1 controller.go:119] Starting legacy_token_tracking_controller
	I0210 12:23:13.814849    5644 command_runner.go:130] ! I0210 12:21:58.053221       1 shared_informer.go:313] Waiting for caches to sync for configmaps
	I0210 12:23:13.814895    5644 command_runner.go:130] ! I0210 12:21:58.053335       1 apf_controller.go:377] Starting API Priority and Fairness config controller
	I0210 12:23:13.814895    5644 command_runner.go:130] ! I0210 12:21:58.053483       1 remote_available_controller.go:411] Starting RemoteAvailability controller
	I0210 12:23:13.814895    5644 command_runner.go:130] ! I0210 12:21:58.053515       1 cache.go:32] Waiting for caches to sync for RemoteAvailability controller
	I0210 12:23:13.814895    5644 command_runner.go:130] ! I0210 12:21:58.053696       1 dynamic_serving_content.go:135] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0210 12:23:13.814959    5644 command_runner.go:130] ! I0210 12:21:58.054087       1 system_namespaces_controller.go:66] Starting system namespaces controller
	I0210 12:23:13.815003    5644 command_runner.go:130] ! I0210 12:21:58.054528       1 apiservice_controller.go:100] Starting APIServiceRegistrationController
	I0210 12:23:13.815003    5644 command_runner.go:130] ! I0210 12:21:58.054570       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0210 12:23:13.815003    5644 command_runner.go:130] ! I0210 12:21:58.054742       1 aggregator.go:169] waiting for initial CRD sync...
	I0210 12:23:13.815003    5644 command_runner.go:130] ! I0210 12:21:58.055217       1 controller.go:142] Starting OpenAPI controller
	I0210 12:23:13.815003    5644 command_runner.go:130] ! I0210 12:21:58.055546       1 controller.go:90] Starting OpenAPI V3 controller
	I0210 12:23:13.815072    5644 command_runner.go:130] ! I0210 12:21:58.055757       1 naming_controller.go:294] Starting NamingConditionController
	I0210 12:23:13.815072    5644 command_runner.go:130] ! I0210 12:21:58.056074       1 establishing_controller.go:81] Starting EstablishingController
	I0210 12:23:13.815117    5644 command_runner.go:130] ! I0210 12:21:58.056264       1 nonstructuralschema_controller.go:195] Starting NonStructuralSchemaConditionController
	I0210 12:23:13.815117    5644 command_runner.go:130] ! I0210 12:21:58.056315       1 apiapproval_controller.go:189] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0210 12:23:13.815117    5644 command_runner.go:130] ! I0210 12:21:58.056330       1 crd_finalizer.go:269] Starting CRDFinalizer
	I0210 12:23:13.815117    5644 command_runner.go:130] ! I0210 12:21:58.056364       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0210 12:23:13.815169    5644 command_runner.go:130] ! I0210 12:21:58.056531       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0210 12:23:13.815169    5644 command_runner.go:130] ! I0210 12:21:58.082011       1 crdregistration_controller.go:114] Starting crd-autoregister controller
	I0210 12:23:13.815218    5644 command_runner.go:130] ! I0210 12:21:58.082050       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0210 12:23:13.815218    5644 command_runner.go:130] ! I0210 12:21:58.191638       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0210 12:23:13.815218    5644 command_runner.go:130] ! I0210 12:21:58.191858       1 policy_source.go:240] refreshing policies
	I0210 12:23:13.815218    5644 command_runner.go:130] ! I0210 12:21:58.208845       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0210 12:23:13.815280    5644 command_runner.go:130] ! I0210 12:21:58.215564       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0210 12:23:13.815280    5644 command_runner.go:130] ! I0210 12:21:58.251691       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0210 12:23:13.815280    5644 command_runner.go:130] ! I0210 12:21:58.252469       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0210 12:23:13.815325    5644 command_runner.go:130] ! I0210 12:21:58.253733       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0210 12:23:13.815325    5644 command_runner.go:130] ! I0210 12:21:58.254978       1 shared_informer.go:320] Caches are synced for configmaps
	I0210 12:23:13.815325    5644 command_runner.go:130] ! I0210 12:21:58.255116       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0210 12:23:13.815325    5644 command_runner.go:130] ! I0210 12:21:58.255193       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0210 12:23:13.815381    5644 command_runner.go:130] ! I0210 12:21:58.258943       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0210 12:23:13.815424    5644 command_runner.go:130] ! I0210 12:21:58.265934       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0210 12:23:13.815424    5644 command_runner.go:130] ! I0210 12:21:58.282090       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0210 12:23:13.815424    5644 command_runner.go:130] ! I0210 12:21:58.282373       1 aggregator.go:171] initial CRD sync complete...
	I0210 12:23:13.815424    5644 command_runner.go:130] ! I0210 12:21:58.282566       1 autoregister_controller.go:144] Starting autoregister controller
	I0210 12:23:13.815483    5644 command_runner.go:130] ! I0210 12:21:58.282687       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0210 12:23:13.815483    5644 command_runner.go:130] ! I0210 12:21:58.282810       1 cache.go:39] Caches are synced for autoregister controller
	I0210 12:23:13.815483    5644 command_runner.go:130] ! I0210 12:21:59.040333       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0210 12:23:13.815527    5644 command_runner.go:130] ! I0210 12:21:59.110768       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0210 12:23:13.815527    5644 command_runner.go:130] ! W0210 12:21:59.681925       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.29.129.181]
	I0210 12:23:13.815527    5644 command_runner.go:130] ! I0210 12:21:59.683607       1 controller.go:615] quota admission added evaluator for: endpoints
	I0210 12:23:13.815527    5644 command_runner.go:130] ! I0210 12:21:59.696506       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0210 12:23:13.815586    5644 command_runner.go:130] ! I0210 12:22:01.024466       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0210 12:23:13.815586    5644 command_runner.go:130] ! I0210 12:22:01.275329       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0210 12:23:13.815586    5644 command_runner.go:130] ! I0210 12:22:01.581954       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0210 12:23:13.815631    5644 command_runner.go:130] ! I0210 12:22:01.624349       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0210 12:23:13.815631    5644 command_runner.go:130] ! I0210 12:22:01.647929       1 controller.go:615] quota admission added evaluator for: replicasets.apps
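	The apiserver block above (and each block that follows) is captured by minikube's log gatherer, which SSHes into the VM and runs docker logs --tail 400 against the container ID shown in the ssh_runner line. To re-collect the same output by hand from this profile, something like the following should work (the "--" command-passthrough form of minikube ssh is assumed here; the container IDs, e.g. adf520f9b9d7, are specific to this run and will differ after a restart):

	    out/minikube-windows-amd64.exe ssh -p multinode-032400 -- "docker ps"
	    out/minikube-windows-amd64.exe ssh -p multinode-032400 -- "docker logs --tail 400 adf520f9b9d7"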
	I0210 12:23:13.827876    5644 logs.go:123] Gathering logs for kube-scheduler [adf520f9b9d7] ...
	I0210 12:23:13.827876    5644 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adf520f9b9d7"
	I0210 12:23:13.856409    5644 command_runner.go:130] ! I0210 11:59:00.019140       1 serving.go:386] Generated self-signed cert in-memory
	I0210 12:23:13.856460    5644 command_runner.go:130] ! W0210 11:59:02.451878       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0210 12:23:13.856499    5644 command_runner.go:130] ! W0210 11:59:02.452178       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0210 12:23:13.856499    5644 command_runner.go:130] ! W0210 11:59:02.452350       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0210 12:23:13.856565    5644 command_runner.go:130] ! W0210 11:59:02.452478       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0210 12:23:13.856603    5644 command_runner.go:130] ! I0210 11:59:02.632458       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.1"
	I0210 12:23:13.856603    5644 command_runner.go:130] ! I0210 11:59:02.632517       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0210 12:23:13.856654    5644 command_runner.go:130] ! I0210 11:59:02.686485       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0210 12:23:13.856654    5644 command_runner.go:130] ! I0210 11:59:02.686744       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0210 12:23:13.856700    5644 command_runner.go:130] ! I0210 11:59:02.689142       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0210 12:23:13.856700    5644 command_runner.go:130] ! I0210 11:59:02.708240       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0210 12:23:13.856742    5644 command_runner.go:130] ! W0210 11:59:02.715958       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0210 12:23:13.856787    5644 command_runner.go:130] ! W0210 11:59:02.751571       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0210 12:23:13.856835    5644 command_runner.go:130] ! E0210 11:59:02.751658       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0210 12:23:13.856881    5644 command_runner.go:130] ! E0210 11:59:02.717894       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:13.856928    5644 command_runner.go:130] ! W0210 11:59:02.766153       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0210 12:23:13.856973    5644 command_runner.go:130] ! E0210 11:59:02.768039       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:13.857021    5644 command_runner.go:130] ! W0210 11:59:02.768257       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0210 12:23:13.857066    5644 command_runner.go:130] ! E0210 11:59:02.768346       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:13.857113    5644 command_runner.go:130] ! W0210 11:59:02.766789       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0210 12:23:13.857167    5644 command_runner.go:130] ! E0210 11:59:02.768584       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:13.857216    5644 command_runner.go:130] ! W0210 11:59:02.766885       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0210 12:23:13.857216    5644 command_runner.go:130] ! E0210 11:59:02.768838       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:13.857310    5644 command_runner.go:130] ! W0210 11:59:02.769507       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0210 12:23:13.857355    5644 command_runner.go:130] ! E0210 11:59:02.778960       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:13.857404    5644 command_runner.go:130] ! W0210 11:59:02.769773       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0210 12:23:13.857404    5644 command_runner.go:130] ! E0210 11:59:02.779013       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:13.857500    5644 command_runner.go:130] ! W0210 11:59:02.767082       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0210 12:23:13.857532    5644 command_runner.go:130] ! E0210 11:59:02.779037       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:13.857581    5644 command_runner.go:130] ! W0210 11:59:02.767143       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0210 12:23:13.857626    5644 command_runner.go:130] ! E0210 11:59:02.779057       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:13.857668    5644 command_runner.go:130] ! W0210 11:59:02.767174       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0210 12:23:13.857714    5644 command_runner.go:130] ! E0210 11:59:02.779079       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:13.857762    5644 command_runner.go:130] ! W0210 11:59:02.767205       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0210 12:23:13.857807    5644 command_runner.go:130] ! E0210 11:59:02.779095       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:13.857855    5644 command_runner.go:130] ! W0210 11:59:02.767318       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	I0210 12:23:13.857900    5644 command_runner.go:130] ! E0210 11:59:02.779525       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:13.857948    5644 command_runner.go:130] ! W0210 11:59:02.769947       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0210 12:23:13.857948    5644 command_runner.go:130] ! E0210 11:59:02.779843       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:13.858035    5644 command_runner.go:130] ! W0210 11:59:02.769992       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0210 12:23:13.858080    5644 command_runner.go:130] ! E0210 11:59:02.779885       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:13.858129    5644 command_runner.go:130] ! W0210 11:59:02.767047       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0210 12:23:13.858129    5644 command_runner.go:130] ! E0210 11:59:02.779962       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:13.858175    5644 command_runner.go:130] ! W0210 11:59:03.612263       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0210 12:23:13.858267    5644 command_runner.go:130] ! E0210 11:59:03.612405       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:13.858315    5644 command_runner.go:130] ! W0210 11:59:03.698062       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0210 12:23:13.858315    5644 command_runner.go:130] ! E0210 11:59:03.698491       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:13.858361    5644 command_runner.go:130] ! W0210 11:59:03.766764       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0210 12:23:13.858453    5644 command_runner.go:130] ! E0210 11:59:03.767296       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:13.858453    5644 command_runner.go:130] ! W0210 11:59:03.769299       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0210 12:23:13.858500    5644 command_runner.go:130] ! E0210 11:59:03.769340       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:13.858547    5644 command_runner.go:130] ! W0210 11:59:03.811212       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0210 12:23:13.858593    5644 command_runner.go:130] ! E0210 11:59:03.811686       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:13.858638    5644 command_runner.go:130] ! W0210 11:59:03.864096       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0210 12:23:13.858686    5644 command_runner.go:130] ! E0210 11:59:03.864216       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:13.858731    5644 command_runner.go:130] ! W0210 11:59:03.954246       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0210 12:23:13.858773    5644 command_runner.go:130] ! E0210 11:59:03.955266       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0210 12:23:13.858817    5644 command_runner.go:130] ! W0210 11:59:03.968978       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0210 12:23:13.858911    5644 command_runner.go:130] ! E0210 11:59:03.969083       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:13.858952    5644 command_runner.go:130] ! W0210 11:59:04.075142       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0210 12:23:13.858997    5644 command_runner.go:130] ! E0210 11:59:04.075319       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:13.859089    5644 command_runner.go:130] ! W0210 11:59:04.157608       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	I0210 12:23:13.859131    5644 command_runner.go:130] ! E0210 11:59:04.157748       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:13.859189    5644 command_runner.go:130] ! W0210 11:59:04.210028       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0210 12:23:13.859233    5644 command_runner.go:130] ! E0210 11:59:04.210075       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:13.859274    5644 command_runner.go:130] ! W0210 11:59:04.217612       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0210 12:23:13.859274    5644 command_runner.go:130] ! E0210 11:59:04.217750       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:13.859317    5644 command_runner.go:130] ! W0210 11:59:04.256700       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0210 12:23:13.859358    5644 command_runner.go:130] ! E0210 11:59:04.256904       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:13.859400    5644 command_runner.go:130] ! W0210 11:59:04.322264       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0210 12:23:13.859440    5644 command_runner.go:130] ! E0210 11:59:04.322526       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:13.859483    5644 command_runner.go:130] ! W0210 11:59:04.330202       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0210 12:23:13.859523    5644 command_runner.go:130] ! E0210 11:59:04.330584       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:13.859565    5644 command_runner.go:130] ! W0210 11:59:04.340076       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0210 12:23:13.859565    5644 command_runner.go:130] ! E0210 11:59:04.340159       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:13.859606    5644 command_runner.go:130] ! W0210 11:59:05.486363       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0210 12:23:13.859650    5644 command_runner.go:130] ! E0210 11:59:05.486509       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:13.859685    5644 command_runner.go:130] ! W0210 11:59:05.818248       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0210 12:23:13.859714    5644 command_runner.go:130] ! E0210 11:59:05.818617       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:13.859714    5644 command_runner.go:130] ! W0210 11:59:06.039223       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0210 12:23:13.859714    5644 command_runner.go:130] ! E0210 11:59:06.039458       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:13.859714    5644 command_runner.go:130] ! W0210 11:59:06.087664       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0210 12:23:13.859714    5644 command_runner.go:130] ! E0210 11:59:06.087837       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:13.859714    5644 command_runner.go:130] ! I0210 11:59:07.187201       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0210 12:23:13.859714    5644 command_runner.go:130] ! I0210 12:19:35.481531       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0210 12:23:13.859714    5644 command_runner.go:130] ! I0210 12:19:35.493085       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0210 12:23:13.859714    5644 command_runner.go:130] ! I0210 12:19:35.493424       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0210 12:23:13.859714    5644 command_runner.go:130] ! E0210 12:19:35.644426       1 run.go:72] "command failed" err="finished without leader elect"
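	The scheduler block above is mostly startup noise: the RBAC denials ("cannot list resource ... at the cluster scope") stop once the client-ca informer cache syncs at 11:59:07, and the block ends with the scheduler being stopped at 12:19:35, so the final "finished without leader elect" error reads as part of that shutdown rather than a separate failure. If the extension-apiserver-authentication warnings did persist, the fix the log itself suggests is a rolebinding; a sketch with hypothetical values substituted for the log's ROLEBINDING_NAME and YOUR_NS:YOUR_SA placeholders:

	    kubectl create rolebinding extension-apiserver-authentication-reader -n kube-system --role=extension-apiserver-authentication-reader --serviceaccount=kube-system:kube-scheduler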
	I0210 12:23:13.874894    5644 logs.go:123] Gathering logs for coredns [c5b854dbb912] ...
	I0210 12:23:13.874894    5644 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5b854dbb912"
	I0210 12:23:13.904521    5644 command_runner.go:130] > .:53
	I0210 12:23:13.904521    5644 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = fef4eccd4948b7d76fb3b866fead119cfbf3f792b566bc1c23dd6fceadf676c5da93bdd173179090b2c74bc875a74fc76ef5e51dcb6a910d6a8b189470a4fe6b
	I0210 12:23:13.904521    5644 command_runner.go:130] > CoreDNS-1.11.3
	I0210 12:23:13.904521    5644 command_runner.go:130] > linux/amd64, go1.21.11, a6338e9
	I0210 12:23:13.904521    5644 command_runner.go:130] > [INFO] 127.0.0.1:57159 - 43532 "HINFO IN 6094843902663837130.722983224060727812. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.056926603s
	I0210 12:23:13.904521    5644 command_runner.go:130] > [INFO] 10.244.1.2:54851 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000385004s
	I0210 12:23:13.904521    5644 command_runner.go:130] > [INFO] 10.244.1.2:36917 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.071166415s
	I0210 12:23:13.904521    5644 command_runner.go:130] > [INFO] 10.244.1.2:35134 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.03235507s
	I0210 12:23:13.904521    5644 command_runner.go:130] > [INFO] 10.244.1.2:37507 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.161129695s
	I0210 12:23:13.904521    5644 command_runner.go:130] > [INFO] 10.244.0.3:55555 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000265804s
	I0210 12:23:13.904521    5644 command_runner.go:130] > [INFO] 10.244.0.3:44984 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000263303s
	I0210 12:23:13.904521    5644 command_runner.go:130] > [INFO] 10.244.0.3:33618 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.000192703s
	I0210 12:23:13.904521    5644 command_runner.go:130] > [INFO] 10.244.0.3:33701 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.000137201s
	I0210 12:23:13.904521    5644 command_runner.go:130] > [INFO] 10.244.1.2:48882 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000140601s
	I0210 12:23:13.904521    5644 command_runner.go:130] > [INFO] 10.244.1.2:59416 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.037067822s
	I0210 12:23:13.904521    5644 command_runner.go:130] > [INFO] 10.244.1.2:37164 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000261703s
	I0210 12:23:13.904521    5644 command_runner.go:130] > [INFO] 10.244.1.2:47541 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000172402s
	I0210 12:23:13.904521    5644 command_runner.go:130] > [INFO] 10.244.1.2:46192 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.033005976s
	I0210 12:23:13.904521    5644 command_runner.go:130] > [INFO] 10.244.1.2:33821 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000127301s
	I0210 12:23:13.904521    5644 command_runner.go:130] > [INFO] 10.244.1.2:35703 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000116001s
	I0210 12:23:13.904521    5644 command_runner.go:130] > [INFO] 10.244.1.2:37192 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000173702s
	I0210 12:23:13.904521    5644 command_runner.go:130] > [INFO] 10.244.0.3:49175 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000188802s
	I0210 12:23:13.904521    5644 command_runner.go:130] > [INFO] 10.244.0.3:55342 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000074701s
	I0210 12:23:13.904521    5644 command_runner.go:130] > [INFO] 10.244.0.3:52814 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000156002s
	I0210 12:23:13.904521    5644 command_runner.go:130] > [INFO] 10.244.0.3:36559 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000108801s
	I0210 12:23:13.904521    5644 command_runner.go:130] > [INFO] 10.244.0.3:35829 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0000551s
	I0210 12:23:13.904521    5644 command_runner.go:130] > [INFO] 10.244.0.3:38348 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000180702s
	I0210 12:23:13.904521    5644 command_runner.go:130] > [INFO] 10.244.0.3:39722 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000109501s
	I0210 12:23:13.904521    5644 command_runner.go:130] > [INFO] 10.244.0.3:40924 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000201202s
	I0210 12:23:13.904521    5644 command_runner.go:130] > [INFO] 10.244.1.2:34735 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000125801s
	I0210 12:23:13.904521    5644 command_runner.go:130] > [INFO] 10.244.1.2:36088 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000233303s
	I0210 12:23:13.904521    5644 command_runner.go:130] > [INFO] 10.244.1.2:55464 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000190403s
	I0210 12:23:13.904521    5644 command_runner.go:130] > [INFO] 10.244.1.2:57911 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000176202s
	I0210 12:23:13.904521    5644 command_runner.go:130] > [INFO] 10.244.0.3:34977 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000276903s
	I0210 12:23:13.904521    5644 command_runner.go:130] > [INFO] 10.244.0.3:60203 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000181802s
	I0210 12:23:13.904521    5644 command_runner.go:130] > [INFO] 10.244.0.3:40189 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000167902s
	I0210 12:23:13.904521    5644 command_runner.go:130] > [INFO] 10.244.0.3:59008 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000133001s
	I0210 12:23:13.904521    5644 command_runner.go:130] > [INFO] 10.244.1.2:56936 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000191602s
	I0210 12:23:13.904521    5644 command_runner.go:130] > [INFO] 10.244.1.2:36536 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000153201s
	I0210 12:23:13.904521    5644 command_runner.go:130] > [INFO] 10.244.1.2:38856 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000169502s
	I0210 12:23:13.904521    5644 command_runner.go:130] > [INFO] 10.244.1.2:53005 - 5 "PTR IN 1.128.29.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000186102s
	I0210 12:23:13.904521    5644 command_runner.go:130] > [INFO] 10.244.0.3:43643 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00092191s
	I0210 12:23:13.904521    5644 command_runner.go:130] > [INFO] 10.244.0.3:44109 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000341503s
	I0210 12:23:13.904521    5644 command_runner.go:130] > [INFO] 10.244.0.3:37196 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000095701s
	I0210 12:23:13.904521    5644 command_runner.go:130] > [INFO] 10.244.0.3:33917 - 5 "PTR IN 1.128.29.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000152102s
	I0210 12:23:13.904521    5644 command_runner.go:130] > [INFO] SIGTERM: Shutting down servers then terminating
	I0210 12:23:13.905577    5644 command_runner.go:130] > [INFO] plugin/health: Going into lameduck mode for 5s
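	The CoreDNS block above looks healthy up to its SIGTERM: queries arrive from both pod subnets (10.244.0.3 and 10.244.1.2), cluster names such as kubernetes.default.svc.cluster.local and host.minikube.internal resolve NOERROR, and the NXDOMAIN answers are the expected search-path expansions (e.g. kubernetes.default.default.svc.cluster.local) plus an external PTR miss. To produce the same kind of query-log line by hand, a throwaway pod works (busybox:1.36 is an assumed image; anything with nslookup will do):

	    kubectl run dnsprobe --rm -it --restart=Never --image=busybox:1.36 -- nslookup kubernetes.default.svc.cluster.local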
	I0210 12:23:13.908320    5644 logs.go:123] Gathering logs for kindnet [4439940fa5f4] ...
	I0210 12:23:13.908320    5644 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4439940fa5f4"
	I0210 12:23:13.945361    5644 command_runner.go:130] ! I0210 12:08:30.445716       1 main.go:301] handling current node
	I0210 12:23:13.945361    5644 command_runner.go:130] ! I0210 12:08:30.445736       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.945361    5644 command_runner.go:130] ! I0210 12:08:30.445743       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.945361    5644 command_runner.go:130] ! I0210 12:08:30.446276       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:13.945361    5644 command_runner.go:130] ! I0210 12:08:30.446402       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:13.945361    5644 command_runner.go:130] ! I0210 12:08:40.446484       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:13.945492    5644 command_runner.go:130] ! I0210 12:08:40.446649       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:13.945492    5644 command_runner.go:130] ! I0210 12:08:40.447051       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.945492    5644 command_runner.go:130] ! I0210 12:08:40.447089       1 main.go:301] handling current node
	I0210 12:23:13.945492    5644 command_runner.go:130] ! I0210 12:08:40.447173       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.945542    5644 command_runner.go:130] ! I0210 12:08:40.447202       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.945574    5644 command_runner.go:130] ! I0210 12:08:50.445921       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.945574    5644 command_runner.go:130] ! I0210 12:08:50.445988       1 main.go:301] handling current node
	I0210 12:23:13.945574    5644 command_runner.go:130] ! I0210 12:08:50.446008       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.945574    5644 command_runner.go:130] ! I0210 12:08:50.446015       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.945574    5644 command_runner.go:130] ! I0210 12:08:50.446206       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:13.945574    5644 command_runner.go:130] ! I0210 12:08:50.446217       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:13.945664    5644 command_runner.go:130] ! I0210 12:09:00.446480       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.945664    5644 command_runner.go:130] ! I0210 12:09:00.446617       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.945664    5644 command_runner.go:130] ! I0210 12:09:00.446931       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:13.945706    5644 command_runner.go:130] ! I0210 12:09:00.446947       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:13.945706    5644 command_runner.go:130] ! I0210 12:09:00.447078       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.945762    5644 command_runner.go:130] ! I0210 12:09:00.447087       1 main.go:301] handling current node
	I0210 12:23:13.945762    5644 command_runner.go:130] ! I0210 12:09:10.445597       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.945835    5644 command_runner.go:130] ! I0210 12:09:10.445645       1 main.go:301] handling current node
	I0210 12:23:13.945835    5644 command_runner.go:130] ! I0210 12:09:10.445665       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.945835    5644 command_runner.go:130] ! I0210 12:09:10.445671       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.945835    5644 command_runner.go:130] ! I0210 12:09:10.446612       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:13.945835    5644 command_runner.go:130] ! I0210 12:09:10.447083       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:13.945835    5644 command_runner.go:130] ! I0210 12:09:20.451891       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.945952    5644 command_runner.go:130] ! I0210 12:09:20.451928       1 main.go:301] handling current node
	I0210 12:23:13.945952    5644 command_runner.go:130] ! I0210 12:09:20.452043       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.945952    5644 command_runner.go:130] ! I0210 12:09:20.452054       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.945952    5644 command_runner.go:130] ! I0210 12:09:20.452219       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:13.945952    5644 command_runner.go:130] ! I0210 12:09:20.452226       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:13.945952    5644 command_runner.go:130] ! I0210 12:09:30.445685       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.945952    5644 command_runner.go:130] ! I0210 12:09:30.445780       1 main.go:301] handling current node
	I0210 12:23:13.945952    5644 command_runner.go:130] ! I0210 12:09:30.445924       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.946081    5644 command_runner.go:130] ! I0210 12:09:30.445945       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.946081    5644 command_runner.go:130] ! I0210 12:09:30.446110       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:13.946081    5644 command_runner.go:130] ! I0210 12:09:30.446136       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:13.946081    5644 command_runner.go:130] ! I0210 12:09:40.446044       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.946081    5644 command_runner.go:130] ! I0210 12:09:40.446146       1 main.go:301] handling current node
	I0210 12:23:13.946081    5644 command_runner.go:130] ! I0210 12:09:40.446259       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.946081    5644 command_runner.go:130] ! I0210 12:09:40.446288       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.946081    5644 command_runner.go:130] ! I0210 12:09:40.446677       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:13.946210    5644 command_runner.go:130] ! I0210 12:09:40.446692       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:13.946210    5644 command_runner.go:130] ! I0210 12:09:50.449867       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.946210    5644 command_runner.go:130] ! I0210 12:09:50.449979       1 main.go:301] handling current node
	I0210 12:23:13.946210    5644 command_runner.go:130] ! I0210 12:09:50.450078       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.946210    5644 command_runner.go:130] ! I0210 12:09:50.450121       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.946210    5644 command_runner.go:130] ! I0210 12:09:50.450322       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:13.946210    5644 command_runner.go:130] ! I0210 12:09:50.450372       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:13.946352    5644 command_runner.go:130] ! I0210 12:10:00.446642       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:13.946352    5644 command_runner.go:130] ! I0210 12:10:00.446769       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:13.946352    5644 command_runner.go:130] ! I0210 12:10:00.447234       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.946352    5644 command_runner.go:130] ! I0210 12:10:00.447254       1 main.go:301] handling current node
	I0210 12:23:13.946352    5644 command_runner.go:130] ! I0210 12:10:00.447269       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.946442    5644 command_runner.go:130] ! I0210 12:10:00.447275       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.946442    5644 command_runner.go:130] ! I0210 12:10:10.445515       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.946442    5644 command_runner.go:130] ! I0210 12:10:10.445682       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.946442    5644 command_runner.go:130] ! I0210 12:10:10.446184       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:13.946442    5644 command_runner.go:130] ! I0210 12:10:10.446223       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:13.946528    5644 command_runner.go:130] ! I0210 12:10:10.446709       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.946528    5644 command_runner.go:130] ! I0210 12:10:10.447034       1 main.go:301] handling current node
	I0210 12:23:13.946528    5644 command_runner.go:130] ! I0210 12:10:20.446409       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.946528    5644 command_runner.go:130] ! I0210 12:10:20.446529       1 main.go:301] handling current node
	I0210 12:23:13.946528    5644 command_runner.go:130] ! I0210 12:10:20.446553       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.946528    5644 command_runner.go:130] ! I0210 12:10:20.446563       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.946614    5644 command_runner.go:130] ! I0210 12:10:20.446763       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:13.946614    5644 command_runner.go:130] ! I0210 12:10:20.446790       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:13.946614    5644 command_runner.go:130] ! I0210 12:10:30.446373       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.946614    5644 command_runner.go:130] ! I0210 12:10:30.446482       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.946614    5644 command_runner.go:130] ! I0210 12:10:30.446672       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:13.946614    5644 command_runner.go:130] ! I0210 12:10:30.446700       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:13.946614    5644 command_runner.go:130] ! I0210 12:10:30.446792       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.946739    5644 command_runner.go:130] ! I0210 12:10:30.447014       1 main.go:301] handling current node
	I0210 12:23:13.946739    5644 command_runner.go:130] ! I0210 12:10:40.454509       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.946739    5644 command_runner.go:130] ! I0210 12:10:40.454636       1 main.go:301] handling current node
	I0210 12:23:13.946739    5644 command_runner.go:130] ! I0210 12:10:40.454674       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.946739    5644 command_runner.go:130] ! I0210 12:10:40.454863       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.946739    5644 command_runner.go:130] ! I0210 12:10:40.455160       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:13.946739    5644 command_runner.go:130] ! I0210 12:10:40.455261       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:13.946739    5644 command_runner.go:130] ! I0210 12:10:50.449160       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.946870    5644 command_runner.go:130] ! I0210 12:10:50.449355       1 main.go:301] handling current node
	I0210 12:23:13.946870    5644 command_runner.go:130] ! I0210 12:10:50.449395       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.946870    5644 command_runner.go:130] ! I0210 12:10:50.449538       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.946870    5644 command_runner.go:130] ! I0210 12:10:50.450354       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:13.946870    5644 command_runner.go:130] ! I0210 12:10:50.450448       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:13.946870    5644 command_runner.go:130] ! I0210 12:11:00.445904       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:13.946870    5644 command_runner.go:130] ! I0210 12:11:00.446062       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:13.946870    5644 command_runner.go:130] ! I0210 12:11:00.446602       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.946995    5644 command_runner.go:130] ! I0210 12:11:00.446700       1 main.go:301] handling current node
	I0210 12:23:13.946995    5644 command_runner.go:130] ! I0210 12:11:00.446821       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.946995    5644 command_runner.go:130] ! I0210 12:11:00.446837       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.946995    5644 command_runner.go:130] ! I0210 12:11:10.453595       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.947083    5644 command_runner.go:130] ! I0210 12:11:10.453634       1 main.go:301] handling current node
	I0210 12:23:13.947083    5644 command_runner.go:130] ! I0210 12:11:10.453652       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.947083    5644 command_runner.go:130] ! I0210 12:11:10.453660       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.947083    5644 command_runner.go:130] ! I0210 12:11:10.454135       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:13.947083    5644 command_runner.go:130] ! I0210 12:11:10.454241       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:13.947167    5644 command_runner.go:130] ! I0210 12:11:20.446533       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:13.947167    5644 command_runner.go:130] ! I0210 12:11:20.446903       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:13.947167    5644 command_runner.go:130] ! I0210 12:11:20.447462       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.947167    5644 command_runner.go:130] ! I0210 12:11:20.447548       1 main.go:301] handling current node
	I0210 12:23:13.947167    5644 command_runner.go:130] ! I0210 12:11:20.447565       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.947251    5644 command_runner.go:130] ! I0210 12:11:20.447572       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.947251    5644 command_runner.go:130] ! I0210 12:11:30.445620       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.947251    5644 command_runner.go:130] ! I0210 12:11:30.445748       1 main.go:301] handling current node
	I0210 12:23:13.947251    5644 command_runner.go:130] ! I0210 12:11:30.445870       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.947251    5644 command_runner.go:130] ! I0210 12:11:30.445907       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.947336    5644 command_runner.go:130] ! I0210 12:11:30.446320       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:13.947336    5644 command_runner.go:130] ! I0210 12:11:30.446414       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:13.947336    5644 command_runner.go:130] ! I0210 12:11:40.446346       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.947336    5644 command_runner.go:130] ! I0210 12:11:40.446417       1 main.go:301] handling current node
	I0210 12:23:13.947336    5644 command_runner.go:130] ! I0210 12:11:40.446436       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.947420    5644 command_runner.go:130] ! I0210 12:11:40.446443       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.947420    5644 command_runner.go:130] ! I0210 12:11:40.446780       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:13.947420    5644 command_runner.go:130] ! I0210 12:11:40.446846       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:13.947420    5644 command_runner.go:130] ! I0210 12:11:50.447155       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:13.947420    5644 command_runner.go:130] ! I0210 12:11:50.447207       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:13.947505    5644 command_runner.go:130] ! I0210 12:11:50.447630       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.947505    5644 command_runner.go:130] ! I0210 12:11:50.447699       1 main.go:301] handling current node
	I0210 12:23:13.947505    5644 command_runner.go:130] ! I0210 12:11:50.447842       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.947505    5644 command_runner.go:130] ! I0210 12:11:50.447929       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.947592    5644 command_runner.go:130] ! I0210 12:12:00.449885       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.947592    5644 command_runner.go:130] ! I0210 12:12:00.450002       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.947592    5644 command_runner.go:130] ! I0210 12:12:00.450294       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:13.947592    5644 command_runner.go:130] ! I0210 12:12:00.450490       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:13.947592    5644 command_runner.go:130] ! I0210 12:12:00.450618       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.947592    5644 command_runner.go:130] ! I0210 12:12:00.450627       1 main.go:301] handling current node
	I0210 12:23:13.947705    5644 command_runner.go:130] ! I0210 12:12:10.449160       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.947705    5644 command_runner.go:130] ! I0210 12:12:10.449228       1 main.go:301] handling current node
	I0210 12:23:13.947705    5644 command_runner.go:130] ! I0210 12:12:10.449260       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.947705    5644 command_runner.go:130] ! I0210 12:12:10.449282       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.947705    5644 command_runner.go:130] ! I0210 12:12:10.449463       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:13.947705    5644 command_runner.go:130] ! I0210 12:12:10.449474       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:13.947705    5644 command_runner.go:130] ! I0210 12:12:20.447518       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.947705    5644 command_runner.go:130] ! I0210 12:12:20.447655       1 main.go:301] handling current node
	I0210 12:23:13.947705    5644 command_runner.go:130] ! I0210 12:12:20.447676       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.947840    5644 command_runner.go:130] ! I0210 12:12:20.447684       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.947840    5644 command_runner.go:130] ! I0210 12:12:20.448046       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:13.947840    5644 command_runner.go:130] ! I0210 12:12:20.448157       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:13.947840    5644 command_runner.go:130] ! I0210 12:12:30.446585       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.947840    5644 command_runner.go:130] ! I0210 12:12:30.446758       1 main.go:301] handling current node
	I0210 12:23:13.947932    5644 command_runner.go:130] ! I0210 12:12:30.446779       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.947932    5644 command_runner.go:130] ! I0210 12:12:30.446786       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.947932    5644 command_runner.go:130] ! I0210 12:12:30.447218       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:13.947932    5644 command_runner.go:130] ! I0210 12:12:30.447298       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:13.947932    5644 command_runner.go:130] ! I0210 12:12:40.445769       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.948018    5644 command_runner.go:130] ! I0210 12:12:40.445848       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.948018    5644 command_runner.go:130] ! I0210 12:12:40.446043       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:13.948018    5644 command_runner.go:130] ! I0210 12:12:40.446125       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:13.948018    5644 command_runner.go:130] ! I0210 12:12:40.446266       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.948102    5644 command_runner.go:130] ! I0210 12:12:40.446279       1 main.go:301] handling current node
	I0210 12:23:13.948102    5644 command_runner.go:130] ! I0210 12:12:50.446416       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.948102    5644 command_runner.go:130] ! I0210 12:12:50.446515       1 main.go:301] handling current node
	I0210 12:23:13.948102    5644 command_runner.go:130] ! I0210 12:12:50.446540       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.948102    5644 command_runner.go:130] ! I0210 12:12:50.446549       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.948188    5644 command_runner.go:130] ! I0210 12:12:50.447110       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:13.948188    5644 command_runner.go:130] ! I0210 12:12:50.447222       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:13.948188    5644 command_runner.go:130] ! I0210 12:13:00.445595       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.948188    5644 command_runner.go:130] ! I0210 12:13:00.445741       1 main.go:301] handling current node
	I0210 12:23:13.948188    5644 command_runner.go:130] ! I0210 12:13:00.445762       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.948272    5644 command_runner.go:130] ! I0210 12:13:00.445770       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.948272    5644 command_runner.go:130] ! I0210 12:13:00.446069       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:13.948272    5644 command_runner.go:130] ! I0210 12:13:00.446101       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:13.948272    5644 command_runner.go:130] ! I0210 12:13:10.454457       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.948358    5644 command_runner.go:130] ! I0210 12:13:10.454577       1 main.go:301] handling current node
	I0210 12:23:13.948358    5644 command_runner.go:130] ! I0210 12:13:10.454598       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.948358    5644 command_runner.go:130] ! I0210 12:13:10.454605       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.948358    5644 command_runner.go:130] ! I0210 12:13:10.455246       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:13.948358    5644 command_runner.go:130] ! I0210 12:13:10.455360       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:13.948442    5644 command_runner.go:130] ! I0210 12:13:20.446944       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.948442    5644 command_runner.go:130] ! I0210 12:13:20.447287       1 main.go:301] handling current node
	I0210 12:23:13.948442    5644 command_runner.go:130] ! I0210 12:13:20.447395       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.948442    5644 command_runner.go:130] ! I0210 12:13:20.447410       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.948442    5644 command_runner.go:130] ! I0210 12:13:20.447940       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:13.948530    5644 command_runner.go:130] ! I0210 12:13:20.448031       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:13.948530    5644 command_runner.go:130] ! I0210 12:13:30.446279       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.948530    5644 command_runner.go:130] ! I0210 12:13:30.446594       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.948530    5644 command_runner.go:130] ! I0210 12:13:30.446926       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:13.948530    5644 command_runner.go:130] ! I0210 12:13:30.447035       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:13.948615    5644 command_runner.go:130] ! I0210 12:13:30.447189       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.948615    5644 command_runner.go:130] ! I0210 12:13:30.447310       1 main.go:301] handling current node
	I0210 12:23:13.948615    5644 command_runner.go:130] ! I0210 12:13:40.446967       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.948615    5644 command_runner.go:130] ! I0210 12:13:40.447352       1 main.go:301] handling current node
	I0210 12:23:13.948615    5644 command_runner.go:130] ! I0210 12:13:40.447404       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.948739    5644 command_runner.go:130] ! I0210 12:13:40.447743       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.948739    5644 command_runner.go:130] ! I0210 12:13:40.448142       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:13.948739    5644 command_runner.go:130] ! I0210 12:13:40.448255       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:13.948739    5644 command_runner.go:130] ! I0210 12:13:50.446777       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.948739    5644 command_runner.go:130] ! I0210 12:13:50.446915       1 main.go:301] handling current node
	I0210 12:23:13.948739    5644 command_runner.go:130] ! I0210 12:13:50.446936       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.948739    5644 command_runner.go:130] ! I0210 12:13:50.447424       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.948868    5644 command_runner.go:130] ! I0210 12:13:50.447787       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:13.948868    5644 command_runner.go:130] ! I0210 12:13:50.447846       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:13.948868    5644 command_runner.go:130] ! I0210 12:14:00.446345       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.948868    5644 command_runner.go:130] ! I0210 12:14:00.446447       1 main.go:301] handling current node
	I0210 12:23:13.948868    5644 command_runner.go:130] ! I0210 12:14:00.446468       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.948868    5644 command_runner.go:130] ! I0210 12:14:00.446475       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.948868    5644 command_runner.go:130] ! I0210 12:14:00.447158       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:13.948868    5644 command_runner.go:130] ! I0210 12:14:00.447251       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:13.948988    5644 command_runner.go:130] ! I0210 12:14:10.454046       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.948988    5644 command_runner.go:130] ! I0210 12:14:10.454150       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.948988    5644 command_runner.go:130] ! I0210 12:14:10.454908       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:13.948988    5644 command_runner.go:130] ! I0210 12:14:10.454981       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:13.948988    5644 command_runner.go:130] ! I0210 12:14:10.455630       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.949076    5644 command_runner.go:130] ! I0210 12:14:10.455665       1 main.go:301] handling current node
	I0210 12:23:13.949076    5644 command_runner.go:130] ! I0210 12:14:20.447582       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.949076    5644 command_runner.go:130] ! I0210 12:14:20.447632       1 main.go:301] handling current node
	I0210 12:23:13.949076    5644 command_runner.go:130] ! I0210 12:14:20.447652       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.949076    5644 command_runner.go:130] ! I0210 12:14:20.447660       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.949164    5644 command_runner.go:130] ! I0210 12:14:20.447892       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:13.949164    5644 command_runner.go:130] ! I0210 12:14:20.447961       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:13.949164    5644 command_runner.go:130] ! I0210 12:14:30.445562       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.949164    5644 command_runner.go:130] ! I0210 12:14:30.445636       1 main.go:301] handling current node
	I0210 12:23:13.949164    5644 command_runner.go:130] ! I0210 12:14:30.445655       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.949249    5644 command_runner.go:130] ! I0210 12:14:30.445665       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.949249    5644 command_runner.go:130] ! I0210 12:14:30.446340       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:13.949249    5644 command_runner.go:130] ! I0210 12:14:30.446436       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:13.949249    5644 command_runner.go:130] ! I0210 12:14:40.445894       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.949249    5644 command_runner.go:130] ! I0210 12:14:40.445963       1 main.go:301] handling current node
	I0210 12:23:13.949334    5644 command_runner.go:130] ! I0210 12:14:40.446050       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.949334    5644 command_runner.go:130] ! I0210 12:14:40.446062       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.949334    5644 command_runner.go:130] ! I0210 12:14:40.446274       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:13.949334    5644 command_runner.go:130] ! I0210 12:14:40.446298       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:13.949334    5644 command_runner.go:130] ! I0210 12:14:50.446519       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.949418    5644 command_runner.go:130] ! I0210 12:14:50.446627       1 main.go:301] handling current node
	I0210 12:23:13.949418    5644 command_runner.go:130] ! I0210 12:14:50.446648       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.949418    5644 command_runner.go:130] ! I0210 12:14:50.446655       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.949418    5644 command_runner.go:130] ! I0210 12:14:50.447165       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:13.949418    5644 command_runner.go:130] ! I0210 12:14:50.447285       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:13.949503    5644 command_runner.go:130] ! I0210 12:15:00.452587       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.949503    5644 command_runner.go:130] ! I0210 12:15:00.452709       1 main.go:301] handling current node
	I0210 12:23:13.949503    5644 command_runner.go:130] ! I0210 12:15:00.452728       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.949503    5644 command_runner.go:130] ! I0210 12:15:00.452735       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.949503    5644 command_runner.go:130] ! I0210 12:15:00.452961       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:13.949503    5644 command_runner.go:130] ! I0210 12:15:00.452989       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:13.949588    5644 command_runner.go:130] ! I0210 12:15:10.453753       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.949588    5644 command_runner.go:130] ! I0210 12:15:10.453980       1 main.go:301] handling current node
	I0210 12:23:13.949588    5644 command_runner.go:130] ! I0210 12:15:10.455477       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.949588    5644 command_runner.go:130] ! I0210 12:15:10.455590       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.949588    5644 command_runner.go:130] ! I0210 12:15:10.456459       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:13.949588    5644 command_runner.go:130] ! I0210 12:15:10.456484       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:13.949588    5644 command_runner.go:130] ! I0210 12:15:20.445894       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.949702    5644 command_runner.go:130] ! I0210 12:15:20.446019       1 main.go:301] handling current node
	I0210 12:23:13.949702    5644 command_runner.go:130] ! I0210 12:15:20.446055       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.949702    5644 command_runner.go:130] ! I0210 12:15:20.446076       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.949702    5644 command_runner.go:130] ! I0210 12:15:20.446274       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:13.949702    5644 command_runner.go:130] ! I0210 12:15:20.446363       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:13.949702    5644 command_runner.go:130] ! I0210 12:15:30.446394       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.949702    5644 command_runner.go:130] ! I0210 12:15:30.446444       1 main.go:301] handling current node
	I0210 12:23:13.949702    5644 command_runner.go:130] ! I0210 12:15:30.446463       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.949837    5644 command_runner.go:130] ! I0210 12:15:30.446470       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.949837    5644 command_runner.go:130] ! I0210 12:15:30.446861       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:13.949837    5644 command_runner.go:130] ! I0210 12:15:30.446930       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:13.949837    5644 command_runner.go:130] ! I0210 12:15:40.453869       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.949837    5644 command_runner.go:130] ! I0210 12:15:40.454189       1 main.go:301] handling current node
	I0210 12:23:13.949837    5644 command_runner.go:130] ! I0210 12:15:40.454382       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.949936    5644 command_runner.go:130] ! I0210 12:15:40.454457       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.949936    5644 command_runner.go:130] ! I0210 12:15:40.454869       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:13.949936    5644 command_runner.go:130] ! I0210 12:15:40.454895       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:13.949936    5644 command_runner.go:130] ! I0210 12:15:50.446531       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.949936    5644 command_runner.go:130] ! I0210 12:15:50.446662       1 main.go:301] handling current node
	I0210 12:23:13.950010    5644 command_runner.go:130] ! I0210 12:15:50.446685       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.950010    5644 command_runner.go:130] ! I0210 12:15:50.446693       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.950010    5644 command_runner.go:130] ! I0210 12:15:50.447023       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:13.950010    5644 command_runner.go:130] ! I0210 12:15:50.447095       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:13.950010    5644 command_runner.go:130] ! I0210 12:16:00.446838       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.950083    5644 command_runner.go:130] ! I0210 12:16:00.447006       1 main.go:301] handling current node
	I0210 12:23:13.950083    5644 command_runner.go:130] ! I0210 12:16:00.447108       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.950083    5644 command_runner.go:130] ! I0210 12:16:00.447566       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.950083    5644 command_runner.go:130] ! I0210 12:16:00.448114       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:13.950083    5644 command_runner.go:130] ! I0210 12:16:00.448216       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:13.950083    5644 command_runner.go:130] ! I0210 12:16:10.445857       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.950155    5644 command_runner.go:130] ! I0210 12:16:10.445967       1 main.go:301] handling current node
	I0210 12:23:13.950155    5644 command_runner.go:130] ! I0210 12:16:10.445988       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.950155    5644 command_runner.go:130] ! I0210 12:16:10.445996       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.950155    5644 command_runner.go:130] ! I0210 12:16:10.446184       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:13.950228    5644 command_runner.go:130] ! I0210 12:16:10.446207       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:13.950228    5644 command_runner.go:130] ! I0210 12:16:20.453730       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.950228    5644 command_runner.go:130] ! I0210 12:16:20.453928       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.950228    5644 command_runner.go:130] ! I0210 12:16:20.454430       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:13.950228    5644 command_runner.go:130] ! I0210 12:16:20.454520       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:13.950300    5644 command_runner.go:130] ! I0210 12:16:20.454929       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.950300    5644 command_runner.go:130] ! I0210 12:16:20.454975       1 main.go:301] handling current node
	I0210 12:23:13.950300    5644 command_runner.go:130] ! I0210 12:16:30.445927       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.950300    5644 command_runner.go:130] ! I0210 12:16:30.446036       1 main.go:301] handling current node
	I0210 12:23:13.950373    5644 command_runner.go:130] ! I0210 12:16:30.446057       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.950405    5644 command_runner.go:130] ! I0210 12:16:30.446065       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.950405    5644 command_runner.go:130] ! I0210 12:16:30.446315       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:13.950430    5644 command_runner.go:130] ! I0210 12:16:30.446373       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:13.950430    5644 command_runner.go:130] ! I0210 12:16:40.446863       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:13.950430    5644 command_runner.go:130] ! I0210 12:16:40.446966       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:13.950430    5644 command_runner.go:130] ! I0210 12:16:40.447288       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.950430    5644 command_runner.go:130] ! I0210 12:16:40.447365       1 main.go:301] handling current node
	I0210 12:23:13.950430    5644 command_runner.go:130] ! I0210 12:16:40.447383       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.950430    5644 command_runner.go:130] ! I0210 12:16:40.447389       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.950430    5644 command_runner.go:130] ! I0210 12:16:50.447339       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.950430    5644 command_runner.go:130] ! I0210 12:16:50.447453       1 main.go:301] handling current node
	I0210 12:23:13.950430    5644 command_runner.go:130] ! I0210 12:16:50.447476       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.950430    5644 command_runner.go:130] ! I0210 12:16:50.447484       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.950430    5644 command_runner.go:130] ! I0210 12:16:50.448045       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:13.950430    5644 command_runner.go:130] ! I0210 12:16:50.448138       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:13.950430    5644 command_runner.go:130] ! I0210 12:17:00.447665       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.950430    5644 command_runner.go:130] ! I0210 12:17:00.447898       1 main.go:301] handling current node
	I0210 12:23:13.950430    5644 command_runner.go:130] ! I0210 12:17:00.447937       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.950430    5644 command_runner.go:130] ! I0210 12:17:00.448013       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.950430    5644 command_runner.go:130] ! I0210 12:17:00.448741       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:13.950430    5644 command_runner.go:130] ! I0210 12:17:00.448921       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:13.950430    5644 command_runner.go:130] ! I0210 12:17:10.453664       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.950430    5644 command_runner.go:130] ! I0210 12:17:10.453771       1 main.go:301] handling current node
	I0210 12:23:13.950430    5644 command_runner.go:130] ! I0210 12:17:10.453792       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.950430    5644 command_runner.go:130] ! I0210 12:17:10.453831       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.950430    5644 command_runner.go:130] ! I0210 12:17:10.454596       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:13.950430    5644 command_runner.go:130] ! I0210 12:17:10.454619       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:13.950430    5644 command_runner.go:130] ! I0210 12:17:20.453960       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.950430    5644 command_runner.go:130] ! I0210 12:17:20.454001       1 main.go:301] handling current node
	I0210 12:23:13.950430    5644 command_runner.go:130] ! I0210 12:17:20.454018       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.950430    5644 command_runner.go:130] ! I0210 12:17:20.454024       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.950430    5644 command_runner.go:130] ! I0210 12:17:20.454198       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:13.950430    5644 command_runner.go:130] ! I0210 12:17:20.454208       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:13.950430    5644 command_runner.go:130] ! I0210 12:17:30.445717       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.950430    5644 command_runner.go:130] ! I0210 12:17:30.445917       1 main.go:301] handling current node
	I0210 12:23:13.950430    5644 command_runner.go:130] ! I0210 12:17:30.445940       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.950430    5644 command_runner.go:130] ! I0210 12:17:30.445949       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.950430    5644 command_runner.go:130] ! I0210 12:17:40.452548       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.950430    5644 command_runner.go:130] ! I0210 12:17:40.452740       1 main.go:301] handling current node
	I0210 12:23:13.950430    5644 command_runner.go:130] ! I0210 12:17:40.452774       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.950430    5644 command_runner.go:130] ! I0210 12:17:40.452843       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.950430    5644 command_runner.go:130] ! I0210 12:17:40.453042       1 main.go:297] Handling node with IPs: map[172.29.129.10:{}]
	I0210 12:23:13.950430    5644 command_runner.go:130] ! I0210 12:17:40.453135       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.4.0/24] 
	I0210 12:23:13.950430    5644 command_runner.go:130] ! I0210 12:17:40.453247       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.4.0/24 Src: <nil> Gw: 172.29.129.10 Flags: [] Table: 0 Realm: 0} 
	I0210 12:23:13.950430    5644 command_runner.go:130] ! I0210 12:17:50.446275       1 main.go:297] Handling node with IPs: map[172.29.129.10:{}]
	I0210 12:23:13.950962    5644 command_runner.go:130] ! I0210 12:17:50.446319       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.4.0/24] 
	I0210 12:23:13.950962    5644 command_runner.go:130] ! I0210 12:17:50.447189       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.950962    5644 command_runner.go:130] ! I0210 12:17:50.447219       1 main.go:301] handling current node
	I0210 12:23:13.950962    5644 command_runner.go:130] ! I0210 12:17:50.447234       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.950962    5644 command_runner.go:130] ! I0210 12:17:50.447365       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.950962    5644 command_runner.go:130] ! I0210 12:18:00.449743       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.951038    5644 command_runner.go:130] ! I0210 12:18:00.449961       1 main.go:301] handling current node
	I0210 12:23:13.951038    5644 command_runner.go:130] ! I0210 12:18:00.449983       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.951038    5644 command_runner.go:130] ! I0210 12:18:00.449993       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.951038    5644 command_runner.go:130] ! I0210 12:18:00.450437       1 main.go:297] Handling node with IPs: map[172.29.129.10:{}]
	I0210 12:23:13.951038    5644 command_runner.go:130] ! I0210 12:18:00.450512       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.4.0/24] 
	I0210 12:23:13.951038    5644 command_runner.go:130] ! I0210 12:18:10.454513       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.951038    5644 command_runner.go:130] ! I0210 12:18:10.455074       1 main.go:301] handling current node
	I0210 12:23:13.951038    5644 command_runner.go:130] ! I0210 12:18:10.455189       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.951038    5644 command_runner.go:130] ! I0210 12:18:10.455203       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.951038    5644 command_runner.go:130] ! I0210 12:18:10.455514       1 main.go:297] Handling node with IPs: map[172.29.129.10:{}]
	I0210 12:23:13.951038    5644 command_runner.go:130] ! I0210 12:18:10.455628       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.4.0/24] 
	I0210 12:23:13.951038    5644 command_runner.go:130] ! I0210 12:18:20.446904       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.951038    5644 command_runner.go:130] ! I0210 12:18:20.446944       1 main.go:301] handling current node
	I0210 12:23:13.951038    5644 command_runner.go:130] ! I0210 12:18:20.446964       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.951038    5644 command_runner.go:130] ! I0210 12:18:20.446971       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.951038    5644 command_runner.go:130] ! I0210 12:18:20.447447       1 main.go:297] Handling node with IPs: map[172.29.129.10:{}]
	I0210 12:23:13.951038    5644 command_runner.go:130] ! I0210 12:18:20.447539       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.4.0/24] 
	I0210 12:23:13.951038    5644 command_runner.go:130] ! I0210 12:18:30.445669       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.951038    5644 command_runner.go:130] ! I0210 12:18:30.445724       1 main.go:301] handling current node
	I0210 12:23:13.951038    5644 command_runner.go:130] ! I0210 12:18:30.445744       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.951038    5644 command_runner.go:130] ! I0210 12:18:30.445752       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.951038    5644 command_runner.go:130] ! I0210 12:18:30.446236       1 main.go:297] Handling node with IPs: map[172.29.129.10:{}]
	I0210 12:23:13.951038    5644 command_runner.go:130] ! I0210 12:18:30.446332       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.4.0/24] 
	I0210 12:23:13.951038    5644 command_runner.go:130] ! I0210 12:18:40.449074       1 main.go:297] Handling node with IPs: map[172.29.129.10:{}]
	I0210 12:23:13.951038    5644 command_runner.go:130] ! I0210 12:18:40.449128       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.4.0/24] 
	I0210 12:23:13.951038    5644 command_runner.go:130] ! I0210 12:18:40.449535       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.951038    5644 command_runner.go:130] ! I0210 12:18:40.449551       1 main.go:301] handling current node
	I0210 12:23:13.951038    5644 command_runner.go:130] ! I0210 12:18:40.449565       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.951038    5644 command_runner.go:130] ! I0210 12:18:40.449570       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.951038    5644 command_runner.go:130] ! I0210 12:18:50.446047       1 main.go:297] Handling node with IPs: map[172.29.129.10:{}]
	I0210 12:23:13.951038    5644 command_runner.go:130] ! I0210 12:18:50.446175       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.4.0/24] 
	I0210 12:23:13.951038    5644 command_runner.go:130] ! I0210 12:18:50.446614       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.951038    5644 command_runner.go:130] ! I0210 12:18:50.446823       1 main.go:301] handling current node
	I0210 12:23:13.951038    5644 command_runner.go:130] ! I0210 12:18:50.446915       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.951038    5644 command_runner.go:130] ! I0210 12:18:50.447008       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.951038    5644 command_runner.go:130] ! I0210 12:19:00.448621       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.951038    5644 command_runner.go:130] ! I0210 12:19:00.448734       1 main.go:301] handling current node
	I0210 12:23:13.951038    5644 command_runner.go:130] ! I0210 12:19:00.448755       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.951562    5644 command_runner.go:130] ! I0210 12:19:00.448763       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.951562    5644 command_runner.go:130] ! I0210 12:19:00.449245       1 main.go:297] Handling node with IPs: map[172.29.129.10:{}]
	I0210 12:23:13.951562    5644 command_runner.go:130] ! I0210 12:19:00.449348       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.4.0/24] 
	I0210 12:23:13.951562    5644 command_runner.go:130] ! I0210 12:19:10.454264       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.951562    5644 command_runner.go:130] ! I0210 12:19:10.454382       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.951635    5644 command_runner.go:130] ! I0210 12:19:10.454633       1 main.go:297] Handling node with IPs: map[172.29.129.10:{}]
	I0210 12:23:13.951635    5644 command_runner.go:130] ! I0210 12:19:10.454659       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.4.0/24] 
	I0210 12:23:13.951635    5644 command_runner.go:130] ! I0210 12:19:10.454747       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.951635    5644 command_runner.go:130] ! I0210 12:19:10.454768       1 main.go:301] handling current node
	I0210 12:23:13.951635    5644 command_runner.go:130] ! I0210 12:19:20.454254       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.951635    5644 command_runner.go:130] ! I0210 12:19:20.454380       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.951635    5644 command_runner.go:130] ! I0210 12:19:20.454728       1 main.go:297] Handling node with IPs: map[172.29.129.10:{}]
	I0210 12:23:13.951635    5644 command_runner.go:130] ! I0210 12:19:20.454888       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.4.0/24] 
	I0210 12:23:13.951635    5644 command_runner.go:130] ! I0210 12:19:20.455123       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.951635    5644 command_runner.go:130] ! I0210 12:19:20.455136       1 main.go:301] handling current node
	I0210 12:23:13.951635    5644 command_runner.go:130] ! I0210 12:19:30.445655       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.951635    5644 command_runner.go:130] ! I0210 12:19:30.445708       1 main.go:301] handling current node
	I0210 12:23:13.951635    5644 command_runner.go:130] ! I0210 12:19:30.445727       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.951635    5644 command_runner.go:130] ! I0210 12:19:30.445734       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.951635    5644 command_runner.go:130] ! I0210 12:19:30.446565       1 main.go:297] Handling node with IPs: map[172.29.129.10:{}]
	I0210 12:23:13.951635    5644 command_runner.go:130] ! I0210 12:19:30.446658       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.4.0/24] 
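	[editor's note] The kindnet lines above follow a fixed ~10s cycle: list all nodes, log "handling current node" for the node the daemon runs on, and for each remote node record its pod CIDR and (when the CIDR or node IP changes, as at 12:17:40 when m03 moved to 172.29.129.10 / 10.244.4.0/24) install a route to that CIDR via the node's IP. The sketch below is a minimal illustration of that reconciliation pattern, not kindnet's actual source; the node type and applyRoute helper are hypothetical stand-ins (kindnet programs routes via netlink, cf. routes.go:62 above).

	// Minimal sketch of the per-node reconciliation loop the log lines reflect.
	// All types and helpers here are illustrative assumptions, not kindnet code.
	package main

	import (
		"log"
		"time"
	)

	type node struct {
		name    string // e.g. multinode-032400-m02
		ip      string // InternalIP, e.g. 172.29.143.51
		podCIDR string // e.g. 10.244.1.0/24
	}

	// applyRoute is a hypothetical stand-in for installing
	// "ip route replace <cidr> via <gw>" (kindnet does this via netlink).
	func applyRoute(cidr, gw string) error {
		log.Printf("Adding route {Dst: %s Gw: %s}", cidr, gw)
		return nil
	}

	// reconcile walks the node list once, mirroring one burst of log lines.
	func reconcile(self string, nodes []node) {
		for _, n := range nodes {
			log.Printf("Handling node with IPs: map[%s:{}]", n.ip)
			if n.name == self {
				// No route is needed to the node we are running on.
				log.Print("handling current node")
				continue
			}
			log.Printf("Node %s has CIDR [%s]", n.name, n.podCIDR)
			if err := applyRoute(n.podCIDR, n.ip); err != nil {
				log.Printf("route for %s failed: %v", n.name, err)
			}
		}
	}

	func main() {
		// Example inputs taken from the log above (node names, IPs, CIDRs).
		nodes := []node{
			{"multinode-032400", "172.29.136.201", "10.244.0.0/24"},
			{"multinode-032400-m02", "172.29.143.51", "10.244.1.0/24"},
			{"multinode-032400-m03", "172.29.138.52", "10.244.2.0/24"},
		}
		for range time.Tick(10 * time.Second) {
			reconcile("multinode-032400", nodes)
		}
	}

	Because the loop re-applies routes idempotently each cycle, a node IP or CIDR change (m03 above) is healed on the next tick with no extra coordination, which is why the same three "Handling node" bursts repeat throughout the log.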
	I0210 12:23:13.968176    5644 logs.go:123] Gathering logs for kube-controller-manager [9408ce83d7d3] ...
	I0210 12:23:13.968176    5644 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9408ce83d7d3"
	I0210 12:23:13.997324    5644 command_runner.go:130] ! I0210 11:58:59.087911       1 serving.go:386] Generated self-signed cert in-memory
	I0210 12:23:13.998248    5644 command_runner.go:130] ! I0210 11:59:00.079684       1 controllermanager.go:185] "Starting" version="v1.32.1"
	I0210 12:23:13.998248    5644 command_runner.go:130] ! I0210 11:59:00.079828       1 controllermanager.go:187] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0210 12:23:13.998281    5644 command_runner.go:130] ! I0210 11:59:00.082257       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0210 12:23:13.998281    5644 command_runner.go:130] ! I0210 11:59:00.082445       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0210 12:23:13.998281    5644 command_runner.go:130] ! I0210 11:59:00.082714       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0210 12:23:13.998281    5644 command_runner.go:130] ! I0210 11:59:00.083168       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0210 12:23:13.998281    5644 command_runner.go:130] ! I0210 11:59:07.525093       1 controllermanager.go:765] "Started controller" controller="serviceaccount-token-controller"
	I0210 12:23:13.998281    5644 command_runner.go:130] ! I0210 11:59:07.525455       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0210 12:23:13.998281    5644 command_runner.go:130] ! I0210 11:59:07.550577       1 controllermanager.go:765] "Started controller" controller="ttl-controller"
	I0210 12:23:13.998281    5644 command_runner.go:130] ! I0210 11:59:07.550894       1 ttl_controller.go:127] "Starting TTL controller" logger="ttl-controller"
	I0210 12:23:13.998281    5644 command_runner.go:130] ! I0210 11:59:07.550923       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0210 12:23:13.998281    5644 command_runner.go:130] ! I0210 11:59:07.575286       1 controllermanager.go:765] "Started controller" controller="token-cleaner-controller"
	I0210 12:23:13.998281    5644 command_runner.go:130] ! I0210 11:59:07.575386       1 tokencleaner.go:117] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0210 12:23:13.998281    5644 command_runner.go:130] ! I0210 11:59:07.575519       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0210 12:23:13.998281    5644 command_runner.go:130] ! I0210 11:59:07.575529       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0210 12:23:13.998281    5644 command_runner.go:130] ! I0210 11:59:07.608411       1 controllermanager.go:765] "Started controller" controller="ephemeral-volume-controller"
	I0210 12:23:13.998281    5644 command_runner.go:130] ! I0210 11:59:07.608435       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0210 12:23:13.998281    5644 command_runner.go:130] ! I0210 11:59:07.608574       1 controller.go:173] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0210 12:23:13.998281    5644 command_runner.go:130] ! I0210 11:59:07.608594       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0210 12:23:13.998281    5644 command_runner.go:130] ! I0210 11:59:07.626624       1 shared_informer.go:320] Caches are synced for tokens
	I0210 12:23:13.998281    5644 command_runner.go:130] ! I0210 11:59:07.632106       1 controllermanager.go:765] "Started controller" controller="job-controller"
	I0210 12:23:13.998281    5644 command_runner.go:130] ! I0210 11:59:07.632319       1 job_controller.go:243] "Starting job controller" logger="job-controller"
	I0210 12:23:13.998281    5644 command_runner.go:130] ! I0210 11:59:07.632332       1 shared_informer.go:313] Waiting for caches to sync for job
	I0210 12:23:13.998281    5644 command_runner.go:130] ! I0210 11:59:07.694202       1 controllermanager.go:765] "Started controller" controller="cronjob-controller"
	I0210 12:23:13.998281    5644 command_runner.go:130] ! I0210 11:59:07.694994       1 cronjob_controllerv2.go:145] "Starting cronjob controller v2" logger="cronjob-controller"
	I0210 12:23:13.998281    5644 command_runner.go:130] ! I0210 11:59:07.697650       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0210 12:23:13.998281    5644 command_runner.go:130] ! I0210 11:59:07.765406       1 controllermanager.go:765] "Started controller" controller="deployment-controller"
	I0210 12:23:13.998281    5644 command_runner.go:130] ! I0210 11:59:07.765979       1 deployment_controller.go:173] "Starting controller" logger="deployment-controller" controller="deployment"
	I0210 12:23:13.998812    5644 command_runner.go:130] ! I0210 11:59:07.765997       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0210 12:23:13.998812    5644 command_runner.go:130] ! I0210 11:59:07.782342       1 controllermanager.go:765] "Started controller" controller="replicaset-controller"
	I0210 12:23:13.998862    5644 command_runner.go:130] ! I0210 11:59:07.782670       1 replica_set.go:217] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0210 12:23:13.998901    5644 command_runner.go:130] ! I0210 11:59:07.782685       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0210 12:23:13.998901    5644 command_runner.go:130] ! I0210 11:59:07.850466       1 controllermanager.go:765] "Started controller" controller="disruption-controller"
	I0210 12:23:13.998901    5644 command_runner.go:130] ! I0210 11:59:07.850651       1 controllermanager.go:723] "Skipping a cloud provider controller" controller="node-route-controller"
	I0210 12:23:13.998901    5644 command_runner.go:130] ! I0210 11:59:07.850629       1 disruption.go:452] "Sending events to api server." logger="disruption-controller"
	I0210 12:23:13.998901    5644 command_runner.go:130] ! I0210 11:59:07.850833       1 disruption.go:463] "Starting disruption controller" logger="disruption-controller"
	I0210 12:23:13.998901    5644 command_runner.go:130] ! I0210 11:59:07.850844       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0210 12:23:13.998901    5644 command_runner.go:130] ! I0210 11:59:07.880892       1 controllermanager.go:765] "Started controller" controller="persistentvolume-binder-controller"
	I0210 12:23:13.998901    5644 command_runner.go:130] ! I0210 11:59:07.881116       1 pv_controller_base.go:308] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0210 12:23:13.998901    5644 command_runner.go:130] ! I0210 11:59:07.881129       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0210 12:23:13.998901    5644 command_runner.go:130] ! I0210 11:59:07.930262       1 controllermanager.go:765] "Started controller" controller="clusterrole-aggregation-controller"
	I0210 12:23:13.998901    5644 command_runner.go:130] ! I0210 11:59:07.930372       1 clusterroleaggregation_controller.go:194] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0210 12:23:13.998901    5644 command_runner.go:130] ! I0210 11:59:07.930897       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0210 12:23:13.998901    5644 command_runner.go:130] ! I0210 11:59:07.945659       1 controllermanager.go:765] "Started controller" controller="pod-garbage-collector-controller"
	I0210 12:23:13.998901    5644 command_runner.go:130] ! I0210 11:59:07.946579       1 gc_controller.go:99] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0210 12:23:13.998901    5644 command_runner.go:130] ! I0210 11:59:07.946751       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0210 12:23:13.998901    5644 command_runner.go:130] ! I0210 11:59:07.997690       1 controllermanager.go:765] "Started controller" controller="namespace-controller"
	I0210 12:23:13.998901    5644 command_runner.go:130] ! I0210 11:59:07.998189       1 controllermanager.go:723] "Skipping a cloud provider controller" controller="cloud-node-lifecycle-controller"
	I0210 12:23:13.998901    5644 command_runner.go:130] ! I0210 11:59:07.997759       1 namespace_controller.go:202] "Starting namespace controller" logger="namespace-controller"
	I0210 12:23:13.998901    5644 command_runner.go:130] ! I0210 11:59:07.998323       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0210 12:23:13.998901    5644 command_runner.go:130] ! I0210 11:59:08.135040       1 controllermanager.go:765] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0210 12:23:13.998901    5644 command_runner.go:130] ! I0210 11:59:08.135118       1 pvc_protection_controller.go:168] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0210 12:23:13.998901    5644 command_runner.go:130] ! I0210 11:59:08.135130       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0210 12:23:13.998901    5644 command_runner.go:130] ! I0210 11:59:08.290937       1 controllermanager.go:765] "Started controller" controller="persistentvolume-protection-controller"
	I0210 12:23:13.998901    5644 command_runner.go:130] ! I0210 11:59:08.291080       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="kube-apiserver-serving-clustertrustbundle-publisher-controller" requiredFeatureGates=["ClusterTrustBundle"]
	I0210 12:23:13.998901    5644 command_runner.go:130] ! I0210 11:59:08.293569       1 pv_protection_controller.go:81] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0210 12:23:13.998901    5644 command_runner.go:130] ! I0210 11:59:08.293594       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0210 12:23:13.998901    5644 command_runner.go:130] ! I0210 11:59:08.435030       1 controllermanager.go:765] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0210 12:23:13.999432    5644 command_runner.go:130] ! I0210 11:59:08.435146       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0210 12:23:13.999432    5644 command_runner.go:130] ! I0210 11:59:08.435984       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0210 12:23:13.999493    5644 command_runner.go:130] ! I0210 11:59:08.742172       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0210 12:23:13.999493    5644 command_runner.go:130] ! I0210 11:59:08.742257       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0210 12:23:13.999554    5644 command_runner.go:130] ! I0210 11:59:08.742274       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0210 12:23:13.999600    5644 command_runner.go:130] ! I0210 11:59:08.742293       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0210 12:23:13.999600    5644 command_runner.go:130] ! I0210 11:59:08.742308       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0210 12:23:13.999600    5644 command_runner.go:130] ! I0210 11:59:08.742326       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0210 12:23:13.999659    5644 command_runner.go:130] ! I0210 11:59:08.742346       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0210 12:23:13.999659    5644 command_runner.go:130] ! I0210 11:59:08.742463       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0210 12:23:13.999726    5644 command_runner.go:130] ! I0210 11:59:08.742481       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0210 12:23:13.999752    5644 command_runner.go:130] ! I0210 11:59:08.742527       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0210 12:23:13.999752    5644 command_runner.go:130] ! I0210 11:59:08.742551       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0210 12:23:13.999752    5644 command_runner.go:130] ! I0210 11:59:08.742568       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0210 12:23:13.999752    5644 command_runner.go:130] ! I0210 11:59:08.742584       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0210 12:23:13.999752    5644 command_runner.go:130] ! W0210 11:59:08.742597       1 shared_informer.go:597] resyncPeriod 20h8m15.80202588s is smaller than resyncCheckPeriod 22h17m54.981661418s and the informer has already started. Changing it to 22h17m54.981661418s
	I0210 12:23:13.999752    5644 command_runner.go:130] ! I0210 11:59:08.742631       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0210 12:23:13.999752    5644 command_runner.go:130] ! I0210 11:59:08.742652       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0210 12:23:13.999752    5644 command_runner.go:130] ! I0210 11:59:08.742674       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0210 12:23:13.999752    5644 command_runner.go:130] ! W0210 11:59:08.742683       1 shared_informer.go:597] resyncPeriod 18h34m58.865598394s is smaller than resyncCheckPeriod 22h17m54.981661418s and the informer has already started. Changing it to 22h17m54.981661418s
	I0210 12:23:13.999752    5644 command_runner.go:130] ! I0210 11:59:08.742710       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0210 12:23:13.999752    5644 command_runner.go:130] ! I0210 11:59:08.742733       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0210 12:23:13.999752    5644 command_runner.go:130] ! I0210 11:59:08.742757       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0210 12:23:13.999752    5644 command_runner.go:130] ! I0210 11:59:08.742786       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0210 12:23:13.999752    5644 command_runner.go:130] ! I0210 11:59:08.742950       1 controllermanager.go:765] "Started controller" controller="resourcequota-controller"
	I0210 12:23:13.999752    5644 command_runner.go:130] ! I0210 11:59:08.743011       1 resource_quota_controller.go:300] "Starting resource quota controller" logger="resourcequota-controller"
	I0210 12:23:13.999752    5644 command_runner.go:130] ! I0210 11:59:08.743022       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0210 12:23:13.999752    5644 command_runner.go:130] ! I0210 11:59:08.743050       1 resource_quota_monitor.go:308] "QuotaMonitor running" logger="resourcequota-controller"
	I0210 12:23:13.999752    5644 command_runner.go:130] ! I0210 11:59:08.897782       1 controllermanager.go:765] "Started controller" controller="daemonset-controller"
	I0210 12:23:13.999752    5644 command_runner.go:130] ! I0210 11:59:08.898567       1 daemon_controller.go:294] "Starting daemon sets controller" logger="daemonset-controller"
	I0210 12:23:13.999752    5644 command_runner.go:130] ! I0210 11:59:08.898750       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0210 12:23:13.999752    5644 command_runner.go:130] ! W0210 11:59:09.538965       1 type.go:183] The watchlist request for nodes ended with an error, falling back to the standard LIST semantics, err = nodes is forbidden: User "system:serviceaccount:kube-system:node-controller" cannot watch resource "nodes" in API group "" at the cluster scope
	I0210 12:23:13.999752    5644 command_runner.go:130] ! I0210 11:59:09.557948       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0210 12:23:13.999752    5644 command_runner.go:130] ! I0210 11:59:09.558013       1 controllermanager.go:765] "Started controller" controller="node-ipam-controller"
	I0210 12:23:14.000283    5644 command_runner.go:130] ! I0210 11:59:09.558024       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0210 12:23:14.000283    5644 command_runner.go:130] ! I0210 11:59:09.558263       1 node_ipam_controller.go:141] "Starting ipam controller" logger="node-ipam-controller"
	I0210 12:23:14.000331    5644 command_runner.go:130] ! I0210 11:59:09.558274       1 shared_informer.go:313] Waiting for caches to sync for node
	I0210 12:23:14.000358    5644 command_runner.go:130] ! I0210 11:59:09.587543       1 controllermanager.go:765] "Started controller" controller="serviceaccount-controller"
	I0210 12:23:14.000358    5644 command_runner.go:130] ! I0210 11:59:09.587843       1 serviceaccounts_controller.go:114] "Starting service account controller" logger="serviceaccount-controller"
	I0210 12:23:14.000402    5644 command_runner.go:130] ! I0210 11:59:09.587861       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0210 12:23:14.000448    5644 command_runner.go:130] ! I0210 11:59:09.635254       1 garbagecollector.go:144] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0210 12:23:14.000448    5644 command_runner.go:130] ! I0210 11:59:09.635299       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0210 12:23:14.000487    5644 command_runner.go:130] ! I0210 11:59:09.635329       1 graph_builder.go:351] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0210 12:23:14.000517    5644 command_runner.go:130] ! I0210 11:59:09.636160       1 controllermanager.go:765] "Started controller" controller="garbage-collector-controller"
	I0210 12:23:14.000517    5644 command_runner.go:130] ! I0210 11:59:09.814593       1 controllermanager.go:765] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0210 12:23:14.000517    5644 command_runner.go:130] ! I0210 11:59:09.814752       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0210 12:23:14.000517    5644 command_runner.go:130] ! I0210 11:59:09.814770       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0210 12:23:14.000609    5644 command_runner.go:130] ! I0210 11:59:09.817088       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0210 12:23:14.000609    5644 command_runner.go:130] ! I0210 11:59:09.817114       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0210 12:23:14.000609    5644 command_runner.go:130] ! I0210 11:59:09.817159       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0210 12:23:14.000693    5644 command_runner.go:130] ! I0210 11:59:09.817166       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0210 12:23:14.000693    5644 command_runner.go:130] ! I0210 11:59:09.817276       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0210 12:23:14.000693    5644 command_runner.go:130] ! I0210 11:59:09.817288       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0210 12:23:14.000735    5644 command_runner.go:130] ! I0210 11:59:09.817325       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0210 12:23:14.000735    5644 command_runner.go:130] ! I0210 11:59:09.817457       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0210 12:23:14.000806    5644 command_runner.go:130] ! I0210 11:59:09.817598       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0210 12:23:14.000806    5644 command_runner.go:130] ! I0210 11:59:09.817777       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0210 12:23:14.000863    5644 command_runner.go:130] ! I0210 11:59:09.873976       1 controllermanager.go:765] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0210 12:23:14.000904    5644 command_runner.go:130] ! I0210 11:59:09.874097       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0210 12:23:14.000904    5644 command_runner.go:130] ! I0210 11:59:09.874114       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0210 12:23:14.000939    5644 command_runner.go:130] ! I0210 11:59:10.010350       1 controllermanager.go:765] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0210 12:23:14.000999    5644 command_runner.go:130] ! I0210 11:59:10.010713       1 controllermanager.go:743] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0210 12:23:14.000999    5644 command_runner.go:130] ! I0210 11:59:10.010555       1 attach_detach_controller.go:338] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0210 12:23:14.000999    5644 command_runner.go:130] ! I0210 11:59:10.010999       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0210 12:23:14.000999    5644 command_runner.go:130] ! I0210 11:59:10.148245       1 controllermanager.go:765] "Started controller" controller="endpoints-controller"
	I0210 12:23:14.000999    5644 command_runner.go:130] ! I0210 11:59:10.148336       1 endpoints_controller.go:182] "Starting endpoint controller" logger="endpoints-controller"
	I0210 12:23:14.000999    5644 command_runner.go:130] ! I0210 11:59:10.148619       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0210 12:23:14.000999    5644 command_runner.go:130] ! I0210 11:59:10.294135       1 controllermanager.go:765] "Started controller" controller="statefulset-controller"
	I0210 12:23:14.000999    5644 command_runner.go:130] ! I0210 11:59:10.294378       1 stateful_set.go:166] "Starting stateful set controller" logger="statefulset-controller"
	I0210 12:23:14.000999    5644 command_runner.go:130] ! I0210 11:59:10.294395       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0210 12:23:14.000999    5644 command_runner.go:130] ! I0210 11:59:10.455757       1 controllermanager.go:765] "Started controller" controller="persistentvolume-expander-controller"
	I0210 12:23:14.000999    5644 command_runner.go:130] ! I0210 11:59:10.456357       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0210 12:23:14.000999    5644 command_runner.go:130] ! I0210 11:59:10.456388       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0210 12:23:14.000999    5644 command_runner.go:130] ! I0210 11:59:10.617918       1 controllermanager.go:765] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0210 12:23:14.000999    5644 command_runner.go:130] ! I0210 11:59:10.618004       1 publisher.go:107] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0210 12:23:14.000999    5644 command_runner.go:130] ! I0210 11:59:10.618017       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0210 12:23:14.000999    5644 command_runner.go:130] ! I0210 11:59:10.630001       1 controllermanager.go:765] "Started controller" controller="taint-eviction-controller"
	I0210 12:23:14.000999    5644 command_runner.go:130] ! I0210 11:59:10.630344       1 taint_eviction.go:281] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0210 12:23:14.000999    5644 command_runner.go:130] ! I0210 11:59:10.630739       1 taint_eviction.go:287] "Sending events to api server" logger="taint-eviction-controller"
	I0210 12:23:14.000999    5644 command_runner.go:130] ! I0210 11:59:10.630915       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0210 12:23:14.000999    5644 command_runner.go:130] ! I0210 11:59:10.683156       1 node_lifecycle_controller.go:432] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0210 12:23:14.000999    5644 command_runner.go:130] ! I0210 11:59:10.683264       1 controllermanager.go:765] "Started controller" controller="node-lifecycle-controller"
	I0210 12:23:14.000999    5644 command_runner.go:130] ! I0210 11:59:10.683357       1 node_lifecycle_controller.go:466] "Sending events to api server" logger="node-lifecycle-controller"
	I0210 12:23:14.000999    5644 command_runner.go:130] ! I0210 11:59:10.683709       1 node_lifecycle_controller.go:477] "Starting node controller" logger="node-lifecycle-controller"
	I0210 12:23:14.000999    5644 command_runner.go:130] ! I0210 11:59:10.683833       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0210 12:23:14.000999    5644 command_runner.go:130] ! I0210 11:59:10.764503       1 controllermanager.go:765] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0210 12:23:14.000999    5644 command_runner.go:130] ! I0210 11:59:10.764626       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0210 12:23:14.000999    5644 command_runner.go:130] ! I0210 11:59:10.893425       1 controllermanager.go:765] "Started controller" controller="bootstrap-signer-controller"
	I0210 12:23:14.000999    5644 command_runner.go:130] ! I0210 11:59:10.893535       1 controllermanager.go:723] "Skipping a cloud provider controller" controller="service-lb-controller"
	I0210 12:23:14.000999    5644 command_runner.go:130] ! I0210 11:59:10.893547       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="volumeattributesclass-protection-controller" requiredFeatureGates=["VolumeAttributesClass"]
	I0210 12:23:14.000999    5644 command_runner.go:130] ! I0210 11:59:10.893637       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0210 12:23:14.000999    5644 command_runner.go:130] ! I0210 11:59:11.207689       1 controllermanager.go:765] "Started controller" controller="ttl-after-finished-controller"
	I0210 12:23:14.000999    5644 command_runner.go:130] ! I0210 11:59:11.207720       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0210 12:23:14.000999    5644 command_runner.go:130] ! I0210 11:59:11.208285       1 ttlafterfinished_controller.go:112] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0210 12:23:14.000999    5644 command_runner.go:130] ! I0210 11:59:11.208325       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0210 12:23:14.001528    5644 command_runner.go:130] ! I0210 11:59:11.268236       1 controllermanager.go:765] "Started controller" controller="endpointslice-mirroring-controller"
	I0210 12:23:14.001568    5644 command_runner.go:130] ! I0210 11:59:11.268441       1 endpointslicemirroring_controller.go:227] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0210 12:23:14.001568    5644 command_runner.go:130] ! I0210 11:59:11.268458       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0210 12:23:14.001609    5644 command_runner.go:130] ! I0210 11:59:11.834451       1 controllermanager.go:765] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0210 12:23:14.001609    5644 command_runner.go:130] ! I0210 11:59:11.839072       1 horizontal.go:201] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0210 12:23:14.001649    5644 command_runner.go:130] ! I0210 11:59:11.839109       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0210 12:23:14.001649    5644 command_runner.go:130] ! I0210 11:59:11.954065       1 controllermanager.go:765] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0210 12:23:14.001649    5644 command_runner.go:130] ! I0210 11:59:11.954564       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="selinux-warning-controller" requiredFeatureGates=["SELinuxChangePolicy"]
	I0210 12:23:14.001698    5644 command_runner.go:130] ! I0210 11:59:11.954191       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0210 12:23:14.001698    5644 command_runner.go:130] ! I0210 11:59:11.971728       1 controllermanager.go:765] "Started controller" controller="endpointslice-controller"
	I0210 12:23:14.001740    5644 command_runner.go:130] ! I0210 11:59:11.972266       1 endpointslice_controller.go:281] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0210 12:23:14.001740    5644 command_runner.go:130] ! I0210 11:59:11.972442       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0210 12:23:14.001740    5644 command_runner.go:130] ! I0210 11:59:11.988553       1 controllermanager.go:765] "Started controller" controller="replicationcontroller-controller"
	I0210 12:23:14.002032    5644 command_runner.go:130] ! I0210 11:59:11.989935       1 replica_set.go:217] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0210 12:23:14.002071    5644 command_runner.go:130] ! I0210 11:59:11.990037       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0210 12:23:14.002112    5644 command_runner.go:130] ! I0210 11:59:12.002658       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0210 12:23:14.002112    5644 command_runner.go:130] ! I0210 11:59:12.026212       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0210 12:23:14.002152    5644 command_runner.go:130] ! I0210 11:59:12.053411       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-032400\" does not exist"
	I0210 12:23:14.002152    5644 command_runner.go:130] ! I0210 11:59:12.059575       1 shared_informer.go:320] Caches are synced for node
	I0210 12:23:14.002152    5644 command_runner.go:130] ! I0210 11:59:12.059677       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0210 12:23:14.002152    5644 command_runner.go:130] ! I0210 11:59:12.060669       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0210 12:23:14.002152    5644 command_runner.go:130] ! I0210 11:59:12.060694       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0210 12:23:14.002152    5644 command_runner.go:130] ! I0210 11:59:12.060736       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0210 12:23:14.002152    5644 command_runner.go:130] ! I0210 11:59:12.075788       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0210 12:23:14.002152    5644 command_runner.go:130] ! I0210 11:59:12.090277       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0210 12:23:14.002152    5644 command_runner.go:130] ! I0210 11:59:12.093866       1 shared_informer.go:320] Caches are synced for PV protection
	I0210 12:23:14.002152    5644 command_runner.go:130] ! I0210 11:59:12.094251       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-032400" podCIDRs=["10.244.0.0/24"]
	I0210 12:23:14.002152    5644 command_runner.go:130] ! I0210 11:59:12.094298       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400"
	I0210 12:23:14.002152    5644 command_runner.go:130] ! I0210 11:59:12.094445       1 shared_informer.go:320] Caches are synced for stateful set
	I0210 12:23:14.002152    5644 command_runner.go:130] ! I0210 11:59:12.094647       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400"
	I0210 12:23:14.002152    5644 command_runner.go:130] ! I0210 11:59:12.094787       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0210 12:23:14.002152    5644 command_runner.go:130] ! I0210 11:59:12.098777       1 shared_informer.go:320] Caches are synced for cronjob
	I0210 12:23:14.002152    5644 command_runner.go:130] ! I0210 11:59:12.099001       1 shared_informer.go:320] Caches are synced for namespace
	I0210 12:23:14.002152    5644 command_runner.go:130] ! I0210 11:59:12.099016       1 shared_informer.go:320] Caches are synced for daemon sets
	I0210 12:23:14.002152    5644 command_runner.go:130] ! I0210 11:59:12.103407       1 shared_informer.go:320] Caches are synced for resource quota
	I0210 12:23:14.002152    5644 command_runner.go:130] ! I0210 11:59:12.108852       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0210 12:23:14.002152    5644 command_runner.go:130] ! I0210 11:59:12.108917       1 shared_informer.go:320] Caches are synced for ephemeral
	I0210 12:23:14.002152    5644 command_runner.go:130] ! I0210 11:59:12.111199       1 shared_informer.go:320] Caches are synced for attach detach
	I0210 12:23:14.002152    5644 command_runner.go:130] ! I0210 11:59:12.115876       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0210 12:23:14.002152    5644 command_runner.go:130] ! I0210 11:59:12.117732       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0210 12:23:14.002152    5644 command_runner.go:130] ! I0210 11:59:12.117858       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0210 12:23:14.002152    5644 command_runner.go:130] ! I0210 11:59:12.117925       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0210 12:23:14.002152    5644 command_runner.go:130] ! I0210 11:59:12.118059       1 shared_informer.go:320] Caches are synced for crt configmap
	I0210 12:23:14.002152    5644 command_runner.go:130] ! I0210 11:59:12.127026       1 shared_informer.go:320] Caches are synced for garbage collector
	I0210 12:23:14.002152    5644 command_runner.go:130] ! I0210 11:59:12.132202       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0210 12:23:14.002152    5644 command_runner.go:130] ! I0210 11:59:12.132293       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0210 12:23:14.002152    5644 command_runner.go:130] ! I0210 11:59:12.132357       1 shared_informer.go:320] Caches are synced for job
	I0210 12:23:14.002152    5644 command_runner.go:130] ! I0210 11:59:12.136457       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0210 12:23:14.002152    5644 command_runner.go:130] ! I0210 11:59:12.136477       1 shared_informer.go:320] Caches are synced for PVC protection
	I0210 12:23:14.002152    5644 command_runner.go:130] ! I0210 11:59:12.136864       1 shared_informer.go:320] Caches are synced for garbage collector
	I0210 12:23:14.002152    5644 command_runner.go:130] ! I0210 11:59:12.137022       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0210 12:23:14.002152    5644 command_runner.go:130] ! I0210 11:59:12.137034       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0210 12:23:14.002152    5644 command_runner.go:130] ! I0210 11:59:12.140123       1 shared_informer.go:320] Caches are synced for HPA
	I0210 12:23:14.002152    5644 command_runner.go:130] ! I0210 11:59:12.143611       1 shared_informer.go:320] Caches are synced for resource quota
	I0210 12:23:14.002679    5644 command_runner.go:130] ! I0210 11:59:12.146959       1 shared_informer.go:320] Caches are synced for GC
	I0210 12:23:14.002679    5644 command_runner.go:130] ! I0210 11:59:12.149917       1 shared_informer.go:320] Caches are synced for endpoint
	I0210 12:23:14.002679    5644 command_runner.go:130] ! I0210 11:59:12.151583       1 shared_informer.go:320] Caches are synced for TTL
	I0210 12:23:14.002721    5644 command_runner.go:130] ! I0210 11:59:12.151756       1 shared_informer.go:320] Caches are synced for disruption
	I0210 12:23:14.002721    5644 command_runner.go:130] ! I0210 11:59:12.155408       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0210 12:23:14.002756    5644 command_runner.go:130] ! I0210 11:59:12.156838       1 shared_informer.go:320] Caches are synced for expand
	I0210 12:23:14.002756    5644 command_runner.go:130] ! I0210 11:59:12.166263       1 shared_informer.go:320] Caches are synced for deployment
	I0210 12:23:14.002756    5644 command_runner.go:130] ! I0210 11:59:12.169607       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0210 12:23:14.002795    5644 command_runner.go:130] ! I0210 11:59:12.173266       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0210 12:23:14.002795    5644 command_runner.go:130] ! I0210 11:59:12.183228       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0210 12:23:14.002795    5644 command_runner.go:130] ! I0210 11:59:12.183461       1 shared_informer.go:320] Caches are synced for persistent volume
	I0210 12:23:14.002795    5644 command_runner.go:130] ! I0210 11:59:12.184165       1 shared_informer.go:320] Caches are synced for taint
	I0210 12:23:14.002795    5644 command_runner.go:130] ! I0210 11:59:12.184514       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0210 12:23:14.002795    5644 command_runner.go:130] ! I0210 11:59:12.185265       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-032400"
	I0210 12:23:14.002795    5644 command_runner.go:130] ! I0210 11:59:12.186883       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I0210 12:23:14.002795    5644 command_runner.go:130] ! I0210 11:59:12.189882       1 shared_informer.go:320] Caches are synced for service account
	I0210 12:23:14.002795    5644 command_runner.go:130] ! I0210 11:59:12.964659       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400"
	I0210 12:23:14.002795    5644 command_runner.go:130] ! I0210 11:59:14.306836       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="1.342470129s"
	I0210 12:23:14.002795    5644 command_runner.go:130] ! I0210 11:59:14.421918       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="114.771421ms"
	I0210 12:23:14.002795    5644 command_runner.go:130] ! I0210 11:59:14.422243       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="109.5µs"
	I0210 12:23:14.002795    5644 command_runner.go:130] ! I0210 11:59:14.423300       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="45.7µs"
	I0210 12:23:14.002795    5644 command_runner.go:130] ! I0210 11:59:15.150166       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="328.244339ms"
	I0210 12:23:14.002795    5644 command_runner.go:130] ! I0210 11:59:15.175057       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="24.827249ms"
	I0210 12:23:14.002795    5644 command_runner.go:130] ! I0210 11:59:15.175285       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="52.7µs"
	I0210 12:23:14.002795    5644 command_runner.go:130] ! I0210 11:59:38.469109       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400"
	I0210 12:23:14.002795    5644 command_runner.go:130] ! I0210 11:59:41.029106       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400"
	I0210 12:23:14.002795    5644 command_runner.go:130] ! I0210 11:59:41.056002       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400"
	I0210 12:23:14.002795    5644 command_runner.go:130] ! I0210 11:59:41.223446       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="83.5µs"
	I0210 12:23:14.002795    5644 command_runner.go:130] ! I0210 11:59:42.192695       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0210 12:23:14.002795    5644 command_runner.go:130] ! I0210 11:59:43.176439       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="220.4µs"
	I0210 12:23:14.002795    5644 command_runner.go:130] ! I0210 11:59:45.142362       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="156.401µs"
	I0210 12:23:14.002795    5644 command_runner.go:130] ! I0210 11:59:46.978311       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="109.784549ms"
	I0210 12:23:14.002795    5644 command_runner.go:130] ! I0210 11:59:46.978923       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="160.001µs"
	I0210 12:23:14.002795    5644 command_runner.go:130] ! I0210 12:02:24.351602       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-032400-m02\" does not exist"
	I0210 12:23:14.002795    5644 command_runner.go:130] ! I0210 12:02:24.372872       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-032400-m02" podCIDRs=["10.244.1.0/24"]
	I0210 12:23:14.002795    5644 command_runner.go:130] ! I0210 12:02:24.372982       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:23:14.002795    5644 command_runner.go:130] ! I0210 12:02:24.373016       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:23:14.002795    5644 command_runner.go:130] ! I0210 12:02:24.386686       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:23:14.002795    5644 command_runner.go:130] ! I0210 12:02:24.500791       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:23:14.003321    5644 command_runner.go:130] ! I0210 12:02:25.042334       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:23:14.003321    5644 command_runner.go:130] ! I0210 12:02:27.223123       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-032400-m02"
	I0210 12:23:14.003321    5644 command_runner.go:130] ! I0210 12:02:27.269202       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:23:14.003399    5644 command_runner.go:130] ! I0210 12:02:34.686149       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:23:14.003480    5644 command_runner.go:130] ! I0210 12:02:55.584256       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:23:14.003480    5644 command_runner.go:130] ! I0210 12:02:58.900478       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-032400-m02"
	I0210 12:23:14.003480    5644 command_runner.go:130] ! I0210 12:02:58.901463       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:23:14.003480    5644 command_runner.go:130] ! I0210 12:02:58.923096       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:23:14.003725    5644 command_runner.go:130] ! I0210 12:03:02.254436       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:23:14.003725    5644 command_runner.go:130] ! I0210 12:03:23.965178       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="118.072754ms"
	I0210 12:23:14.003725    5644 command_runner.go:130] ! I0210 12:03:23.989777       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="24.546974ms"
	I0210 12:23:14.003817    5644 command_runner.go:130] ! I0210 12:03:23.990705       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="86.6µs"
	I0210 12:23:14.003817    5644 command_runner.go:130] ! I0210 12:03:23.998308       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="47.1µs"
	I0210 12:23:14.003817    5644 command_runner.go:130] ! I0210 12:03:26.707538       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="12.348343ms"
	I0210 12:23:14.003905    5644 command_runner.go:130] ! I0210 12:03:26.708146       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="79.501µs"
	I0210 12:23:14.003905    5644 command_runner.go:130] ! I0210 12:03:27.077137       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="9.878814ms"
	I0210 12:23:14.003905    5644 command_runner.go:130] ! I0210 12:03:27.077509       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="53.8µs"
	I0210 12:23:14.003905    5644 command_runner.go:130] ! I0210 12:03:43.387780       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400"
	I0210 12:23:14.003987    5644 command_runner.go:130] ! I0210 12:03:56.589176       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:23:14.003987    5644 command_runner.go:130] ! I0210 12:07:05.733007       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-032400-m03\" does not exist"
	I0210 12:23:14.003987    5644 command_runner.go:130] ! I0210 12:07:05.733621       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-032400-m02"
	I0210 12:23:14.004063    5644 command_runner.go:130] ! I0210 12:07:05.776872       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-032400-m03" podCIDRs=["10.244.2.0/24"]
	I0210 12:23:14.004063    5644 command_runner.go:130] ! I0210 12:07:05.777009       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:14.004129    5644 command_runner.go:130] ! E0210 12:07:05.833973       1 range_allocator.go:433] "Failed to update node PodCIDR after multiple attempts" err="failed to patch node CIDR: Node \"multinode-032400-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.3.0/24\", \"10.244.2.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="multinode-032400-m03" podCIDRs=["10.244.3.0/24"]
	I0210 12:23:14.004158    5644 command_runner.go:130] ! E0210 12:07:05.834115       1 range_allocator.go:439] "CIDR assignment for node failed. Releasing allocated CIDR" err="failed to patch node CIDR: Node \"multinode-032400-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.3.0/24\", \"10.244.2.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="multinode-032400-m03"
	I0210 12:23:14.004194    5644 command_runner.go:130] ! E0210 12:07:05.834184       1 range_allocator.go:252] "Unhandled Error" err="error syncing 'multinode-032400-m03': failed to patch node CIDR: Node \"multinode-032400-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.3.0/24\", \"10.244.2.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid], requeuing" logger="UnhandledError"
	I0210 12:23:14.004194    5644 command_runner.go:130] ! I0210 12:07:05.834211       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:14.004194    5644 command_runner.go:130] ! I0210 12:07:05.839673       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:14.004285    5644 command_runner.go:130] ! I0210 12:07:06.048438       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:14.004285    5644 command_runner.go:130] ! I0210 12:07:06.603626       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:14.004285    5644 command_runner.go:130] ! I0210 12:07:07.285160       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-032400-m03"
	I0210 12:23:14.004355    5644 command_runner.go:130] ! I0210 12:07:07.401415       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:14.004355    5644 command_runner.go:130] ! I0210 12:07:15.795765       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:14.004355    5644 command_runner.go:130] ! I0210 12:07:34.465645       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:14.004420    5644 command_runner.go:130] ! I0210 12:07:34.466343       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-032400-m02"
	I0210 12:23:14.004420    5644 command_runner.go:130] ! I0210 12:07:34.484609       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:14.004420    5644 command_runner.go:130] ! I0210 12:07:36.177851       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:14.004420    5644 command_runner.go:130] ! I0210 12:07:37.325936       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:14.004493    5644 command_runner.go:130] ! I0210 12:08:11.294432       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:23:14.004493    5644 command_runner.go:130] ! I0210 12:09:09.390735       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400"
	I0210 12:23:14.004493    5644 command_runner.go:130] ! I0210 12:10:40.526492       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:14.004493    5644 command_runner.go:130] ! I0210 12:13:17.755688       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:23:14.004573    5644 command_runner.go:130] ! I0210 12:14:15.383603       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400"
	I0210 12:23:14.004600    5644 command_runner.go:130] ! I0210 12:15:17.429501       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:14.004600    5644 command_runner.go:130] ! I0210 12:15:17.429973       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-032400-m02"
	I0210 12:23:14.004600    5644 command_runner.go:130] ! I0210 12:15:17.456736       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:14.004600    5644 command_runner.go:130] ! I0210 12:15:22.504479       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:14.004600    5644 command_runner.go:130] ! I0210 12:17:20.800246       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:14.004600    5644 command_runner.go:130] ! I0210 12:17:20.822743       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:14.004600    5644 command_runner.go:130] ! I0210 12:17:25.699359       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-032400-m02"
	I0210 12:23:14.004600    5644 command_runner.go:130] ! I0210 12:17:31.139874       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-032400-m03\" does not exist"
	I0210 12:23:14.004600    5644 command_runner.go:130] ! I0210 12:17:31.140321       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-032400-m02"
	I0210 12:23:14.004600    5644 command_runner.go:130] ! I0210 12:17:31.172716       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-032400-m03" podCIDRs=["10.244.4.0/24"]
	I0210 12:23:14.004600    5644 command_runner.go:130] ! I0210 12:17:31.172758       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:14.004600    5644 command_runner.go:130] ! I0210 12:17:31.172780       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:14.004600    5644 command_runner.go:130] ! I0210 12:17:31.210555       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:14.004600    5644 command_runner.go:130] ! I0210 12:17:31.721125       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:14.004600    5644 command_runner.go:130] ! I0210 12:17:32.574669       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:14.004600    5644 command_runner.go:130] ! I0210 12:17:41.246655       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:14.004600    5644 command_runner.go:130] ! I0210 12:17:46.096472       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-032400-m02"
	I0210 12:23:14.004600    5644 command_runner.go:130] ! I0210 12:17:46.097093       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:14.004600    5644 command_runner.go:130] ! I0210 12:17:46.112921       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:14.004600    5644 command_runner.go:130] ! I0210 12:17:47.511367       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:14.004600    5644 command_runner.go:130] ! I0210 12:18:23.730002       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:23:14.004600    5644 command_runner.go:130] ! I0210 12:19:22.675608       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400"
	I0210 12:23:14.004600    5644 command_runner.go:130] ! I0210 12:19:27.542457       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-032400-m02"
	I0210 12:23:14.004600    5644 command_runner.go:130] ! I0210 12:19:27.543090       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:14.004600    5644 command_runner.go:130] ! I0210 12:19:27.562043       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:14.004600    5644 command_runner.go:130] ! I0210 12:19:32.704197       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:14.026367    5644 logs.go:123] Gathering logs for container status ...
	I0210 12:23:14.026367    5644 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 12:23:14.089377    5644 command_runner.go:130] > CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	I0210 12:23:14.089487    5644 command_runner.go:130] > ab1277406daa9       8c811b4aec35f                                                                                         11 seconds ago       Running             busybox                   1                   0f8bbdd2fed29       busybox-58667487b6-8shfg
	I0210 12:23:14.089487    5644 command_runner.go:130] > 9240ce80f94ce       c69fa2e9cbf5f                                                                                         11 seconds ago       Running             coredns                   1                   e58006549b603       coredns-668d6bf9bc-w8rr9
	I0210 12:23:14.089598    5644 command_runner.go:130] > 59ace13383a7f       6e38f40d628db                                                                                         31 seconds ago       Running             storage-provisioner       2                   bd998e6ebeb39       storage-provisioner
	I0210 12:23:14.089645    5644 command_runner.go:130] > efc2d4164d811       d300845f67aeb                                                                                         About a minute ago   Running             kindnet-cni               1                   e5b54589cf0f1       kindnet-c2mb8
	I0210 12:23:14.089645    5644 command_runner.go:130] > e57ea4c7f300b       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   bd998e6ebeb39       storage-provisioner
	I0210 12:23:14.089645    5644 command_runner.go:130] > 6640b4e3d696c       e29f9c7391fd9                                                                                         About a minute ago   Running             kube-proxy                1                   b9afdceca416d       kube-proxy-rrh82
	I0210 12:23:14.089645    5644 command_runner.go:130] > bd1666238ae65       019ee182b58e2                                                                                         About a minute ago   Running             kube-controller-manager   1                   9c3e574a33498       kube-controller-manager-multinode-032400
	I0210 12:23:14.089645    5644 command_runner.go:130] > f368bd8767741       95c0bda56fc4d                                                                                         About a minute ago   Running             kube-apiserver            0                   ae5696c38864a       kube-apiserver-multinode-032400
	I0210 12:23:14.089645    5644 command_runner.go:130] > 2c0b973818252       a9e7e6b294baf                                                                                         About a minute ago   Running             etcd                      0                   016ad4d720680       etcd-multinode-032400
	I0210 12:23:14.089645    5644 command_runner.go:130] > 440b6adf4512a       2b0d6572d062c                                                                                         About a minute ago   Running             kube-scheduler            1                   8059b20f65945       kube-scheduler-multinode-032400
	I0210 12:23:14.089645    5644 command_runner.go:130] > 8d0c6584f2b12       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   19 minutes ago       Exited              busybox                   0                   ed034f1578c96       busybox-58667487b6-8shfg
	I0210 12:23:14.089645    5644 command_runner.go:130] > c5b854dbb9121       c69fa2e9cbf5f                                                                                         23 minutes ago       Exited              coredns                   0                   794995bca6b5b       coredns-668d6bf9bc-w8rr9
	I0210 12:23:14.089645    5644 command_runner.go:130] > 4439940fa5f42       kindest/kindnetd@sha256:56ea59f77258052c4506076525318ffa66817500f68e94a50fdf7d600a280d26              23 minutes ago       Exited              kindnet-cni               0                   26d9e119a02c5       kindnet-c2mb8
	I0210 12:23:14.089645    5644 command_runner.go:130] > 148309413de8d       e29f9c7391fd9                                                                                         23 minutes ago       Exited              kube-proxy                0                   a70f430921ec2       kube-proxy-rrh82
	I0210 12:23:14.089645    5644 command_runner.go:130] > adf520f9b9d78       2b0d6572d062c                                                                                         24 minutes ago       Exited              kube-scheduler            0                   d33433fbce480       kube-scheduler-multinode-032400
	I0210 12:23:14.089645    5644 command_runner.go:130] > 9408ce83d7d38       019ee182b58e2                                                                                         24 minutes ago       Exited              kube-controller-manager   0                   ee16b295f58db       kube-controller-manager-multinode-032400
	I0210 12:23:14.095613    5644 logs.go:123] Gathering logs for coredns [9240ce80f94c] ...
	I0210 12:23:14.095643    5644 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9240ce80f94c"
	I0210 12:23:14.125030    5644 command_runner.go:130] > .:53
	I0210 12:23:14.125112    5644 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = fef4eccd4948b7d76fb3b866fead119cfbf3f792b566bc1c23dd6fceadf676c5da93bdd173179090b2c74bc875a74fc76ef5e51dcb6a910d6a8b189470a4fe6b
	I0210 12:23:14.125112    5644 command_runner.go:130] > CoreDNS-1.11.3
	I0210 12:23:14.125179    5644 command_runner.go:130] > linux/amd64, go1.21.11, a6338e9
	I0210 12:23:14.125179    5644 command_runner.go:130] > [INFO] 127.0.0.1:39941 - 15725 "HINFO IN 3724374125237206977.4573805755432620257. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.053165792s
	I0210 12:23:14.125179    5644 logs.go:123] Gathering logs for kube-scheduler [440b6adf4512] ...
	I0210 12:23:14.125179    5644 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 440b6adf4512"
	I0210 12:23:14.154258    5644 command_runner.go:130] ! I0210 12:21:56.035084       1 serving.go:386] Generated self-signed cert in-memory
	I0210 12:23:14.154579    5644 command_runner.go:130] ! W0210 12:21:58.137751       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0210 12:23:14.154706    5644 command_runner.go:130] ! W0210 12:21:58.138031       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0210 12:23:14.154706    5644 command_runner.go:130] ! W0210 12:21:58.138229       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0210 12:23:14.154706    5644 command_runner.go:130] ! W0210 12:21:58.138350       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0210 12:23:14.154706    5644 command_runner.go:130] ! I0210 12:21:58.239766       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.1"
	I0210 12:23:14.154706    5644 command_runner.go:130] ! I0210 12:21:58.239885       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0210 12:23:14.154706    5644 command_runner.go:130] ! I0210 12:21:58.246917       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0210 12:23:14.154706    5644 command_runner.go:130] ! I0210 12:21:58.246962       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0210 12:23:14.154706    5644 command_runner.go:130] ! I0210 12:21:58.248143       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0210 12:23:14.154706    5644 command_runner.go:130] ! I0210 12:21:58.248624       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0210 12:23:14.154706    5644 command_runner.go:130] ! I0210 12:21:58.347443       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0210 12:23:14.155955    5644 logs.go:123] Gathering logs for kube-proxy [6640b4e3d696] ...
	I0210 12:23:14.155955    5644 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6640b4e3d696"
	I0210 12:23:14.184020    5644 command_runner.go:130] ! I0210 12:22:00.934266       1 server_linux.go:66] "Using iptables proxy"
	I0210 12:23:14.184020    5644 command_runner.go:130] ! E0210 12:22:01.015806       1 proxier.go:733] "Error cleaning up nftables rules" err=<
	I0210 12:23:14.184325    5644 command_runner.go:130] ! 	could not run nftables command: /dev/stdin:1:1-24: Error: Could not process rule: Operation not supported
	I0210 12:23:14.184325    5644 command_runner.go:130] ! 	add table ip kube-proxy
	I0210 12:23:14.184325    5644 command_runner.go:130] ! 	^^^^^^^^^^^^^^^^^^^^^^^^
	I0210 12:23:14.184325    5644 command_runner.go:130] !  >
	I0210 12:23:14.184379    5644 command_runner.go:130] ! E0210 12:22:01.068390       1 proxier.go:733] "Error cleaning up nftables rules" err=<
	I0210 12:23:14.184379    5644 command_runner.go:130] ! 	could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
	I0210 12:23:14.184425    5644 command_runner.go:130] ! 	add table ip6 kube-proxy
	I0210 12:23:14.184425    5644 command_runner.go:130] ! 	^^^^^^^^^^^^^^^^^^^^^^^^^
	I0210 12:23:14.184425    5644 command_runner.go:130] !  >
	I0210 12:23:14.184479    5644 command_runner.go:130] ! I0210 12:22:01.115566       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["172.29.129.181"]
	I0210 12:23:14.184502    5644 command_runner.go:130] ! E0210 12:22:01.115738       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0210 12:23:14.184540    5644 command_runner.go:130] ! I0210 12:22:01.217636       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0210 12:23:14.184540    5644 command_runner.go:130] ! I0210 12:22:01.217732       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0210 12:23:14.184583    5644 command_runner.go:130] ! I0210 12:22:01.217759       1 server_linux.go:170] "Using iptables Proxier"
	I0210 12:23:14.184583    5644 command_runner.go:130] ! I0210 12:22:01.222523       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0210 12:23:14.184641    5644 command_runner.go:130] ! I0210 12:22:01.224100       1 server.go:497] "Version info" version="v1.32.1"
	I0210 12:23:14.184641    5644 command_runner.go:130] ! I0210 12:22:01.224411       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0210 12:23:14.184641    5644 command_runner.go:130] ! I0210 12:22:01.230640       1 config.go:199] "Starting service config controller"
	I0210 12:23:14.184694    5644 command_runner.go:130] ! I0210 12:22:01.232948       1 config.go:105] "Starting endpoint slice config controller"
	I0210 12:23:14.184694    5644 command_runner.go:130] ! I0210 12:22:01.233483       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0210 12:23:14.184733    5644 command_runner.go:130] ! I0210 12:22:01.233538       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0210 12:23:14.184733    5644 command_runner.go:130] ! I0210 12:22:01.243389       1 config.go:329] "Starting node config controller"
	I0210 12:23:14.184781    5644 command_runner.go:130] ! I0210 12:22:01.243415       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0210 12:23:14.184781    5644 command_runner.go:130] ! I0210 12:22:01.335845       1 shared_informer.go:320] Caches are synced for service config
	I0210 12:23:14.184820    5644 command_runner.go:130] ! I0210 12:22:01.335895       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0210 12:23:14.184820    5644 command_runner.go:130] ! I0210 12:22:01.345506       1 shared_informer.go:320] Caches are synced for node config
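The kube-proxy log above captures the nftables cleanup probes failing with "Operation not supported" before kube-proxy falls back to the iptables proxier (first "No iptables support for family" ipFamily="IPv6", then "Using iptables Proxier"). As a minimal sketch only, and assuming the guest image ships the nft binary, the same rule kube-proxy feeds via stdin could be replayed by hand inside the VM; the rule text and error message below are copied from the log itself, not from a fresh run:

	# sketch: replay kube-proxy's nftables probe inside the guest (assumes nft is installed)
	$ minikube ssh -p multinode-032400 "echo 'add table ip kube-proxy' | sudo nft -f /dev/stdin"
	/dev/stdin:1:1-24: Error: Could not process rule: Operation not supported

On this guest kernel (5.10.207, Buildroot 2023.02.9, per the describe-nodes output below) the table operation is rejected, which is why kube-proxy logs the cleanup error and proceeds in iptables mode instead of treating it as fatal.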
	I0210 12:23:14.187730    5644 logs.go:123] Gathering logs for kube-controller-manager [bd1666238ae6] ...
	I0210 12:23:14.187765    5644 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd1666238ae6"
	I0210 12:23:14.222446    5644 command_runner.go:130] ! I0210 12:21:56.136957       1 serving.go:386] Generated self-signed cert in-memory
	I0210 12:23:14.222446    5644 command_runner.go:130] ! I0210 12:21:57.522140       1 controllermanager.go:185] "Starting" version="v1.32.1"
	I0210 12:23:14.222446    5644 command_runner.go:130] ! I0210 12:21:57.522494       1 controllermanager.go:187] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0210 12:23:14.222446    5644 command_runner.go:130] ! I0210 12:21:57.526750       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0210 12:23:14.222446    5644 command_runner.go:130] ! I0210 12:21:57.527225       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0210 12:23:14.222446    5644 command_runner.go:130] ! I0210 12:21:57.527482       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0210 12:23:14.222446    5644 command_runner.go:130] ! I0210 12:21:57.527780       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0210 12:23:14.222446    5644 command_runner.go:130] ! I0210 12:22:00.130437       1 controllermanager.go:765] "Started controller" controller="serviceaccount-token-controller"
	I0210 12:23:14.222446    5644 command_runner.go:130] ! I0210 12:22:00.131309       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0210 12:23:14.222446    5644 command_runner.go:130] ! I0210 12:22:00.141220       1 controllermanager.go:765] "Started controller" controller="deployment-controller"
	I0210 12:23:14.222446    5644 command_runner.go:130] ! I0210 12:22:00.141440       1 deployment_controller.go:173] "Starting controller" logger="deployment-controller" controller="deployment"
	I0210 12:23:14.222446    5644 command_runner.go:130] ! I0210 12:22:00.141453       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0210 12:23:14.222446    5644 command_runner.go:130] ! I0210 12:22:00.144469       1 controllermanager.go:765] "Started controller" controller="replicaset-controller"
	I0210 12:23:14.222446    5644 command_runner.go:130] ! I0210 12:22:00.144719       1 replica_set.go:217] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0210 12:23:14.222446    5644 command_runner.go:130] ! I0210 12:22:00.144731       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0210 12:23:14.222446    5644 command_runner.go:130] ! I0210 12:22:00.152448       1 controllermanager.go:765] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0210 12:23:14.222446    5644 command_runner.go:130] ! I0210 12:22:00.152587       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0210 12:23:14.222446    5644 command_runner.go:130] ! I0210 12:22:00.152599       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0210 12:23:14.222446    5644 command_runner.go:130] ! I0210 12:22:00.158456       1 controllermanager.go:765] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0210 12:23:14.222446    5644 command_runner.go:130] ! I0210 12:22:00.158611       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0210 12:23:14.222446    5644 command_runner.go:130] ! I0210 12:22:00.162098       1 controllermanager.go:765] "Started controller" controller="bootstrap-signer-controller"
	I0210 12:23:14.222446    5644 command_runner.go:130] ! I0210 12:22:00.162345       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0210 12:23:14.222446    5644 command_runner.go:130] ! I0210 12:22:00.162310       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0210 12:23:14.222446    5644 command_runner.go:130] ! I0210 12:22:00.234708       1 shared_informer.go:320] Caches are synced for tokens
	I0210 12:23:14.222446    5644 command_runner.go:130] ! I0210 12:22:00.279835       1 controllermanager.go:765] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0210 12:23:14.222446    5644 command_runner.go:130] ! I0210 12:22:00.279920       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0210 12:23:14.222446    5644 command_runner.go:130] ! I0210 12:22:00.284387       1 controllermanager.go:765] "Started controller" controller="serviceaccount-controller"
	I0210 12:23:14.222967    5644 command_runner.go:130] ! I0210 12:22:00.284535       1 serviceaccounts_controller.go:114] "Starting service account controller" logger="serviceaccount-controller"
	I0210 12:23:14.222967    5644 command_runner.go:130] ! I0210 12:22:00.284562       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0210 12:23:14.223015    5644 command_runner.go:130] ! I0210 12:22:00.327944       1 namespace_controller.go:202] "Starting namespace controller" logger="namespace-controller"
	I0210 12:23:14.223015    5644 command_runner.go:130] ! I0210 12:22:00.330591       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0210 12:23:14.223048    5644 command_runner.go:130] ! I0210 12:22:00.327092       1 controllermanager.go:765] "Started controller" controller="namespace-controller"
	I0210 12:23:14.223092    5644 command_runner.go:130] ! I0210 12:22:00.346573       1 controllermanager.go:765] "Started controller" controller="disruption-controller"
	I0210 12:23:14.223092    5644 command_runner.go:130] ! I0210 12:22:00.346887       1 disruption.go:452] "Sending events to api server." logger="disruption-controller"
	I0210 12:23:14.223138    5644 command_runner.go:130] ! I0210 12:22:00.347031       1 disruption.go:463] "Starting disruption controller" logger="disruption-controller"
	I0210 12:23:14.223138    5644 command_runner.go:130] ! I0210 12:22:00.347049       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0210 12:23:14.223185    5644 command_runner.go:130] ! I0210 12:22:00.351852       1 controllermanager.go:765] "Started controller" controller="ttl-controller"
	I0210 12:23:14.223185    5644 command_runner.go:130] ! I0210 12:22:00.351879       1 controllermanager.go:723] "Skipping a cloud provider controller" controller="service-lb-controller"
	I0210 12:23:14.223230    5644 command_runner.go:130] ! I0210 12:22:00.351888       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="volumeattributesclass-protection-controller" requiredFeatureGates=["VolumeAttributesClass"]
	I0210 12:23:14.223230    5644 command_runner.go:130] ! I0210 12:22:00.354359       1 ttl_controller.go:127] "Starting TTL controller" logger="ttl-controller"
	I0210 12:23:14.223278    5644 command_runner.go:130] ! I0210 12:22:00.354950       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0210 12:23:14.223278    5644 command_runner.go:130] ! I0210 12:22:00.356835       1 controllermanager.go:765] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0210 12:23:14.223323    5644 command_runner.go:130] ! I0210 12:22:00.356898       1 publisher.go:107] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0210 12:23:14.223323    5644 command_runner.go:130] ! I0210 12:22:00.357416       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0210 12:23:14.223370    5644 command_runner.go:130] ! I0210 12:22:00.366037       1 controllermanager.go:765] "Started controller" controller="ephemeral-volume-controller"
	I0210 12:23:14.223370    5644 command_runner.go:130] ! I0210 12:22:00.367715       1 controller.go:173] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0210 12:23:14.223414    5644 command_runner.go:130] ! I0210 12:22:00.367737       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0210 12:23:14.223414    5644 command_runner.go:130] ! I0210 12:22:00.403903       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0210 12:23:14.223460    5644 command_runner.go:130] ! I0210 12:22:00.403962       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0210 12:23:14.223460    5644 command_runner.go:130] ! I0210 12:22:00.403986       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0210 12:23:14.223504    5644 command_runner.go:130] ! W0210 12:22:00.404002       1 shared_informer.go:597] resyncPeriod 20h28m18.826536572s is smaller than resyncCheckPeriod 23h57m31.932623877s and the informer has already started. Changing it to 23h57m31.932623877s
	I0210 12:23:14.223551    5644 command_runner.go:130] ! I0210 12:22:00.404054       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0210 12:23:14.223596    5644 command_runner.go:130] ! I0210 12:22:00.404070       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0210 12:23:14.223643    5644 command_runner.go:130] ! I0210 12:22:00.404083       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0210 12:23:14.223643    5644 command_runner.go:130] ! I0210 12:22:00.404215       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0210 12:23:14.223687    5644 command_runner.go:130] ! I0210 12:22:00.404325       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0210 12:23:14.223728    5644 command_runner.go:130] ! I0210 12:22:00.404361       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0210 12:23:14.223766    5644 command_runner.go:130] ! W0210 12:22:00.404375       1 shared_informer.go:597] resyncPeriod 19h58m52.828542411s is smaller than resyncCheckPeriod 23h57m31.932623877s and the informer has already started. Changing it to 23h57m31.932623877s
	I0210 12:23:14.223793    5644 command_runner.go:130] ! I0210 12:22:00.404428       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0210 12:23:14.223833    5644 command_runner.go:130] ! I0210 12:22:00.404501       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0210 12:23:14.223874    5644 command_runner.go:130] ! I0210 12:22:00.404548       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0210 12:23:14.223960    5644 command_runner.go:130] ! I0210 12:22:00.404581       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0210 12:23:14.223993    5644 command_runner.go:130] ! I0210 12:22:00.404616       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0210 12:23:14.224043    5644 command_runner.go:130] ! I0210 12:22:00.405026       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0210 12:23:14.224043    5644 command_runner.go:130] ! I0210 12:22:00.405085       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0210 12:23:14.224074    5644 command_runner.go:130] ! I0210 12:22:00.405102       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0210 12:23:14.224143    5644 command_runner.go:130] ! I0210 12:22:00.405117       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0210 12:23:14.224176    5644 command_runner.go:130] ! I0210 12:22:00.405133       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0210 12:23:14.224176    5644 command_runner.go:130] ! I0210 12:22:00.405155       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0210 12:23:14.224224    5644 command_runner.go:130] ! I0210 12:22:00.407446       1 controllermanager.go:765] "Started controller" controller="resourcequota-controller"
	I0210 12:23:14.224270    5644 command_runner.go:130] ! I0210 12:22:00.407747       1 resource_quota_controller.go:300] "Starting resource quota controller" logger="resourcequota-controller"
	I0210 12:23:14.224270    5644 command_runner.go:130] ! I0210 12:22:00.407814       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0210 12:23:14.224300    5644 command_runner.go:130] ! I0210 12:22:00.408146       1 resource_quota_monitor.go:308] "QuotaMonitor running" logger="resourcequota-controller"
	I0210 12:23:14.224318    5644 command_runner.go:130] ! I0210 12:22:00.416214       1 controllermanager.go:765] "Started controller" controller="cronjob-controller"
	I0210 12:23:14.224351    5644 command_runner.go:130] ! I0210 12:22:00.416425       1 controllermanager.go:723] "Skipping a cloud provider controller" controller="node-route-controller"
	I0210 12:23:14.224351    5644 command_runner.go:130] ! I0210 12:22:00.417001       1 cronjob_controllerv2.go:145] "Starting cronjob controller v2" logger="cronjob-controller"
	I0210 12:23:14.224351    5644 command_runner.go:130] ! I0210 12:22:00.418614       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0210 12:23:14.224400    5644 command_runner.go:130] ! I0210 12:22:00.448143       1 controllermanager.go:765] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0210 12:23:14.224400    5644 command_runner.go:130] ! I0210 12:22:00.448205       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0210 12:23:14.224400    5644 command_runner.go:130] ! I0210 12:22:00.453507       1 pvc_protection_controller.go:168] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0210 12:23:14.224534    5644 command_runner.go:130] ! I0210 12:22:00.453526       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0210 12:23:14.224534    5644 command_runner.go:130] ! I0210 12:22:00.457427       1 controllermanager.go:765] "Started controller" controller="pod-garbage-collector-controller"
	I0210 12:23:14.224534    5644 command_runner.go:130] ! I0210 12:22:00.457525       1 gc_controller.go:99] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0210 12:23:14.224534    5644 command_runner.go:130] ! I0210 12:22:00.457536       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0210 12:23:14.224534    5644 command_runner.go:130] ! I0210 12:22:00.461217       1 controllermanager.go:765] "Started controller" controller="job-controller"
	I0210 12:23:14.224534    5644 command_runner.go:130] ! I0210 12:22:00.461528       1 job_controller.go:243] "Starting job controller" logger="job-controller"
	I0210 12:23:14.224534    5644 command_runner.go:130] ! I0210 12:22:00.461540       1 shared_informer.go:313] Waiting for caches to sync for job
	I0210 12:23:14.224534    5644 command_runner.go:130] ! I0210 12:22:00.473609       1 controllermanager.go:765] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0210 12:23:14.224534    5644 command_runner.go:130] ! I0210 12:22:00.473750       1 horizontal.go:201] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0210 12:23:14.224534    5644 command_runner.go:130] ! I0210 12:22:00.476529       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0210 12:23:14.224534    5644 command_runner.go:130] ! I0210 12:22:00.478245       1 controllermanager.go:765] "Started controller" controller="persistentvolume-expander-controller"
	I0210 12:23:14.224534    5644 command_runner.go:130] ! I0210 12:22:00.478384       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0210 12:23:14.224534    5644 command_runner.go:130] ! I0210 12:22:00.478413       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0210 12:23:14.224534    5644 command_runner.go:130] ! I0210 12:22:00.486564       1 controllermanager.go:765] "Started controller" controller="ttl-after-finished-controller"
	I0210 12:23:14.224534    5644 command_runner.go:130] ! I0210 12:22:00.490692       1 ttlafterfinished_controller.go:112] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0210 12:23:14.224534    5644 command_runner.go:130] ! I0210 12:22:00.490721       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0210 12:23:14.224534    5644 command_runner.go:130] ! I0210 12:22:00.491067       1 controllermanager.go:765] "Started controller" controller="endpointslice-controller"
	I0210 12:23:14.224534    5644 command_runner.go:130] ! I0210 12:22:00.491429       1 endpointslice_controller.go:281] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0210 12:23:14.224534    5644 command_runner.go:130] ! I0210 12:22:00.492232       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0210 12:23:14.224534    5644 command_runner.go:130] ! I0210 12:22:00.495646       1 controllermanager.go:765] "Started controller" controller="endpointslice-mirroring-controller"
	I0210 12:23:14.224534    5644 command_runner.go:130] ! I0210 12:22:00.500509       1 endpointslicemirroring_controller.go:227] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0210 12:23:14.224534    5644 command_runner.go:130] ! I0210 12:22:00.500524       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0210 12:23:14.224534    5644 command_runner.go:130] ! I0210 12:22:00.515593       1 controllermanager.go:765] "Started controller" controller="garbage-collector-controller"
	I0210 12:23:14.224534    5644 command_runner.go:130] ! I0210 12:22:00.515770       1 garbagecollector.go:144] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0210 12:23:14.224534    5644 command_runner.go:130] ! I0210 12:22:00.515782       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0210 12:23:14.224534    5644 command_runner.go:130] ! I0210 12:22:00.515950       1 graph_builder.go:351] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0210 12:23:14.224534    5644 command_runner.go:130] ! I0210 12:22:00.525570       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0210 12:23:14.224534    5644 command_runner.go:130] ! I0210 12:22:00.525594       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0210 12:23:14.224534    5644 command_runner.go:130] ! I0210 12:22:00.525618       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0210 12:23:14.224534    5644 command_runner.go:130] ! I0210 12:22:00.525997       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0210 12:23:14.224534    5644 command_runner.go:130] ! I0210 12:22:00.526011       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0210 12:23:14.224534    5644 command_runner.go:130] ! I0210 12:22:00.526038       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0210 12:23:14.225062    5644 command_runner.go:130] ! I0210 12:22:00.526889       1 controllermanager.go:765] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0210 12:23:14.225062    5644 command_runner.go:130] ! I0210 12:22:00.526935       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0210 12:23:14.225062    5644 command_runner.go:130] ! I0210 12:22:00.526945       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0210 12:23:14.225062    5644 command_runner.go:130] ! I0210 12:22:00.526972       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0210 12:23:14.225114    5644 command_runner.go:130] ! I0210 12:22:00.526980       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0210 12:23:14.225148    5644 command_runner.go:130] ! I0210 12:22:00.527008       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0210 12:23:14.225170    5644 command_runner.go:130] ! I0210 12:22:00.527135       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0210 12:23:14.225198    5644 command_runner.go:130] ! W0210 12:22:00.695736       1 type.go:183] The watchlist request for nodes ended with an error, falling back to the standard LIST semantics, err = nodes is forbidden: User "system:serviceaccount:kube-system:node-controller" cannot watch resource "nodes" in API group "" at the cluster scope
	I0210 12:23:14.225260    5644 command_runner.go:130] ! I0210 12:22:00.710455       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0210 12:23:14.225288    5644 command_runner.go:130] ! I0210 12:22:00.710510       1 controllermanager.go:765] "Started controller" controller="node-ipam-controller"
	I0210 12:23:14.225288    5644 command_runner.go:130] ! I0210 12:22:00.710723       1 node_ipam_controller.go:141] "Starting ipam controller" logger="node-ipam-controller"
	I0210 12:23:14.225322    5644 command_runner.go:130] ! I0210 12:22:00.710737       1 shared_informer.go:313] Waiting for caches to sync for node
	I0210 12:23:14.225322    5644 command_runner.go:130] ! I0210 12:22:00.739126       1 node_lifecycle_controller.go:432] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0210 12:23:14.225361    5644 command_runner.go:130] ! I0210 12:22:00.739307       1 controllermanager.go:765] "Started controller" controller="node-lifecycle-controller"
	I0210 12:23:14.225361    5644 command_runner.go:130] ! I0210 12:22:00.739552       1 node_lifecycle_controller.go:466] "Sending events to api server" logger="node-lifecycle-controller"
	I0210 12:23:14.225396    5644 command_runner.go:130] ! I0210 12:22:00.739769       1 node_lifecycle_controller.go:477] "Starting node controller" logger="node-lifecycle-controller"
	I0210 12:23:14.225435    5644 command_runner.go:130] ! I0210 12:22:00.739879       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0210 12:23:14.225435    5644 command_runner.go:130] ! I0210 12:22:00.790336       1 controllermanager.go:765] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0210 12:23:14.225470    5644 command_runner.go:130] ! I0210 12:22:00.790542       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0210 12:23:14.225509    5644 command_runner.go:130] ! I0210 12:22:00.790764       1 attach_detach_controller.go:338] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0210 12:23:14.225509    5644 command_runner.go:130] ! I0210 12:22:00.790827       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0210 12:23:14.225544    5644 command_runner.go:130] ! I0210 12:22:00.837132       1 controllermanager.go:765] "Started controller" controller="endpoints-controller"
	I0210 12:23:14.225544    5644 command_runner.go:130] ! I0210 12:22:00.837610       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="selinux-warning-controller" requiredFeatureGates=["SELinuxChangePolicy"]
	I0210 12:23:14.225584    5644 command_runner.go:130] ! I0210 12:22:00.838001       1 endpoints_controller.go:182] "Starting endpoint controller" logger="endpoints-controller"
	I0210 12:23:14.225584    5644 command_runner.go:130] ! I0210 12:22:00.838149       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0210 12:23:14.225618    5644 command_runner.go:130] ! I0210 12:22:00.889036       1 controllermanager.go:765] "Started controller" controller="daemonset-controller"
	I0210 12:23:14.225658    5644 command_runner.go:130] ! I0210 12:22:00.889446       1 daemon_controller.go:294] "Starting daemon sets controller" logger="daemonset-controller"
	I0210 12:23:14.225658    5644 command_runner.go:130] ! I0210 12:22:00.889702       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0210 12:23:14.225692    5644 command_runner.go:130] ! I0210 12:22:00.947566       1 controllermanager.go:765] "Started controller" controller="token-cleaner-controller"
	I0210 12:23:14.225692    5644 command_runner.go:130] ! I0210 12:22:00.947979       1 tokencleaner.go:117] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0210 12:23:14.225731    5644 command_runner.go:130] ! I0210 12:22:00.948130       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0210 12:23:14.225731    5644 command_runner.go:130] ! I0210 12:22:00.948247       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0210 12:23:14.225766    5644 command_runner.go:130] ! I0210 12:22:00.998978       1 controllermanager.go:765] "Started controller" controller="persistentvolume-binder-controller"
	I0210 12:23:14.225805    5644 command_runner.go:130] ! I0210 12:22:00.999002       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="kube-apiserver-serving-clustertrustbundle-publisher-controller" requiredFeatureGates=["ClusterTrustBundle"]
	I0210 12:23:14.225805    5644 command_runner.go:130] ! I0210 12:22:00.999105       1 pv_controller_base.go:308] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0210 12:23:14.225841    5644 command_runner.go:130] ! I0210 12:22:00.999117       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0210 12:23:14.225841    5644 command_runner.go:130] ! I0210 12:22:01.040388       1 controllermanager.go:765] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0210 12:23:14.225880    5644 command_runner.go:130] ! I0210 12:22:01.040661       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0210 12:23:14.225916    5644 command_runner.go:130] ! I0210 12:22:01.041004       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0210 12:23:14.225916    5644 command_runner.go:130] ! I0210 12:22:01.087635       1 controllermanager.go:765] "Started controller" controller="taint-eviction-controller"
	I0210 12:23:14.225955    5644 command_runner.go:130] ! I0210 12:22:01.088431       1 controllermanager.go:723] "Skipping a cloud provider controller" controller="cloud-node-lifecycle-controller"
	I0210 12:23:14.225955    5644 command_runner.go:130] ! I0210 12:22:01.088403       1 taint_eviction.go:281] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0210 12:23:14.225990    5644 command_runner.go:130] ! I0210 12:22:01.088651       1 taint_eviction.go:287] "Sending events to api server" logger="taint-eviction-controller"
	I0210 12:23:14.225990    5644 command_runner.go:130] ! I0210 12:22:01.088700       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0210 12:23:14.226029    5644 command_runner.go:130] ! I0210 12:22:01.140802       1 controllermanager.go:765] "Started controller" controller="clusterrole-aggregation-controller"
	I0210 12:23:14.226029    5644 command_runner.go:130] ! I0210 12:22:01.140881       1 clusterroleaggregation_controller.go:194] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0210 12:23:14.226064    5644 command_runner.go:130] ! I0210 12:22:01.140893       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0210 12:23:14.226104    5644 command_runner.go:130] ! I0210 12:22:01.188353       1 controllermanager.go:765] "Started controller" controller="persistentvolume-protection-controller"
	I0210 12:23:14.226139    5644 command_runner.go:130] ! I0210 12:22:01.188708       1 controllermanager.go:743] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0210 12:23:14.226139    5644 command_runner.go:130] ! I0210 12:22:01.188662       1 pv_protection_controller.go:81] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0210 12:23:14.226179    5644 command_runner.go:130] ! I0210 12:22:01.189570       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0210 12:23:14.226179    5644 command_runner.go:130] ! I0210 12:22:01.238308       1 controllermanager.go:765] "Started controller" controller="replicationcontroller-controller"
	I0210 12:23:14.226214    5644 command_runner.go:130] ! I0210 12:22:01.239287       1 replica_set.go:217] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0210 12:23:14.226214    5644 command_runner.go:130] ! I0210 12:22:01.239614       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0210 12:23:14.226253    5644 command_runner.go:130] ! I0210 12:22:01.290486       1 controllermanager.go:765] "Started controller" controller="statefulset-controller"
	I0210 12:23:14.226275    5644 command_runner.go:130] ! I0210 12:22:01.297980       1 stateful_set.go:166] "Starting stateful set controller" logger="statefulset-controller"
	I0210 12:23:14.226275    5644 command_runner.go:130] ! I0210 12:22:01.298004       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0210 12:23:14.226275    5644 command_runner.go:130] ! I0210 12:22:01.330472       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0210 12:23:14.226275    5644 command_runner.go:130] ! I0210 12:22:01.360391       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0210 12:23:14.226275    5644 command_runner.go:130] ! I0210 12:22:01.379524       1 shared_informer.go:320] Caches are synced for expand
	I0210 12:23:14.226275    5644 command_runner.go:130] ! I0210 12:22:01.412039       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0210 12:23:14.226275    5644 command_runner.go:130] ! I0210 12:22:01.427926       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-032400\" does not exist"
	I0210 12:23:14.226275    5644 command_runner.go:130] ! I0210 12:22:01.429792       1 shared_informer.go:320] Caches are synced for cronjob
	I0210 12:23:14.226275    5644 command_runner.go:130] ! I0210 12:22:01.431083       1 shared_informer.go:320] Caches are synced for namespace
	I0210 12:23:14.226275    5644 command_runner.go:130] ! I0210 12:22:01.433127       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-032400-m02\" does not exist"
	I0210 12:23:14.226275    5644 command_runner.go:130] ! I0210 12:22:01.438586       1 shared_informer.go:320] Caches are synced for endpoint
	I0210 12:23:14.226275    5644 command_runner.go:130] ! I0210 12:22:01.455792       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-032400-m03\" does not exist"
	I0210 12:23:14.226275    5644 command_runner.go:130] ! I0210 12:22:01.443963       1 shared_informer.go:320] Caches are synced for deployment
	I0210 12:23:14.226275    5644 command_runner.go:130] ! I0210 12:22:01.458494       1 shared_informer.go:320] Caches are synced for GC
	I0210 12:23:14.226275    5644 command_runner.go:130] ! I0210 12:22:01.458605       1 shared_informer.go:320] Caches are synced for crt configmap
	I0210 12:23:14.226275    5644 command_runner.go:130] ! I0210 12:22:01.462564       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0210 12:23:14.226275    5644 command_runner.go:130] ! I0210 12:22:01.463137       1 shared_informer.go:320] Caches are synced for job
	I0210 12:23:14.226275    5644 command_runner.go:130] ! I0210 12:22:01.470663       1 shared_informer.go:320] Caches are synced for garbage collector
	I0210 12:23:14.226275    5644 command_runner.go:130] ! I0210 12:22:01.454359       1 shared_informer.go:320] Caches are synced for PVC protection
	I0210 12:23:14.226275    5644 command_runner.go:130] ! I0210 12:22:01.454660       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0210 12:23:14.226275    5644 command_runner.go:130] ! I0210 12:22:01.454672       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0210 12:23:14.226275    5644 command_runner.go:130] ! I0210 12:22:01.454682       1 shared_informer.go:320] Caches are synced for disruption
	I0210 12:23:14.226275    5644 command_runner.go:130] ! I0210 12:22:01.455335       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0210 12:23:14.226275    5644 command_runner.go:130] ! I0210 12:22:01.455353       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0210 12:23:14.226275    5644 command_runner.go:130] ! I0210 12:22:01.455645       1 shared_informer.go:320] Caches are synced for TTL
	I0210 12:23:14.226275    5644 command_runner.go:130] ! I0210 12:22:01.455857       1 shared_informer.go:320] Caches are synced for taint
	I0210 12:23:14.226275    5644 command_runner.go:130] ! I0210 12:22:01.479260       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0210 12:23:14.226275    5644 command_runner.go:130] ! I0210 12:22:01.455957       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0210 12:23:14.226275    5644 command_runner.go:130] ! I0210 12:22:01.480860       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0210 12:23:14.226275    5644 command_runner.go:130] ! I0210 12:22:01.471787       1 shared_informer.go:320] Caches are synced for ephemeral
	I0210 12:23:14.226275    5644 command_runner.go:130] ! I0210 12:22:01.488921       1 shared_informer.go:320] Caches are synced for HPA
	I0210 12:23:14.226275    5644 command_runner.go:130] ! I0210 12:22:01.489141       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0210 12:23:14.226275    5644 command_runner.go:130] ! I0210 12:22:01.489425       1 shared_informer.go:320] Caches are synced for service account
	I0210 12:23:14.226275    5644 command_runner.go:130] ! I0210 12:22:01.489837       1 shared_informer.go:320] Caches are synced for PV protection
	I0210 12:23:14.226275    5644 command_runner.go:130] ! I0210 12:22:01.490060       1 shared_informer.go:320] Caches are synced for daemon sets
	I0210 12:23:14.226275    5644 command_runner.go:130] ! I0210 12:22:01.492366       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0210 12:23:14.226275    5644 command_runner.go:130] ! I0210 12:22:01.492536       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0210 12:23:14.226275    5644 command_runner.go:130] ! I0210 12:22:01.492675       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-032400-m02"
	I0210 12:23:14.226275    5644 command_runner.go:130] ! I0210 12:22:01.492787       1 shared_informer.go:320] Caches are synced for attach detach
	I0210 12:23:14.226275    5644 command_runner.go:130] ! I0210 12:22:01.498224       1 shared_informer.go:320] Caches are synced for stateful set
	I0210 12:23:14.226275    5644 command_runner.go:130] ! I0210 12:22:01.499494       1 shared_informer.go:320] Caches are synced for persistent volume
	I0210 12:23:14.226275    5644 command_runner.go:130] ! I0210 12:22:01.515907       1 shared_informer.go:320] Caches are synced for garbage collector
	I0210 12:23:14.226799    5644 command_runner.go:130] ! I0210 12:22:01.518475       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0210 12:23:14.226799    5644 command_runner.go:130] ! I0210 12:22:01.518619       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0210 12:23:14.226799    5644 command_runner.go:130] ! I0210 12:22:01.517754       1 shared_informer.go:320] Caches are synced for node
	I0210 12:23:14.226859    5644 command_runner.go:130] ! I0210 12:22:01.519209       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0210 12:23:14.226859    5644 command_runner.go:130] ! I0210 12:22:01.519352       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0210 12:23:14.226897    5644 command_runner.go:130] ! I0210 12:22:01.517867       1 shared_informer.go:320] Caches are synced for resource quota
	I0210 12:23:14.226897    5644 command_runner.go:130] ! I0210 12:22:01.521228       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0210 12:23:14.226897    5644 command_runner.go:130] ! I0210 12:22:01.521505       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0210 12:23:14.226897    5644 command_runner.go:130] ! I0210 12:22:01.521662       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400"
	I0210 12:23:14.226897    5644 command_runner.go:130] ! I0210 12:22:01.521756       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:23:14.226897    5644 command_runner.go:130] ! I0210 12:22:01.521924       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:14.226897    5644 command_runner.go:130] ! I0210 12:22:01.522649       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-032400-m02"
	I0210 12:23:14.226897    5644 command_runner.go:130] ! I0210 12:22:01.522926       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-032400-m03"
	I0210 12:23:14.226897    5644 command_runner.go:130] ! I0210 12:22:01.523055       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-032400"
	I0210 12:23:14.226897    5644 command_runner.go:130] ! I0210 12:22:01.522650       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:14.226897    5644 command_runner.go:130] ! I0210 12:22:01.523304       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0210 12:23:14.226897    5644 command_runner.go:130] ! I0210 12:22:01.526544       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0210 12:23:14.226897    5644 command_runner.go:130] ! I0210 12:22:01.526740       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0210 12:23:14.226897    5644 command_runner.go:130] ! I0210 12:22:01.527233       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0210 12:23:14.226897    5644 command_runner.go:130] ! I0210 12:22:01.527235       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0210 12:23:14.226897    5644 command_runner.go:130] ! I0210 12:22:01.531258       1 shared_informer.go:320] Caches are synced for resource quota
	I0210 12:23:14.226897    5644 command_runner.go:130] ! I0210 12:22:01.620608       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:14.226897    5644 command_runner.go:130] ! I0210 12:22:01.660535       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="183.150017ms"
	I0210 12:23:14.226897    5644 command_runner.go:130] ! I0210 12:22:01.660786       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="196.91µs"
	I0210 12:23:14.226897    5644 command_runner.go:130] ! I0210 12:22:01.669840       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="192.074947ms"
	I0210 12:23:14.226897    5644 command_runner.go:130] ! I0210 12:22:01.679112       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="47.103µs"
	I0210 12:23:14.226897    5644 command_runner.go:130] ! I0210 12:22:11.608842       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400"
	I0210 12:23:14.226897    5644 command_runner.go:130] ! I0210 12:22:49.026601       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-032400-m02"
	I0210 12:23:14.226897    5644 command_runner.go:130] ! I0210 12:22:49.027936       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400"
	I0210 12:23:14.226897    5644 command_runner.go:130] ! I0210 12:22:49.051398       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400"
	I0210 12:23:14.226897    5644 command_runner.go:130] ! I0210 12:22:51.552649       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:23:14.226897    5644 command_runner.go:130] ! I0210 12:22:51.561524       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400"
	I0210 12:23:14.226897    5644 command_runner.go:130] ! I0210 12:22:51.579437       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:23:14.226897    5644 command_runner.go:130] ! I0210 12:22:51.629083       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="17.615623ms"
	I0210 12:23:14.226897    5644 command_runner.go:130] ! I0210 12:22:51.629955       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="714.433µs"
	I0210 12:23:14.226897    5644 command_runner.go:130] ! I0210 12:22:56.656809       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:23:14.226897    5644 command_runner.go:130] ! I0210 12:23:04.379320       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="10.532877ms"
	I0210 12:23:14.226897    5644 command_runner.go:130] ! I0210 12:23:04.379580       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="104.602µs"
	I0210 12:23:14.227427    5644 command_runner.go:130] ! I0210 12:23:04.418725       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="58.001µs"
	I0210 12:23:14.227476    5644 command_runner.go:130] ! I0210 12:23:04.463938       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="16.341175ms"
	I0210 12:23:14.227476    5644 command_runner.go:130] ! I0210 12:23:04.464695       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="51.6µs"
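The kube-controller-manager log above was gathered the same way as the kube-proxy log before it: for each control-plane container, minikube's log collector runs docker logs --tail 400 <container-id> through its SSH runner (the logs.go:123 / ssh_runner.go:195 pairs). A hedged equivalent, reusing the container ID from this particular run, would be:

	# sketch: manually repeat the collection step shown above for the controller-manager container
	$ minikube ssh -p multinode-032400 "docker logs --tail 400 bd1666238ae6"

Container IDs are specific to this boot of the VM, so a fresh session would first need docker ps inside the guest to find the current kube-controller-manager container.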
	I0210 12:23:14.243827    5644 logs.go:123] Gathering logs for describe nodes ...
	I0210 12:23:14.243827    5644 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0210 12:23:14.436064    5644 command_runner.go:130] > Name:               multinode-032400
	I0210 12:23:14.436064    5644 command_runner.go:130] > Roles:              control-plane
	I0210 12:23:14.436064    5644 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0210 12:23:14.436064    5644 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0210 12:23:14.436064    5644 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0210 12:23:14.436064    5644 command_runner.go:130] >                     kubernetes.io/hostname=multinode-032400
	I0210 12:23:14.436064    5644 command_runner.go:130] >                     kubernetes.io/os=linux
	I0210 12:23:14.436064    5644 command_runner.go:130] >                     minikube.k8s.io/commit=a597502568cd649748018b4cfeb698a4b8b36160
	I0210 12:23:14.436273    5644 command_runner.go:130] >                     minikube.k8s.io/name=multinode-032400
	I0210 12:23:14.436273    5644 command_runner.go:130] >                     minikube.k8s.io/primary=true
	I0210 12:23:14.436273    5644 command_runner.go:130] >                     minikube.k8s.io/updated_at=2025_02_10T11_59_08_0700
	I0210 12:23:14.436273    5644 command_runner.go:130] >                     minikube.k8s.io/version=v1.35.0
	I0210 12:23:14.436341    5644 command_runner.go:130] >                     node-role.kubernetes.io/control-plane=
	I0210 12:23:14.436341    5644 command_runner.go:130] >                     node.kubernetes.io/exclude-from-external-load-balancers=
	I0210 12:23:14.436341    5644 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0210 12:23:14.436341    5644 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0210 12:23:14.436341    5644 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0210 12:23:14.436341    5644 command_runner.go:130] > CreationTimestamp:  Mon, 10 Feb 2025 11:59:02 +0000
	I0210 12:23:14.436341    5644 command_runner.go:130] > Taints:             <none>
	I0210 12:23:14.436433    5644 command_runner.go:130] > Unschedulable:      false
	I0210 12:23:14.436433    5644 command_runner.go:130] > Lease:
	I0210 12:23:14.436537    5644 command_runner.go:130] >   HolderIdentity:  multinode-032400
	I0210 12:23:14.436537    5644 command_runner.go:130] >   AcquireTime:     <unset>
	I0210 12:23:14.436537    5644 command_runner.go:130] >   RenewTime:       Mon, 10 Feb 2025 12:23:09 +0000
	I0210 12:23:14.436537    5644 command_runner.go:130] > Conditions:
	I0210 12:23:14.436537    5644 command_runner.go:130] >   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	I0210 12:23:14.436537    5644 command_runner.go:130] >   ----             ------  -----------------                 ------------------                ------                       -------
	I0210 12:23:14.436644    5644 command_runner.go:130] >   MemoryPressure   False   Mon, 10 Feb 2025 12:22:48 +0000   Mon, 10 Feb 2025 11:59:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	I0210 12:23:14.436644    5644 command_runner.go:130] >   DiskPressure     False   Mon, 10 Feb 2025 12:22:48 +0000   Mon, 10 Feb 2025 11:59:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	I0210 12:23:14.436644    5644 command_runner.go:130] >   PIDPressure      False   Mon, 10 Feb 2025 12:22:48 +0000   Mon, 10 Feb 2025 11:59:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	I0210 12:23:14.436644    5644 command_runner.go:130] >   Ready            True    Mon, 10 Feb 2025 12:22:48 +0000   Mon, 10 Feb 2025 12:22:48 +0000   KubeletReady                 kubelet is posting ready status
	I0210 12:23:14.436644    5644 command_runner.go:130] > Addresses:
	I0210 12:23:14.436644    5644 command_runner.go:130] >   InternalIP:  172.29.129.181
	I0210 12:23:14.436644    5644 command_runner.go:130] >   Hostname:    multinode-032400
	I0210 12:23:14.436644    5644 command_runner.go:130] > Capacity:
	I0210 12:23:14.436644    5644 command_runner.go:130] >   cpu:                2
	I0210 12:23:14.436769    5644 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0210 12:23:14.436769    5644 command_runner.go:130] >   hugepages-2Mi:      0
	I0210 12:23:14.436769    5644 command_runner.go:130] >   memory:             2164264Ki
	I0210 12:23:14.436769    5644 command_runner.go:130] >   pods:               110
	I0210 12:23:14.436769    5644 command_runner.go:130] > Allocatable:
	I0210 12:23:14.436769    5644 command_runner.go:130] >   cpu:                2
	I0210 12:23:14.436769    5644 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0210 12:23:14.436769    5644 command_runner.go:130] >   hugepages-2Mi:      0
	I0210 12:23:14.436769    5644 command_runner.go:130] >   memory:             2164264Ki
	I0210 12:23:14.436769    5644 command_runner.go:130] >   pods:               110
	I0210 12:23:14.436769    5644 command_runner.go:130] > System Info:
	I0210 12:23:14.436769    5644 command_runner.go:130] >   Machine ID:                 bd9d6a2450e6476eba748aa8fab044bb
	I0210 12:23:14.436769    5644 command_runner.go:130] >   System UUID:                43aa2284-9342-094e-a67f-da5d9d45fabd
	I0210 12:23:14.436769    5644 command_runner.go:130] >   Boot ID:                    f18b3f97-a44b-4ae9-ad96-af2bf29d6df2
	I0210 12:23:14.436877    5644 command_runner.go:130] >   Kernel Version:             5.10.207
	I0210 12:23:14.436877    5644 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0210 12:23:14.436901    5644 command_runner.go:130] >   Operating System:           linux
	I0210 12:23:14.436901    5644 command_runner.go:130] >   Architecture:               amd64
	I0210 12:23:14.436901    5644 command_runner.go:130] >   Container Runtime Version:  docker://27.4.0
	I0210 12:23:14.436985    5644 command_runner.go:130] >   Kubelet Version:            v1.32.1
	I0210 12:23:14.436985    5644 command_runner.go:130] >   Kube-Proxy Version:         v1.32.1
	I0210 12:23:14.436985    5644 command_runner.go:130] > PodCIDR:                      10.244.0.0/24
	I0210 12:23:14.437050    5644 command_runner.go:130] > PodCIDRs:                     10.244.0.0/24
	I0210 12:23:14.437050    5644 command_runner.go:130] > Non-terminated Pods:          (9 in total)
	I0210 12:23:14.437050    5644 command_runner.go:130] >   Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0210 12:23:14.437091    5644 command_runner.go:130] >   ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	I0210 12:23:14.437091    5644 command_runner.go:130] >   default                     busybox-58667487b6-8shfg                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	I0210 12:23:14.437091    5644 command_runner.go:130] >   kube-system                 coredns-668d6bf9bc-w8rr9                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     24m
	I0210 12:23:14.437198    5644 command_runner.go:130] >   kube-system                 etcd-multinode-032400                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         76s
	I0210 12:23:14.437198    5644 command_runner.go:130] >   kube-system                 kindnet-c2mb8                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      24m
	I0210 12:23:14.437198    5644 command_runner.go:130] >   kube-system                 kube-apiserver-multinode-032400             250m (12%)    0 (0%)      0 (0%)           0 (0%)         76s
	I0210 12:23:14.437198    5644 command_runner.go:130] >   kube-system                 kube-controller-manager-multinode-032400    200m (10%)    0 (0%)      0 (0%)           0 (0%)         24m
	I0210 12:23:14.437198    5644 command_runner.go:130] >   kube-system                 kube-proxy-rrh82                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	I0210 12:23:14.437291    5644 command_runner.go:130] >   kube-system                 kube-scheduler-multinode-032400             100m (5%)     0 (0%)      0 (0%)           0 (0%)         24m
	I0210 12:23:14.437291    5644 command_runner.go:130] >   kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	I0210 12:23:14.437291    5644 command_runner.go:130] > Allocated resources:
	I0210 12:23:14.437291    5644 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0210 12:23:14.437291    5644 command_runner.go:130] >   Resource           Requests     Limits
	I0210 12:23:14.437291    5644 command_runner.go:130] >   --------           --------     ------
	I0210 12:23:14.437291    5644 command_runner.go:130] >   cpu                850m (42%)   100m (5%)
	I0210 12:23:14.437291    5644 command_runner.go:130] >   memory             220Mi (10%)  220Mi (10%)
	I0210 12:23:14.437291    5644 command_runner.go:130] >   ephemeral-storage  0 (0%)       0 (0%)
	I0210 12:23:14.437426    5644 command_runner.go:130] >   hugepages-2Mi      0 (0%)       0 (0%)
	I0210 12:23:14.437426    5644 command_runner.go:130] > Events:
	I0210 12:23:14.437426    5644 command_runner.go:130] >   Type     Reason                   Age                From             Message
	I0210 12:23:14.437426    5644 command_runner.go:130] >   ----     ------                   ----               ----             -------
	I0210 12:23:14.437426    5644 command_runner.go:130] >   Normal   Starting                 23m                kube-proxy       
	I0210 12:23:14.437507    5644 command_runner.go:130] >   Normal   Starting                 73s                kube-proxy       
	I0210 12:23:14.437528    5644 command_runner.go:130] >   Normal   NodeHasSufficientMemory  24m (x8 over 24m)  kubelet          Node multinode-032400 status is now: NodeHasSufficientMemory
	I0210 12:23:14.437528    5644 command_runner.go:130] >   Normal   NodeHasNoDiskPressure    24m (x8 over 24m)  kubelet          Node multinode-032400 status is now: NodeHasNoDiskPressure
	I0210 12:23:14.437528    5644 command_runner.go:130] >   Normal   NodeHasSufficientPID     24m (x7 over 24m)  kubelet          Node multinode-032400 status is now: NodeHasSufficientPID
	I0210 12:23:14.437528    5644 command_runner.go:130] >   Normal   NodeHasSufficientPID     24m (x2 over 24m)  kubelet          Node multinode-032400 status is now: NodeHasSufficientPID
	I0210 12:23:14.437528    5644 command_runner.go:130] >   Normal   NodeHasSufficientMemory  24m (x2 over 24m)  kubelet          Node multinode-032400 status is now: NodeHasSufficientMemory
	I0210 12:23:14.437619    5644 command_runner.go:130] >   Normal   NodeHasNoDiskPressure    24m (x2 over 24m)  kubelet          Node multinode-032400 status is now: NodeHasNoDiskPressure
	I0210 12:23:14.437619    5644 command_runner.go:130] >   Normal   NodeAllocatableEnforced  24m                kubelet          Updated Node Allocatable limit across pods
	I0210 12:23:14.437619    5644 command_runner.go:130] >   Normal   Starting                 24m                kubelet          Starting kubelet.
	I0210 12:23:14.437619    5644 command_runner.go:130] >   Normal   RegisteredNode           24m                node-controller  Node multinode-032400 event: Registered Node multinode-032400 in Controller
	I0210 12:23:14.437619    5644 command_runner.go:130] >   Normal   NodeReady                23m                kubelet          Node multinode-032400 status is now: NodeReady
	I0210 12:23:14.437619    5644 command_runner.go:130] >   Normal   Starting                 82s                kubelet          Starting kubelet.
	I0210 12:23:14.437708    5644 command_runner.go:130] >   Normal   NodeAllocatableEnforced  82s                kubelet          Updated Node Allocatable limit across pods
	I0210 12:23:14.437708    5644 command_runner.go:130] >   Normal   NodeHasSufficientMemory  81s (x8 over 82s)  kubelet          Node multinode-032400 status is now: NodeHasSufficientMemory
	I0210 12:23:14.437708    5644 command_runner.go:130] >   Normal   NodeHasNoDiskPressure    81s (x8 over 82s)  kubelet          Node multinode-032400 status is now: NodeHasNoDiskPressure
	I0210 12:23:14.437753    5644 command_runner.go:130] >   Normal   NodeHasSufficientPID     81s (x7 over 82s)  kubelet          Node multinode-032400 status is now: NodeHasSufficientPID
	I0210 12:23:14.437753    5644 command_runner.go:130] >   Warning  Rebooted                 76s                kubelet          Node multinode-032400 has been rebooted, boot id: f18b3f97-a44b-4ae9-ad96-af2bf29d6df2
	I0210 12:23:14.437753    5644 command_runner.go:130] >   Normal   RegisteredNode           73s                node-controller  Node multinode-032400 event: Registered Node multinode-032400 in Controller
	I0210 12:23:14.437753    5644 command_runner.go:130] > Name:               multinode-032400-m02
	I0210 12:23:14.437753    5644 command_runner.go:130] > Roles:              <none>
	I0210 12:23:14.437828    5644 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0210 12:23:14.437828    5644 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0210 12:23:14.437828    5644 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0210 12:23:14.437828    5644 command_runner.go:130] >                     kubernetes.io/hostname=multinode-032400-m02
	I0210 12:23:14.437828    5644 command_runner.go:130] >                     kubernetes.io/os=linux
	I0210 12:23:14.437828    5644 command_runner.go:130] >                     minikube.k8s.io/commit=a597502568cd649748018b4cfeb698a4b8b36160
	I0210 12:23:14.437828    5644 command_runner.go:130] >                     minikube.k8s.io/name=multinode-032400
	I0210 12:23:14.437828    5644 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0210 12:23:14.437828    5644 command_runner.go:130] >                     minikube.k8s.io/updated_at=2025_02_10T12_02_24_0700
	I0210 12:23:14.437828    5644 command_runner.go:130] >                     minikube.k8s.io/version=v1.35.0
	I0210 12:23:14.437828    5644 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0210 12:23:14.437828    5644 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0210 12:23:14.437953    5644 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0210 12:23:14.437953    5644 command_runner.go:130] > CreationTimestamp:  Mon, 10 Feb 2025 12:02:24 +0000
	I0210 12:23:14.437975    5644 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0210 12:23:14.438008    5644 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0210 12:23:14.438008    5644 command_runner.go:130] > Unschedulable:      false
	I0210 12:23:14.438008    5644 command_runner.go:130] > Lease:
	I0210 12:23:14.438038    5644 command_runner.go:130] >   HolderIdentity:  multinode-032400-m02
	I0210 12:23:14.438038    5644 command_runner.go:130] >   AcquireTime:     <unset>
	I0210 12:23:14.438038    5644 command_runner.go:130] >   RenewTime:       Mon, 10 Feb 2025 12:18:56 +0000
	I0210 12:23:14.438038    5644 command_runner.go:130] > Conditions:
	I0210 12:23:14.438038    5644 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0210 12:23:14.438038    5644 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0210 12:23:14.438038    5644 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 10 Feb 2025 12:18:23 +0000   Mon, 10 Feb 2025 12:22:51 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0210 12:23:14.438129    5644 command_runner.go:130] >   DiskPressure     Unknown   Mon, 10 Feb 2025 12:18:23 +0000   Mon, 10 Feb 2025 12:22:51 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0210 12:23:14.438129    5644 command_runner.go:130] >   PIDPressure      Unknown   Mon, 10 Feb 2025 12:18:23 +0000   Mon, 10 Feb 2025 12:22:51 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0210 12:23:14.438129    5644 command_runner.go:130] >   Ready            Unknown   Mon, 10 Feb 2025 12:18:23 +0000   Mon, 10 Feb 2025 12:22:51 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0210 12:23:14.438129    5644 command_runner.go:130] > Addresses:
	I0210 12:23:14.438129    5644 command_runner.go:130] >   InternalIP:  172.29.143.51
	I0210 12:23:14.438129    5644 command_runner.go:130] >   Hostname:    multinode-032400-m02
	I0210 12:23:14.438129    5644 command_runner.go:130] > Capacity:
	I0210 12:23:14.438202    5644 command_runner.go:130] >   cpu:                2
	I0210 12:23:14.438202    5644 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0210 12:23:14.438202    5644 command_runner.go:130] >   hugepages-2Mi:      0
	I0210 12:23:14.438232    5644 command_runner.go:130] >   memory:             2164264Ki
	I0210 12:23:14.438232    5644 command_runner.go:130] >   pods:               110
	I0210 12:23:14.438232    5644 command_runner.go:130] > Allocatable:
	I0210 12:23:14.438232    5644 command_runner.go:130] >   cpu:                2
	I0210 12:23:14.438232    5644 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0210 12:23:14.438278    5644 command_runner.go:130] >   hugepages-2Mi:      0
	I0210 12:23:14.438278    5644 command_runner.go:130] >   memory:             2164264Ki
	I0210 12:23:14.438278    5644 command_runner.go:130] >   pods:               110
	I0210 12:23:14.438278    5644 command_runner.go:130] > System Info:
	I0210 12:23:14.438278    5644 command_runner.go:130] >   Machine ID:                 21b7ab6b12a24fe4a174e762e13ffd68
	I0210 12:23:14.438278    5644 command_runner.go:130] >   System UUID:                9f935ab2-d180-914c-86d6-dc00ab51b7e9
	I0210 12:23:14.438370    5644 command_runner.go:130] >   Boot ID:                    a25897cf-a5ca-424f-a707-4b03f1b1442d
	I0210 12:23:14.438370    5644 command_runner.go:130] >   Kernel Version:             5.10.207
	I0210 12:23:14.438370    5644 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0210 12:23:14.438370    5644 command_runner.go:130] >   Operating System:           linux
	I0210 12:23:14.438370    5644 command_runner.go:130] >   Architecture:               amd64
	I0210 12:23:14.438370    5644 command_runner.go:130] >   Container Runtime Version:  docker://27.4.0
	I0210 12:23:14.438370    5644 command_runner.go:130] >   Kubelet Version:            v1.32.1
	I0210 12:23:14.438370    5644 command_runner.go:130] >   Kube-Proxy Version:         v1.32.1
	I0210 12:23:14.438467    5644 command_runner.go:130] > PodCIDR:                      10.244.1.0/24
	I0210 12:23:14.438467    5644 command_runner.go:130] > PodCIDRs:                     10.244.1.0/24
	I0210 12:23:14.438467    5644 command_runner.go:130] > Non-terminated Pods:          (3 in total)
	I0210 12:23:14.438467    5644 command_runner.go:130] >   Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0210 12:23:14.438527    5644 command_runner.go:130] >   ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	I0210 12:23:14.439532    5644 command_runner.go:130] >   default                     busybox-58667487b6-4g8jw    0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	I0210 12:23:14.439618    5644 command_runner.go:130] >   kube-system                 kindnet-tv6gk               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      20m
	I0210 12:23:14.440021    5644 command_runner.go:130] >   kube-system                 kube-proxy-xltxj            0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0210 12:23:14.440021    5644 command_runner.go:130] > Allocated resources:
	I0210 12:23:14.440021    5644 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0210 12:23:14.440021    5644 command_runner.go:130] >   Resource           Requests   Limits
	I0210 12:23:14.440021    5644 command_runner.go:130] >   --------           --------   ------
	I0210 12:23:14.440135    5644 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0210 12:23:14.440135    5644 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0210 12:23:14.440135    5644 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0210 12:23:14.440135    5644 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0210 12:23:14.440240    5644 command_runner.go:130] > Events:
	I0210 12:23:14.440240    5644 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0210 12:23:14.440240    5644 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0210 12:23:14.440240    5644 command_runner.go:130] >   Normal  Starting                 20m                kube-proxy       
	I0210 12:23:14.440240    5644 command_runner.go:130] >   Normal  NodeHasSufficientMemory  20m (x2 over 20m)  kubelet          Node multinode-032400-m02 status is now: NodeHasSufficientMemory
	I0210 12:23:14.440240    5644 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    20m (x2 over 20m)  kubelet          Node multinode-032400-m02 status is now: NodeHasNoDiskPressure
	I0210 12:23:14.440323    5644 command_runner.go:130] >   Normal  NodeHasSufficientPID     20m (x2 over 20m)  kubelet          Node multinode-032400-m02 status is now: NodeHasSufficientPID
	I0210 12:23:14.440323    5644 command_runner.go:130] >   Normal  NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	I0210 12:23:14.440323    5644 command_runner.go:130] >   Normal  RegisteredNode           20m                node-controller  Node multinode-032400-m02 event: Registered Node multinode-032400-m02 in Controller
	I0210 12:23:14.440323    5644 command_runner.go:130] >   Normal  NodeReady                20m                kubelet          Node multinode-032400-m02 status is now: NodeReady
	I0210 12:23:14.440323    5644 command_runner.go:130] >   Normal  RegisteredNode           73s                node-controller  Node multinode-032400-m02 event: Registered Node multinode-032400-m02 in Controller
	I0210 12:23:14.440323    5644 command_runner.go:130] >   Normal  NodeNotReady             23s                node-controller  Node multinode-032400-m02 status is now: NodeNotReady
	I0210 12:23:14.440323    5644 command_runner.go:130] > Name:               multinode-032400-m03
	I0210 12:23:14.440426    5644 command_runner.go:130] > Roles:              <none>
	I0210 12:23:14.440426    5644 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0210 12:23:14.440426    5644 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0210 12:23:14.440426    5644 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0210 12:23:14.440426    5644 command_runner.go:130] >                     kubernetes.io/hostname=multinode-032400-m03
	I0210 12:23:14.440426    5644 command_runner.go:130] >                     kubernetes.io/os=linux
	I0210 12:23:14.440426    5644 command_runner.go:130] >                     minikube.k8s.io/commit=a597502568cd649748018b4cfeb698a4b8b36160
	I0210 12:23:14.440426    5644 command_runner.go:130] >                     minikube.k8s.io/name=multinode-032400
	I0210 12:23:14.440517    5644 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0210 12:23:14.440517    5644 command_runner.go:130] >                     minikube.k8s.io/updated_at=2025_02_10T12_17_31_0700
	I0210 12:23:14.440517    5644 command_runner.go:130] >                     minikube.k8s.io/version=v1.35.0
	I0210 12:23:14.440549    5644 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0210 12:23:14.440549    5644 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0210 12:23:14.440549    5644 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0210 12:23:14.440549    5644 command_runner.go:130] > CreationTimestamp:  Mon, 10 Feb 2025 12:17:31 +0000
	I0210 12:23:14.440549    5644 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0210 12:23:14.440549    5644 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0210 12:23:14.440549    5644 command_runner.go:130] > Unschedulable:      false
	I0210 12:23:14.440549    5644 command_runner.go:130] > Lease:
	I0210 12:23:14.440549    5644 command_runner.go:130] >   HolderIdentity:  multinode-032400-m03
	I0210 12:23:14.440549    5644 command_runner.go:130] >   AcquireTime:     <unset>
	I0210 12:23:14.440642    5644 command_runner.go:130] >   RenewTime:       Mon, 10 Feb 2025 12:18:32 +0000
	I0210 12:23:14.440642    5644 command_runner.go:130] > Conditions:
	I0210 12:23:14.440642    5644 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0210 12:23:14.440642    5644 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0210 12:23:14.440642    5644 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 10 Feb 2025 12:17:46 +0000   Mon, 10 Feb 2025 12:19:27 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0210 12:23:14.440642    5644 command_runner.go:130] >   DiskPressure     Unknown   Mon, 10 Feb 2025 12:17:46 +0000   Mon, 10 Feb 2025 12:19:27 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0210 12:23:14.440730    5644 command_runner.go:130] >   PIDPressure      Unknown   Mon, 10 Feb 2025 12:17:46 +0000   Mon, 10 Feb 2025 12:19:27 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0210 12:23:14.440763    5644 command_runner.go:130] >   Ready            Unknown   Mon, 10 Feb 2025 12:17:46 +0000   Mon, 10 Feb 2025 12:19:27 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0210 12:23:14.441204    5644 command_runner.go:130] > Addresses:
	I0210 12:23:14.441204    5644 command_runner.go:130] >   InternalIP:  172.29.129.10
	I0210 12:23:14.441204    5644 command_runner.go:130] >   Hostname:    multinode-032400-m03
	I0210 12:23:14.441204    5644 command_runner.go:130] > Capacity:
	I0210 12:23:14.441273    5644 command_runner.go:130] >   cpu:                2
	I0210 12:23:14.441273    5644 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0210 12:23:14.441306    5644 command_runner.go:130] >   hugepages-2Mi:      0
	I0210 12:23:14.441306    5644 command_runner.go:130] >   memory:             2164264Ki
	I0210 12:23:14.441306    5644 command_runner.go:130] >   pods:               110
	I0210 12:23:14.441333    5644 command_runner.go:130] > Allocatable:
	I0210 12:23:14.441333    5644 command_runner.go:130] >   cpu:                2
	I0210 12:23:14.441333    5644 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0210 12:23:14.441333    5644 command_runner.go:130] >   hugepages-2Mi:      0
	I0210 12:23:14.441333    5644 command_runner.go:130] >   memory:             2164264Ki
	I0210 12:23:14.441333    5644 command_runner.go:130] >   pods:               110
	I0210 12:23:14.441333    5644 command_runner.go:130] > System Info:
	I0210 12:23:14.441333    5644 command_runner.go:130] >   Machine ID:                 96be2fd73058434db5f7c29ebdc5358c
	I0210 12:23:14.441333    5644 command_runner.go:130] >   System UUID:                d4861806-0b5d-3746-b974-5781fc0e801c
	I0210 12:23:14.441333    5644 command_runner.go:130] >   Boot ID:                    e5a245dc-eb44-45ba-a280-4e410deb107b
	I0210 12:23:14.441438    5644 command_runner.go:130] >   Kernel Version:             5.10.207
	I0210 12:23:14.441438    5644 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0210 12:23:14.441438    5644 command_runner.go:130] >   Operating System:           linux
	I0210 12:23:14.441438    5644 command_runner.go:130] >   Architecture:               amd64
	I0210 12:23:14.441438    5644 command_runner.go:130] >   Container Runtime Version:  docker://27.4.0
	I0210 12:23:14.441438    5644 command_runner.go:130] >   Kubelet Version:            v1.32.1
	I0210 12:23:14.441438    5644 command_runner.go:130] >   Kube-Proxy Version:         v1.32.1
	I0210 12:23:14.441438    5644 command_runner.go:130] > PodCIDR:                      10.244.4.0/24
	I0210 12:23:14.441438    5644 command_runner.go:130] > PodCIDRs:                     10.244.4.0/24
	I0210 12:23:14.441529    5644 command_runner.go:130] > Non-terminated Pods:          (2 in total)
	I0210 12:23:14.441529    5644 command_runner.go:130] >   Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0210 12:23:14.441529    5644 command_runner.go:130] >   ---------                   ----                ------------  ----------  ---------------  -------------  ---
	I0210 12:23:14.441564    5644 command_runner.go:130] >   kube-system                 kindnet-jcmlf       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	I0210 12:23:14.441564    5644 command_runner.go:130] >   kube-system                 kube-proxy-tbtqd    0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	I0210 12:23:14.441564    5644 command_runner.go:130] > Allocated resources:
	I0210 12:23:14.441564    5644 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0210 12:23:14.441564    5644 command_runner.go:130] >   Resource           Requests   Limits
	I0210 12:23:14.441651    5644 command_runner.go:130] >   --------           --------   ------
	I0210 12:23:14.441651    5644 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0210 12:23:14.441651    5644 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0210 12:23:14.441651    5644 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0210 12:23:14.441651    5644 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0210 12:23:14.441727    5644 command_runner.go:130] > Events:
	I0210 12:23:14.441727    5644 command_runner.go:130] >   Type    Reason                   Age                    From             Message
	I0210 12:23:14.441727    5644 command_runner.go:130] >   ----    ------                   ----                   ----             -------
	I0210 12:23:14.441727    5644 command_runner.go:130] >   Normal  Starting                 15m                    kube-proxy       
	I0210 12:23:14.441727    5644 command_runner.go:130] >   Normal  Starting                 5m39s                  kube-proxy       
	I0210 12:23:14.441793    5644 command_runner.go:130] >   Normal  NodeAllocatableEnforced  16m                    kubelet          Updated Node Allocatable limit across pods
	I0210 12:23:14.441793    5644 command_runner.go:130] >   Normal  NodeHasSufficientMemory  16m (x2 over 16m)      kubelet          Node multinode-032400-m03 status is now: NodeHasSufficientMemory
	I0210 12:23:14.441835    5644 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    16m (x2 over 16m)      kubelet          Node multinode-032400-m03 status is now: NodeHasNoDiskPressure
	I0210 12:23:14.441958    5644 command_runner.go:130] >   Normal  NodeHasSufficientPID     16m (x2 over 16m)      kubelet          Node multinode-032400-m03 status is now: NodeHasSufficientPID
	I0210 12:23:14.441958    5644 command_runner.go:130] >   Normal  NodeReady                15m                    kubelet          Node multinode-032400-m03 status is now: NodeReady
	I0210 12:23:14.441958    5644 command_runner.go:130] >   Normal  NodeHasSufficientMemory  5m43s (x2 over 5m44s)  kubelet          Node multinode-032400-m03 status is now: NodeHasSufficientMemory
	I0210 12:23:14.441958    5644 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    5m43s (x2 over 5m44s)  kubelet          Node multinode-032400-m03 status is now: NodeHasNoDiskPressure
	I0210 12:23:14.441958    5644 command_runner.go:130] >   Normal  NodeHasSufficientPID     5m43s (x2 over 5m44s)  kubelet          Node multinode-032400-m03 status is now: NodeHasSufficientPID
	I0210 12:23:14.441958    5644 command_runner.go:130] >   Normal  NodeAllocatableEnforced  5m43s                  kubelet          Updated Node Allocatable limit across pods
	I0210 12:23:14.441958    5644 command_runner.go:130] >   Normal  RegisteredNode           5m42s                  node-controller  Node multinode-032400-m03 event: Registered Node multinode-032400-m03 in Controller
	I0210 12:23:14.441958    5644 command_runner.go:130] >   Normal  NodeReady                5m28s                  kubelet          Node multinode-032400-m03 status is now: NodeReady
	I0210 12:23:14.441958    5644 command_runner.go:130] >   Normal  NodeNotReady             3m47s                  node-controller  Node multinode-032400-m03 status is now: NodeNotReady
	I0210 12:23:14.442144    5644 command_runner.go:130] >   Normal  RegisteredNode           73s                    node-controller  Node multinode-032400-m03 event: Registered Node multinode-032400-m03 in Controller
	I0210 12:23:14.452880    5644 logs.go:123] Gathering logs for etcd [2c0b97381825] ...
	I0210 12:23:14.452880    5644 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c0b97381825"
	I0210 12:23:14.488027    5644 command_runner.go:130] ! {"level":"warn","ts":"2025-02-10T12:21:54.704341Z","caller":"embed/config.go:689","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0210 12:23:14.488599    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.704447Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://172.29.129.181:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://172.29.129.181:2380","--initial-cluster=multinode-032400=https://172.29.129.181:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://172.29.129.181:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://172.29.129.181:2380","--name=multinode-032400","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	I0210 12:23:14.488711    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.704520Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I0210 12:23:14.488754    5644 command_runner.go:130] ! {"level":"warn","ts":"2025-02-10T12:21:54.704892Z","caller":"embed/config.go:689","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0210 12:23:14.488754    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.704933Z","caller":"embed/etcd.go:128","msg":"configuring peer listeners","listen-peer-urls":["https://172.29.129.181:2380"]}
	I0210 12:23:14.488802    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.704972Z","caller":"embed/etcd.go:497","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0210 12:23:14.488852    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.708617Z","caller":"embed/etcd.go:136","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://172.29.129.181:2379"]}
	I0210 12:23:14.488995    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.709796Z","caller":"embed/etcd.go:311","msg":"starting an etcd server","etcd-version":"3.5.16","git-sha":"f20bbad","go-version":"go1.22.7","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"multinode-032400","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://172.29.129.181:2380"],"listen-peer-urls":["https://172.29.129.181:2380"],"advertise-client-urls":["https://172.29.129.181:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.29.129.181:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	I0210 12:23:14.489038    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.729354Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"16.974017ms"}
	I0210 12:23:14.489038    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.755049Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	I0210 12:23:14.489084    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.785036Z","caller":"etcdserver/raft.go:540","msg":"restarting local member","cluster-id":"939f48ba2d0ec869","local-member-id":"9ecc865dcee1fe8f","commit-index":2031}
	I0210 12:23:14.489134    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.786308Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9ecc865dcee1fe8f switched to configuration voters=()"}
	I0210 12:23:14.489180    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.786538Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9ecc865dcee1fe8f became follower at term 2"}
	I0210 12:23:14.489221    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.786684Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 9ecc865dcee1fe8f [peers: [], term: 2, commit: 2031, applied: 0, lastindex: 2031, lastterm: 2]"}
	I0210 12:23:14.489221    5644 command_runner.go:130] ! {"level":"warn","ts":"2025-02-10T12:21:54.799505Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	I0210 12:23:14.489267    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.805220Z","caller":"mvcc/kvstore.go:346","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":1385}
	I0210 12:23:14.489314    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.819723Z","caller":"mvcc/kvstore.go:423","msg":"kvstore restored","current-rev":1757}
	I0210 12:23:14.489360    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.831867Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I0210 12:23:14.489360    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.839898Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"9ecc865dcee1fe8f","timeout":"7s"}
	I0210 12:23:14.489401    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.841678Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"9ecc865dcee1fe8f"}
	I0210 12:23:14.489446    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.841933Z","caller":"etcdserver/server.go:873","msg":"starting etcd server","local-member-id":"9ecc865dcee1fe8f","local-server-version":"3.5.16","cluster-version":"to_be_decided"}
	I0210 12:23:14.489495    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.842749Z","caller":"etcdserver/server.go:773","msg":"starting initial election tick advance","election-ticks":10}
	I0210 12:23:14.489495    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.844230Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	I0210 12:23:14.489581    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.846545Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0210 12:23:14.489676    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.847568Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"9ecc865dcee1fe8f","initial-advertise-peer-urls":["https://172.29.129.181:2380"],"listen-peer-urls":["https://172.29.129.181:2380"],"advertise-client-urls":["https://172.29.129.181:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.29.129.181:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I0210 12:23:14.489723    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.848293Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I0210 12:23:14.489764    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.848857Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I0210 12:23:14.489803    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.848872Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I0210 12:23:14.489852    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.849451Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I0210 12:23:14.489852    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.848623Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9ecc865dcee1fe8f switched to configuration voters=(11442668490702585487)"}
	I0210 12:23:14.489943    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.849568Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"939f48ba2d0ec869","local-member-id":"9ecc865dcee1fe8f","added-peer-id":"9ecc865dcee1fe8f","added-peer-peer-urls":["https://172.29.136.201:2380"]}
	I0210 12:23:14.489961    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.849668Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"939f48ba2d0ec869","local-member-id":"9ecc865dcee1fe8f","cluster-version":"3.5"}
	I0210 12:23:14.489961    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.849697Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	I0210 12:23:14.489961    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.848659Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"172.29.129.181:2380"}
	I0210 12:23:14.489961    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.850838Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"172.29.129.181:2380"}
	I0210 12:23:14.489961    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:56.088224Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9ecc865dcee1fe8f is starting a new election at term 2"}
	I0210 12:23:14.489961    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:56.088502Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9ecc865dcee1fe8f became pre-candidate at term 2"}
	I0210 12:23:14.489961    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:56.088614Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9ecc865dcee1fe8f received MsgPreVoteResp from 9ecc865dcee1fe8f at term 2"}
	I0210 12:23:14.489961    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:56.088706Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9ecc865dcee1fe8f became candidate at term 3"}
	I0210 12:23:14.489961    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:56.088790Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9ecc865dcee1fe8f received MsgVoteResp from 9ecc865dcee1fe8f at term 3"}
	I0210 12:23:14.489961    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:56.088924Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9ecc865dcee1fe8f became leader at term 3"}
	I0210 12:23:14.489961    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:56.089002Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9ecc865dcee1fe8f elected leader 9ecc865dcee1fe8f at term 3"}
	I0210 12:23:14.489961    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:56.095452Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"9ecc865dcee1fe8f","local-member-attributes":"{Name:multinode-032400 ClientURLs:[https://172.29.129.181:2379]}","request-path":"/0/members/9ecc865dcee1fe8f/attributes","cluster-id":"939f48ba2d0ec869","publish-timeout":"7s"}
	I0210 12:23:14.489961    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:56.095630Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0210 12:23:14.489961    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:56.098215Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I0210 12:23:14.489961    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:56.098453Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I0210 12:23:14.489961    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:56.095689Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0210 12:23:14.489961    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:56.109624Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	I0210 12:23:14.489961    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:56.115942Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	I0210 12:23:14.489961    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:56.120127Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	I0210 12:23:14.489961    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:56.126374Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.29.129.181:2379"}
	I0210 12:23:14.498219    5644 logs.go:123] Gathering logs for kube-proxy [148309413de8] ...
	I0210 12:23:14.498755    5644 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 148309413de8"
	I0210 12:23:14.525515    5644 command_runner.go:130] ! I0210 11:59:18.625067       1 server_linux.go:66] "Using iptables proxy"
	I0210 12:23:14.525982    5644 command_runner.go:130] ! E0210 11:59:18.658116       1 proxier.go:733] "Error cleaning up nftables rules" err=<
	I0210 12:23:14.526017    5644 command_runner.go:130] ! 	could not run nftables command: /dev/stdin:1:1-24: Error: Could not process rule: Operation not supported
	I0210 12:23:14.526065    5644 command_runner.go:130] ! 	add table ip kube-proxy
	I0210 12:23:14.526065    5644 command_runner.go:130] ! 	^^^^^^^^^^^^^^^^^^^^^^^^
	I0210 12:23:14.526065    5644 command_runner.go:130] !  >
	I0210 12:23:14.526101    5644 command_runner.go:130] ! E0210 11:59:18.694686       1 proxier.go:733] "Error cleaning up nftables rules" err=<
	I0210 12:23:14.526101    5644 command_runner.go:130] ! 	could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
	I0210 12:23:14.526147    5644 command_runner.go:130] ! 	add table ip6 kube-proxy
	I0210 12:23:14.526147    5644 command_runner.go:130] ! 	^^^^^^^^^^^^^^^^^^^^^^^^^
	I0210 12:23:14.526147    5644 command_runner.go:130] !  >
	I0210 12:23:14.526147    5644 command_runner.go:130] ! I0210 11:59:19.251769       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["172.29.136.201"]
	I0210 12:23:14.526188    5644 command_runner.go:130] ! E0210 11:59:19.252111       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0210 12:23:14.526188    5644 command_runner.go:130] ! I0210 11:59:19.312427       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0210 12:23:14.526241    5644 command_runner.go:130] ! I0210 11:59:19.312556       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0210 12:23:14.526241    5644 command_runner.go:130] ! I0210 11:59:19.312585       1 server_linux.go:170] "Using iptables Proxier"
	I0210 12:23:14.526329    5644 command_runner.go:130] ! I0210 11:59:19.317423       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0210 12:23:14.526329    5644 command_runner.go:130] ! I0210 11:59:19.318586       1 server.go:497] "Version info" version="v1.32.1"
	I0210 12:23:14.526371    5644 command_runner.go:130] ! I0210 11:59:19.318681       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0210 12:23:14.526371    5644 command_runner.go:130] ! I0210 11:59:19.321084       1 config.go:199] "Starting service config controller"
	I0210 12:23:14.526371    5644 command_runner.go:130] ! I0210 11:59:19.321134       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0210 12:23:14.526425    5644 command_runner.go:130] ! I0210 11:59:19.321326       1 config.go:105] "Starting endpoint slice config controller"
	I0210 12:23:14.526425    5644 command_runner.go:130] ! I0210 11:59:19.321339       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0210 12:23:14.526467    5644 command_runner.go:130] ! I0210 11:59:19.322061       1 config.go:329] "Starting node config controller"
	I0210 12:23:14.526513    5644 command_runner.go:130] ! I0210 11:59:19.322091       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0210 12:23:14.526513    5644 command_runner.go:130] ! I0210 11:59:19.421972       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0210 12:23:14.526554    5644 command_runner.go:130] ! I0210 11:59:19.422030       1 shared_informer.go:320] Caches are synced for service config
	I0210 12:23:14.526554    5644 command_runner.go:130] ! I0210 11:59:19.423298       1 shared_informer.go:320] Caches are synced for node config
	I0210 12:23:14.530209    5644 logs.go:123] Gathering logs for Docker ...
	I0210 12:23:14.530260    5644 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0210 12:23:14.561305    5644 command_runner.go:130] > Feb 10 12:20:33 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0210 12:23:14.561370    5644 command_runner.go:130] > Feb 10 12:20:33 minikube cri-dockerd[223]: time="2025-02-10T12:20:33Z" level=info msg="Starting cri-dockerd 0.3.15 (c1c566e)"
	I0210 12:23:14.561424    5644 command_runner.go:130] > Feb 10 12:20:33 minikube cri-dockerd[223]: time="2025-02-10T12:20:33Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0210 12:23:14.561424    5644 command_runner.go:130] > Feb 10 12:20:33 minikube cri-dockerd[223]: time="2025-02-10T12:20:33Z" level=info msg="Start docker client with request timeout 0s"
	I0210 12:23:14.561540    5644 command_runner.go:130] > Feb 10 12:20:33 minikube cri-dockerd[223]: time="2025-02-10T12:20:33Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0210 12:23:14.561540    5644 command_runner.go:130] > Feb 10 12:20:33 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0210 12:23:14.561609    5644 command_runner.go:130] > Feb 10 12:20:33 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0210 12:23:14.561672    5644 command_runner.go:130] > Feb 10 12:20:33 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0210 12:23:14.561734    5644 command_runner.go:130] > Feb 10 12:20:36 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 1.
	I0210 12:23:14.561734    5644 command_runner.go:130] > Feb 10 12:20:36 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0210 12:23:14.561734    5644 command_runner.go:130] > Feb 10 12:20:36 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0210 12:23:14.561798    5644 command_runner.go:130] > Feb 10 12:20:36 minikube cri-dockerd[417]: time="2025-02-10T12:20:36Z" level=info msg="Starting cri-dockerd 0.3.15 (c1c566e)"
	I0210 12:23:14.561798    5644 command_runner.go:130] > Feb 10 12:20:36 minikube cri-dockerd[417]: time="2025-02-10T12:20:36Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0210 12:23:14.561859    5644 command_runner.go:130] > Feb 10 12:20:36 minikube cri-dockerd[417]: time="2025-02-10T12:20:36Z" level=info msg="Start docker client with request timeout 0s"
	I0210 12:23:14.561921    5644 command_runner.go:130] > Feb 10 12:20:36 minikube cri-dockerd[417]: time="2025-02-10T12:20:36Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0210 12:23:14.561976    5644 command_runner.go:130] > Feb 10 12:20:36 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0210 12:23:14.562037    5644 command_runner.go:130] > Feb 10 12:20:36 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0210 12:23:14.562094    5644 command_runner.go:130] > Feb 10 12:20:36 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0210 12:23:14.562154    5644 command_runner.go:130] > Feb 10 12:20:38 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 2.
	I0210 12:23:14.562154    5644 command_runner.go:130] > Feb 10 12:20:38 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0210 12:23:14.562211    5644 command_runner.go:130] > Feb 10 12:20:38 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0210 12:23:14.562211    5644 command_runner.go:130] > Feb 10 12:20:38 minikube cri-dockerd[425]: time="2025-02-10T12:20:38Z" level=info msg="Starting cri-dockerd 0.3.15 (c1c566e)"
	I0210 12:23:14.562272    5644 command_runner.go:130] > Feb 10 12:20:38 minikube cri-dockerd[425]: time="2025-02-10T12:20:38Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0210 12:23:14.562337    5644 command_runner.go:130] > Feb 10 12:20:38 minikube cri-dockerd[425]: time="2025-02-10T12:20:38Z" level=info msg="Start docker client with request timeout 0s"
	I0210 12:23:14.562397    5644 command_runner.go:130] > Feb 10 12:20:38 minikube cri-dockerd[425]: time="2025-02-10T12:20:38Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0210 12:23:14.562452    5644 command_runner.go:130] > Feb 10 12:20:38 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0210 12:23:14.562452    5644 command_runner.go:130] > Feb 10 12:20:38 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0210 12:23:14.562512    5644 command_runner.go:130] > Feb 10 12:20:38 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0210 12:23:14.562578    5644 command_runner.go:130] > Feb 10 12:20:40 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 3.
	I0210 12:23:14.562578    5644 command_runner.go:130] > Feb 10 12:20:40 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0210 12:23:14.562638    5644 command_runner.go:130] > Feb 10 12:20:40 minikube systemd[1]: cri-docker.service: Start request repeated too quickly.
	I0210 12:23:14.562693    5644 command_runner.go:130] > Feb 10 12:20:40 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0210 12:23:14.562693    5644 command_runner.go:130] > Feb 10 12:20:40 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0210 12:23:14.562755    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 systemd[1]: Starting Docker Application Container Engine...
	I0210 12:23:14.562812    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[652]: time="2025-02-10T12:21:19.226981799Z" level=info msg="Starting up"
	I0210 12:23:14.562872    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[652]: time="2025-02-10T12:21:19.228905904Z" level=info msg="containerd not running, starting managed containerd"
	I0210 12:23:14.562872    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[652]: time="2025-02-10T12:21:19.229983406Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=658
	I0210 12:23:14.562937    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.261668386Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	I0210 12:23:14.562998    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.289760856Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0210 12:23:14.563055    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.289873057Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0210 12:23:14.563115    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.289938357Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0210 12:23:14.563172    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.289955257Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0210 12:23:14.563233    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.290688059Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0210 12:23:14.563233    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.290855359Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0210 12:23:14.563288    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.291046360Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0210 12:23:14.563349    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.291150260Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0210 12:23:14.563403    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.291171360Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0210 12:23:14.563463    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.291182560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0210 12:23:14.563520    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.291676861Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0210 12:23:14.563520    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.292369263Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0210 12:23:14.563581    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.300517383Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0210 12:23:14.563646    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.300550484Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0210 12:23:14.563765    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.300790784Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0210 12:23:14.563827    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.300846284Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0210 12:23:14.563891    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.301486786Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0210 12:23:14.563891    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.301530786Z" level=info msg="metadata content store policy set" policy=shared
	I0210 12:23:14.563954    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.306800699Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0210 12:23:14.564012    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.306938800Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0210 12:23:14.564073    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.306962400Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0210 12:23:14.564073    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.306982400Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0210 12:23:14.564133    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.306998000Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0210 12:23:14.564195    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.307070900Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0210 12:23:14.564254    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.307354201Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0210 12:23:14.564316    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.307779102Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0210 12:23:14.564375    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.307803302Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0210 12:23:14.564375    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.307819902Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0210 12:23:14.564437    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.307835502Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0210 12:23:14.564494    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.307854902Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0210 12:23:14.564563    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.307868302Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0210 12:23:14.564620    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.307886902Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0210 12:23:14.564683    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.307903802Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0210 12:23:14.564743    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.307918302Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0210 12:23:14.564743    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.307933302Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0210 12:23:14.564804    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.307946902Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0210 12:23:14.564861    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.307973202Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0210 12:23:14.564922    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.307988502Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0210 12:23:14.564977    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308002102Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0210 12:23:14.565036    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308018302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0210 12:23:14.565092    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308031702Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0210 12:23:14.565151    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308046102Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0210 12:23:14.565206    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308058902Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0210 12:23:14.565265    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308073102Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0210 12:23:14.565265    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308088402Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0210 12:23:14.565322    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308111803Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0210 12:23:14.565382    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308139203Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0210 12:23:14.565437    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308154703Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0210 12:23:14.565497    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308168203Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0210 12:23:14.565497    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308185103Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0210 12:23:14.565563    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308206703Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0210 12:23:14.565622    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308220903Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0210 12:23:14.565677    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308233503Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0210 12:23:14.565737    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308287903Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0210 12:23:14.565797    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308326803Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0210 12:23:14.565858    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308340203Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0210 12:23:14.565918    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308354603Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0210 12:23:14.566025    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308366403Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0210 12:23:14.566071    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308381203Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0210 12:23:14.566136    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308392603Z" level=info msg="NRI interface is disabled by configuration."
	I0210 12:23:14.566136    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308672504Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0210 12:23:14.566196    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308811104Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0210 12:23:14.566196    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308872804Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0210 12:23:14.566266    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308911105Z" level=info msg="containerd successfully booted in 0.050730s"
	I0210 12:23:14.566325    5644 command_runner.go:130] > Feb 10 12:21:20 multinode-032400 dockerd[652]: time="2025-02-10T12:21:20.282476810Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0210 12:23:14.566381    5644 command_runner.go:130] > Feb 10 12:21:20 multinode-032400 dockerd[652]: time="2025-02-10T12:21:20.530993194Z" level=info msg="Loading containers: start."
	I0210 12:23:14.566441    5644 command_runner.go:130] > Feb 10 12:21:20 multinode-032400 dockerd[652]: time="2025-02-10T12:21:20.796529619Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0210 12:23:14.566496    5644 command_runner.go:130] > Feb 10 12:21:20 multinode-032400 dockerd[652]: time="2025-02-10T12:21:20.946848197Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0210 12:23:14.566496    5644 command_runner.go:130] > Feb 10 12:21:21 multinode-032400 dockerd[652]: time="2025-02-10T12:21:21.063713732Z" level=info msg="Loading containers: done."
	I0210 12:23:14.566557    5644 command_runner.go:130] > Feb 10 12:21:21 multinode-032400 dockerd[652]: time="2025-02-10T12:21:21.090121636Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	I0210 12:23:14.566612    5644 command_runner.go:130] > Feb 10 12:21:21 multinode-032400 dockerd[652]: time="2025-02-10T12:21:21.090236272Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	I0210 12:23:14.566671    5644 command_runner.go:130] > Feb 10 12:21:21 multinode-032400 dockerd[652]: time="2025-02-10T12:21:21.090266381Z" level=info msg="Docker daemon" commit=92a8393 containerd-snapshotter=false storage-driver=overlay2 version=27.4.0
	I0210 12:23:14.566728    5644 command_runner.go:130] > Feb 10 12:21:21 multinode-032400 dockerd[652]: time="2025-02-10T12:21:21.090811448Z" level=info msg="Daemon has completed initialization"
	I0210 12:23:14.566728    5644 command_runner.go:130] > Feb 10 12:21:21 multinode-032400 dockerd[652]: time="2025-02-10T12:21:21.131876651Z" level=info msg="API listen on /var/run/docker.sock"
	I0210 12:23:14.566791    5644 command_runner.go:130] > Feb 10 12:21:21 multinode-032400 dockerd[652]: time="2025-02-10T12:21:21.132103020Z" level=info msg="API listen on [::]:2376"
	I0210 12:23:14.566791    5644 command_runner.go:130] > Feb 10 12:21:21 multinode-032400 systemd[1]: Started Docker Application Container Engine.
	I0210 12:23:14.566849    5644 command_runner.go:130] > Feb 10 12:21:45 multinode-032400 dockerd[652]: time="2025-02-10T12:21:45.024556788Z" level=info msg="Processing signal 'terminated'"
	I0210 12:23:14.566909    5644 command_runner.go:130] > Feb 10 12:21:45 multinode-032400 dockerd[652]: time="2025-02-10T12:21:45.027219616Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0210 12:23:14.566965    5644 command_runner.go:130] > Feb 10 12:21:45 multinode-032400 systemd[1]: Stopping Docker Application Container Engine...
	I0210 12:23:14.566965    5644 command_runner.go:130] > Feb 10 12:21:45 multinode-032400 dockerd[652]: time="2025-02-10T12:21:45.028493777Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0210 12:23:14.567079    5644 command_runner.go:130] > Feb 10 12:21:45 multinode-032400 dockerd[652]: time="2025-02-10T12:21:45.028923098Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0210 12:23:14.567079    5644 command_runner.go:130] > Feb 10 12:21:45 multinode-032400 dockerd[652]: time="2025-02-10T12:21:45.029499825Z" level=info msg="Daemon shutdown complete"
	I0210 12:23:14.567138    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 systemd[1]: docker.service: Deactivated successfully.
	I0210 12:23:14.567138    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 systemd[1]: Stopped Docker Application Container Engine.
	I0210 12:23:14.567203    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 systemd[1]: Starting Docker Application Container Engine...
	I0210 12:23:14.567203    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1101]: time="2025-02-10T12:21:46.084081094Z" level=info msg="Starting up"
	I0210 12:23:14.567263    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1101]: time="2025-02-10T12:21:46.084976538Z" level=info msg="containerd not running, starting managed containerd"
	I0210 12:23:14.567320    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1101]: time="2025-02-10T12:21:46.085890382Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1108
	I0210 12:23:14.567381    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.115367801Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	I0210 12:23:14.567446    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.141577962Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0210 12:23:14.567495    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.141694568Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0210 12:23:14.567532    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.141841575Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0210 12:23:14.567581    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.141861576Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0210 12:23:14.567636    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.141895578Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0210 12:23:14.567684    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.141908978Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0210 12:23:14.567738    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.142072686Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0210 12:23:14.567799    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.142222293Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0210 12:23:14.567892    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.142244195Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0210 12:23:14.567952    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.142261595Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0210 12:23:14.568003    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.142290097Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0210 12:23:14.568058    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.142407302Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0210 12:23:14.568109    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.145701161Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0210 12:23:14.568164    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.145822967Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0210 12:23:14.568215    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.145984775Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0210 12:23:14.568300    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.146081579Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0210 12:23:14.568353    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.146115481Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0210 12:23:14.568403    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.146134282Z" level=info msg="metadata content store policy set" policy=shared
	I0210 12:23:14.568459    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.146552002Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0210 12:23:14.568511    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.146601004Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0210 12:23:14.568511    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.146617705Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0210 12:23:14.568567    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.146633006Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0210 12:23:14.568617    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.146647807Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0210 12:23:14.568725    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.146697109Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0210 12:23:14.568781    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147110429Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0210 12:23:14.568833    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147324539Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0210 12:23:14.568887    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147423444Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0210 12:23:14.568937    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147441845Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0210 12:23:14.568937    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147456345Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0210 12:23:14.569004    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147470646Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0210 12:23:14.569064    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147499048Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0210 12:23:14.569121    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147516448Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0210 12:23:14.569170    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147532049Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0210 12:23:14.569224    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147546750Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0210 12:23:14.569224    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147559350Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0210 12:23:14.569224    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147573151Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0210 12:23:14.569224    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147593252Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0210 12:23:14.569224    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147608153Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0210 12:23:14.569224    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147634954Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0210 12:23:14.569224    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147654755Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0210 12:23:14.569224    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147668856Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0210 12:23:14.569224    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147683556Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0210 12:23:14.569224    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147697257Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0210 12:23:14.569224    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147710658Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0210 12:23:14.569224    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147724858Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0210 12:23:14.569224    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147802262Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0210 12:23:14.569224    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147821763Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0210 12:23:14.569224    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147834964Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0210 12:23:14.569224    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147859465Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0210 12:23:14.569224    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147878466Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0210 12:23:14.569224    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147900267Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0210 12:23:14.569224    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147914067Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0210 12:23:14.569224    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147927668Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0210 12:23:14.569224    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.148050374Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0210 12:23:14.569224    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.148087376Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0210 12:23:14.569224    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.148100476Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0210 12:23:14.569767    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.148113477Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0210 12:23:14.569767    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.148124578Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0210 12:23:14.569891    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.148138778Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0210 12:23:14.569891    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.148151679Z" level=info msg="NRI interface is disabled by configuration."
	I0210 12:23:14.569891    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.148991719Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0210 12:23:14.569891    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.149071923Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0210 12:23:14.569891    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.149146027Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0210 12:23:14.569891    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.149657651Z" level=info msg="containerd successfully booted in 0.035320s"
	I0210 12:23:14.569891    5644 command_runner.go:130] > Feb 10 12:21:47 multinode-032400 dockerd[1101]: time="2025-02-10T12:21:47.124814897Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0210 12:23:14.569891    5644 command_runner.go:130] > Feb 10 12:21:47 multinode-032400 dockerd[1101]: time="2025-02-10T12:21:47.155572178Z" level=info msg="Loading containers: start."
	I0210 12:23:14.569891    5644 command_runner.go:130] > Feb 10 12:21:47 multinode-032400 dockerd[1101]: time="2025-02-10T12:21:47.380096187Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0210 12:23:14.569891    5644 command_runner.go:130] > Feb 10 12:21:47 multinode-032400 dockerd[1101]: time="2025-02-10T12:21:47.494116276Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0210 12:23:14.569891    5644 command_runner.go:130] > Feb 10 12:21:47 multinode-032400 dockerd[1101]: time="2025-02-10T12:21:47.609502830Z" level=info msg="Loading containers: done."
	I0210 12:23:14.569891    5644 command_runner.go:130] > Feb 10 12:21:47 multinode-032400 dockerd[1101]: time="2025-02-10T12:21:47.634336526Z" level=info msg="Docker daemon" commit=92a8393 containerd-snapshotter=false storage-driver=overlay2 version=27.4.0
	I0210 12:23:14.569891    5644 command_runner.go:130] > Feb 10 12:21:47 multinode-032400 dockerd[1101]: time="2025-02-10T12:21:47.634493434Z" level=info msg="Daemon has completed initialization"
	I0210 12:23:14.569891    5644 command_runner.go:130] > Feb 10 12:21:47 multinode-032400 dockerd[1101]: time="2025-02-10T12:21:47.668508371Z" level=info msg="API listen on /var/run/docker.sock"
	I0210 12:23:14.569891    5644 command_runner.go:130] > Feb 10 12:21:47 multinode-032400 dockerd[1101]: time="2025-02-10T12:21:47.668715581Z" level=info msg="API listen on [::]:2376"
	I0210 12:23:14.569891    5644 command_runner.go:130] > Feb 10 12:21:47 multinode-032400 systemd[1]: Started Docker Application Container Engine.
	I0210 12:23:14.569891    5644 command_runner.go:130] > Feb 10 12:21:48 multinode-032400 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0210 12:23:14.569891    5644 command_runner.go:130] > Feb 10 12:21:48 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:21:48Z" level=info msg="Starting cri-dockerd 0.3.15 (c1c566e)"
	I0210 12:23:14.569891    5644 command_runner.go:130] > Feb 10 12:21:48 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:21:48Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0210 12:23:14.569891    5644 command_runner.go:130] > Feb 10 12:21:48 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:21:48Z" level=info msg="Start docker client with request timeout 0s"
	I0210 12:23:14.569891    5644 command_runner.go:130] > Feb 10 12:21:48 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:21:48Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I0210 12:23:14.569891    5644 command_runner.go:130] > Feb 10 12:21:48 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:21:48Z" level=info msg="Loaded network plugin cni"
	I0210 12:23:14.569891    5644 command_runner.go:130] > Feb 10 12:21:48 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:21:48Z" level=info msg="Docker cri networking managed by network plugin cni"
	I0210 12:23:14.569891    5644 command_runner.go:130] > Feb 10 12:21:48 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:21:48Z" level=info msg="Setting cgroupDriver cgroupfs"
	I0210 12:23:14.569891    5644 command_runner.go:130] > Feb 10 12:21:48 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:21:48Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I0210 12:23:14.569891    5644 command_runner.go:130] > Feb 10 12:21:48 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:21:48Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I0210 12:23:14.569891    5644 command_runner.go:130] > Feb 10 12:21:48 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:21:48Z" level=info msg="Start cri-dockerd grpc backend"
	I0210 12:23:14.570434    5644 command_runner.go:130] > Feb 10 12:21:48 multinode-032400 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I0210 12:23:14.570434    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:21:53Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-58667487b6-8shfg_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"ed034f1578c961d966543fd6901ac487893b2d9c55235293f852b6eba2ffef59\""
	I0210 12:23:14.570534    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:21:53Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-668d6bf9bc-w8rr9_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"794995bca6b5b46f3dd1b76ae2e6fa45046bd172772508f61b870824d72e297b\""
	I0210 12:23:14.570576    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:53.688319673Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0210 12:23:14.570576    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:53.688604987Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0210 12:23:14.570576    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:53.688649189Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:14.570576    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:53.689336722Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:14.570576    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:53.785048930Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0210 12:23:14.570576    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:53.785211338Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0210 12:23:14.570576    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:53.785249040Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:14.570576    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:53.787201934Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:14.570576    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:21:53Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8059b20f65945591b4ecc2d3aa8b6e119909c5a5c01922ce471ced5e88f22c37/resolv.conf as [nameserver 172.29.128.1]"
	I0210 12:23:14.570576    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:53.859964137Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0210 12:23:14.570576    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:53.860819978Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0210 12:23:14.570576    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:53.861045089Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:14.570576    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:53.861827326Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:14.570576    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:53.866236838Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0210 12:23:14.570576    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:53.866716362Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0210 12:23:14.570576    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:53.867048178Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:14.570576    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:53.870617949Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:14.570576    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:21:53Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/016ad4d720680495a67c18e1390ee8683611cb3b95ee6ded4cb744a3ca3655d5/resolv.conf as [nameserver 172.29.128.1]"
	I0210 12:23:14.570576    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:21:54Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ae5696c38864ac99a03d829d566b6a832f69523032ff0af02300ad95789380ce/resolv.conf as [nameserver 172.29.128.1]"
	I0210 12:23:14.570576    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:21:54Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/9c3e574a334980f77de3f0fd8bd1af8a3597c32a3c5f9d94fec925b6f3c76d4e/resolv.conf as [nameserver 172.29.128.1]"
	I0210 12:23:14.570576    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:54.054858919Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0210 12:23:14.571114    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:54.055041728Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0210 12:23:14.571163    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:54.055266639Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:14.571209    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:54.055571653Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:14.571209    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:54.351555902Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0210 12:23:14.571209    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:54.351618605Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0210 12:23:14.571209    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:54.351631706Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:14.571209    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:54.351796314Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:14.571209    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:54.356626447Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0210 12:23:14.571209    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:54.356728951Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0210 12:23:14.571209    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:54.356756153Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:14.571209    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:54.357270278Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:14.571209    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:54.400696468Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0210 12:23:14.571209    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:54.400993282Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0210 12:23:14.571209    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:54.401148890Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:14.571209    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:54.401585911Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:14.571209    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:21:58Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	I0210 12:23:14.571209    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:59.586724531Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0210 12:23:14.571209    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:59.586851637Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0210 12:23:14.571209    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:59.586897839Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:14.571209    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:59.587096549Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:14.571209    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:59.622779367Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0210 12:23:14.571209    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:59.622857870Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0210 12:23:14.571209    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:59.622884072Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:14.571209    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:59.623098482Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:14.571209    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:59.638867841Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0210 12:23:14.571209    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:59.639329463Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0210 12:23:14.571209    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:59.639489271Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:14.571209    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:59.639867989Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:14.571209    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:21:59Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b9afdceca416df5c16e84b3e0c78f25ca1fa77413c28fe48e1fe1aceabb91c44/resolv.conf as [nameserver 172.29.128.1]"
	I0210 12:23:14.571209    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:21:59Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/bd998e6ebeb39ea743489a0e4d48c282d6e8da289cd8341c4c5099d9836e6f73/resolv.conf as [nameserver 172.29.128.1]"
	I0210 12:23:14.571209    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:59.937150501Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0210 12:23:14.571209    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:59.937256006Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0210 12:23:14.571209    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:59.937275107Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:14.571209    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:59.937381912Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:14.571209    5644 command_runner.go:130] > Feb 10 12:22:00 multinode-032400 dockerd[1108]: time="2025-02-10T12:22:00.025525655Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0210 12:23:14.571209    5644 command_runner.go:130] > Feb 10 12:22:00 multinode-032400 dockerd[1108]: time="2025-02-10T12:22:00.025702264Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0210 12:23:14.571209    5644 command_runner.go:130] > Feb 10 12:22:00 multinode-032400 dockerd[1108]: time="2025-02-10T12:22:00.025767267Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:14.571209    5644 command_runner.go:130] > Feb 10 12:22:00 multinode-032400 dockerd[1108]: time="2025-02-10T12:22:00.026050381Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:14.571209    5644 command_runner.go:130] > Feb 10 12:22:00 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:22:00Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e5b54589cf0f1effd7254987c6ce12359f402596b5494fa5c8f0bf296c219b89/resolv.conf as [nameserver 172.29.128.1]"
	I0210 12:23:14.571209    5644 command_runner.go:130] > Feb 10 12:22:00 multinode-032400 dockerd[1108]: time="2025-02-10T12:22:00.385763898Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0210 12:23:14.571209    5644 command_runner.go:130] > Feb 10 12:22:00 multinode-032400 dockerd[1108]: time="2025-02-10T12:22:00.385836401Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0210 12:23:14.571209    5644 command_runner.go:130] > Feb 10 12:22:00 multinode-032400 dockerd[1108]: time="2025-02-10T12:22:00.385859502Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:14.572164    5644 command_runner.go:130] > Feb 10 12:22:00 multinode-032400 dockerd[1108]: time="2025-02-10T12:22:00.385961307Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:14.572164    5644 command_runner.go:130] > Feb 10 12:22:30 multinode-032400 dockerd[1101]: time="2025-02-10T12:22:30.686630853Z" level=info msg="ignoring event" container=e57ea4c7f300b864e801bda292b196e3a270d6bd3eb9cfac6bf5f66a858f9f7c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0210 12:23:14.572164    5644 command_runner.go:130] > Feb 10 12:22:30 multinode-032400 dockerd[1108]: time="2025-02-10T12:22:30.687686798Z" level=info msg="shim disconnected" id=e57ea4c7f300b864e801bda292b196e3a270d6bd3eb9cfac6bf5f66a858f9f7c namespace=moby
	I0210 12:23:14.572164    5644 command_runner.go:130] > Feb 10 12:22:30 multinode-032400 dockerd[1108]: time="2025-02-10T12:22:30.687806803Z" level=warning msg="cleaning up after shim disconnected" id=e57ea4c7f300b864e801bda292b196e3a270d6bd3eb9cfac6bf5f66a858f9f7c namespace=moby
	I0210 12:23:14.572164    5644 command_runner.go:130] > Feb 10 12:22:30 multinode-032400 dockerd[1108]: time="2025-02-10T12:22:30.687819404Z" level=info msg="cleaning up dead shim" namespace=moby
	I0210 12:23:14.572164    5644 command_runner.go:130] > Feb 10 12:22:44 multinode-032400 dockerd[1108]: time="2025-02-10T12:22:44.147917374Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0210 12:23:14.572164    5644 command_runner.go:130] > Feb 10 12:22:44 multinode-032400 dockerd[1108]: time="2025-02-10T12:22:44.148327693Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0210 12:23:14.572164    5644 command_runner.go:130] > Feb 10 12:22:44 multinode-032400 dockerd[1108]: time="2025-02-10T12:22:44.148398496Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:14.572164    5644 command_runner.go:130] > Feb 10 12:22:44 multinode-032400 dockerd[1108]: time="2025-02-10T12:22:44.150441890Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:14.572164    5644 command_runner.go:130] > Feb 10 12:23:03 multinode-032400 dockerd[1108]: time="2025-02-10T12:23:03.123856000Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0210 12:23:14.572164    5644 command_runner.go:130] > Feb 10 12:23:03 multinode-032400 dockerd[1108]: time="2025-02-10T12:23:03.127018550Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0210 12:23:14.572164    5644 command_runner.go:130] > Feb 10 12:23:03 multinode-032400 dockerd[1108]: time="2025-02-10T12:23:03.127103354Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:14.572164    5644 command_runner.go:130] > Feb 10 12:23:03 multinode-032400 dockerd[1108]: time="2025-02-10T12:23:03.127439770Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:14.572164    5644 command_runner.go:130] > Feb 10 12:23:03 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:23:03Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e58006549b603113870cd02660dd94559d10ecb0913a1bdd2c81910c3d60a706/resolv.conf as [nameserver 172.29.128.1]"
	I0210 12:23:14.572164    5644 command_runner.go:130] > Feb 10 12:23:03 multinode-032400 dockerd[1108]: time="2025-02-10T12:23:03.402609649Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0210 12:23:14.572164    5644 command_runner.go:130] > Feb 10 12:23:03 multinode-032400 dockerd[1108]: time="2025-02-10T12:23:03.402766755Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0210 12:23:14.572164    5644 command_runner.go:130] > Feb 10 12:23:03 multinode-032400 dockerd[1108]: time="2025-02-10T12:23:03.402783356Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:14.572164    5644 command_runner.go:130] > Feb 10 12:23:03 multinode-032400 dockerd[1108]: time="2025-02-10T12:23:03.402879760Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:14.572164    5644 command_runner.go:130] > Feb 10 12:23:03 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:23:03Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0f8bbdd2fed29af0f92968c554c574d411a6dcf8a8d801926012379ff2a258af/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	I0210 12:23:14.572164    5644 command_runner.go:130] > Feb 10 12:23:03 multinode-032400 dockerd[1108]: time="2025-02-10T12:23:03.617663803Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0210 12:23:14.572164    5644 command_runner.go:130] > Feb 10 12:23:03 multinode-032400 dockerd[1108]: time="2025-02-10T12:23:03.618733746Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0210 12:23:14.572164    5644 command_runner.go:130] > Feb 10 12:23:03 multinode-032400 dockerd[1108]: time="2025-02-10T12:23:03.619050158Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:14.572164    5644 command_runner.go:130] > Feb 10 12:23:03 multinode-032400 dockerd[1108]: time="2025-02-10T12:23:03.619308269Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:14.572164    5644 command_runner.go:130] > Feb 10 12:23:03 multinode-032400 dockerd[1108]: time="2025-02-10T12:23:03.840758177Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0210 12:23:14.572164    5644 command_runner.go:130] > Feb 10 12:23:03 multinode-032400 dockerd[1108]: time="2025-02-10T12:23:03.840943685Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0210 12:23:14.572164    5644 command_runner.go:130] > Feb 10 12:23:03 multinode-032400 dockerd[1108]: time="2025-02-10T12:23:03.840973286Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:14.572164    5644 command_runner.go:130] > Feb 10 12:23:03 multinode-032400 dockerd[1108]: time="2025-02-10T12:23:03.848920902Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:17.102810    5644 type.go:204] "Request Body" body=""
	I0210 12:23:17.102894    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods
	I0210 12:23:17.102894    5644 round_trippers.go:476] Request Headers:
	I0210 12:23:17.103022    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:23:17.103022    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:23:17.107460    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:23:17.107460    5644 round_trippers.go:584] Response Headers:
	I0210 12:23:17.107551    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:23:17.107551    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:23:17.107551    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:23:17 GMT
	I0210 12:23:17.107551    5644 round_trippers.go:587]     Audit-Id: c55d65cc-0aaf-4210-b363-41902862e56b
	I0210 12:23:17.107551    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:23:17.107551    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:23:17.112916    5644 type.go:204] "Response Body" body=<
		00000000  6b 38 73 00 0a 0d 0a 02  76 31 12 07 50 6f 64 4c  |k8s.....v1..PodL|
		00000010  69 73 74 12 e8 eb 03 0a  0a 0a 00 12 04 31 39 38  |ist..........198|
		00000020  35 1a 00 12 c5 28 0a af  19 0a 18 63 6f 72 65 64  |5....(.....cored|
		00000030  6e 73 2d 36 36 38 64 36  62 66 39 62 63 2d 77 38  |ns-668d6bf9bc-w8|
		00000040  72 72 39 12 13 63 6f 72  65 64 6e 73 2d 36 36 38  |rr9..coredns-668|
		00000050  64 36 62 66 39 62 63 2d  1a 0b 6b 75 62 65 2d 73  |d6bf9bc-..kube-s|
		00000060  79 73 74 65 6d 22 00 2a  24 65 34 35 61 33 37 62  |ystem".*$e45a37b|
		00000070  66 2d 65 37 64 61 2d 34  31 32 39 2d 62 62 37 65  |f-e7da-4129-bb7e|
		00000080  2d 38 62 65 37 64 62 65  39 33 65 30 39 32 04 31  |-8be7dbe93e092.1|
		00000090  39 37 32 38 00 42 08 08  92 d4 a7 bd 06 10 00 5a  |9728.B.........Z|
		000000a0  13 0a 07 6b 38 73 2d 61  70 70 12 08 6b 75 62 65  |...k8s-app..kube|
		000000b0  2d 64 6e 73 5a 1f 0a 11  70 6f 64 2d 74 65 6d 70  |-dnsZ...pod-temp|
		000000c0  6c 61 74 65 2d 68 61 73  68 12 0a 36 36 38 64 36  |late-hash..668d [truncated 309986 chars]
	 >
	I0210 12:23:17.112975    5644 system_pods.go:59] 12 kube-system pods found
	I0210 12:23:17.113498    5644 system_pods.go:61] "coredns-668d6bf9bc-w8rr9" [e45a37bf-e7da-4129-bb7e-8be7dbe93e09] Running
	I0210 12:23:17.113558    5644 system_pods.go:61] "etcd-multinode-032400" [26d4110f-9a39-48de-a433-567a75789be0] Running
	I0210 12:23:17.113558    5644 system_pods.go:61] "kindnet-c2mb8" [09de881b-fbc4-4a8f-b8d7-c46dd3f010ad] Running
	I0210 12:23:17.113558    5644 system_pods.go:61] "kindnet-jcmlf" [2b9d8f00-2dd6-42d2-a26d-7ddda6acb204] Running
	I0210 12:23:17.113558    5644 system_pods.go:61] "kindnet-tv6gk" [f85e1e17-24a8-4e55-bd17-95f9ce89e3ea] Running
	I0210 12:23:17.113595    5644 system_pods.go:61] "kube-apiserver-multinode-032400" [9e688aae-09da-4b5c-ba4d-de6aa64cb34e] Running
	I0210 12:23:17.113595    5644 system_pods.go:61] "kube-controller-manager-multinode-032400" [c3e50540-dd66-47a0-a433-622df59fb441] Running
	I0210 12:23:17.113595    5644 system_pods.go:61] "kube-proxy-rrh82" [9ad7f281-f022-4f3b-b206-39ce42713cf9] Running
	I0210 12:23:17.113595    5644 system_pods.go:61] "kube-proxy-tbtqd" [bdf8cb10-05be-460b-a9c6-bc51ea884268] Running
	I0210 12:23:17.113595    5644 system_pods.go:61] "kube-proxy-xltxj" [9a5e58bc-54b1-43b9-a889-0d50d435af83] Running
	I0210 12:23:17.113595    5644 system_pods.go:61] "kube-scheduler-multinode-032400" [cfbcbf37-65ce-4b21-9f26-18dafc6e4480] Running
	I0210 12:23:17.113595    5644 system_pods.go:61] "storage-provisioner" [c5a7f602-d41a-4d9c-8fb3-5cf1bb41aca0] Running
	I0210 12:23:17.113595    5644 system_pods.go:74] duration metric: took 3.7301832s to wait for pod list to return data ...
	I0210 12:23:17.113652    5644 default_sa.go:34] waiting for default service account to be created ...
	I0210 12:23:17.113751    5644 type.go:204] "Request Body" body=""
	I0210 12:23:17.113842    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/default/serviceaccounts
	I0210 12:23:17.113871    5644 round_trippers.go:476] Request Headers:
	I0210 12:23:17.113926    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:23:17.113926    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:23:17.117684    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:23:17.117684    5644 round_trippers.go:584] Response Headers:
	I0210 12:23:17.117767    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:23:17.117767    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:23:17.117767    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:23:17.117767    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:23:17.117767    5644 round_trippers.go:587]     Content-Length: 129
	I0210 12:23:17.117767    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:23:17 GMT
	I0210 12:23:17.117767    5644 round_trippers.go:587]     Audit-Id: e369e7f6-226a-465d-a481-2c18c67e8037
	I0210 12:23:17.117843    5644 type.go:204] "Response Body" body=<
		00000000  6b 38 73 00 0a 18 0a 02  76 31 12 12 53 65 72 76  |k8s.....v1..Serv|
		00000010  69 63 65 41 63 63 6f 75  6e 74 4c 69 73 74 12 5d  |iceAccountList.]|
		00000020  0a 0a 0a 00 12 04 31 39  38 35 1a 00 12 4f 0a 4d  |......1985...O.M|
		00000030  0a 07 64 65 66 61 75 6c  74 12 00 1a 07 64 65 66  |..default....def|
		00000040  61 75 6c 74 22 00 2a 24  34 61 64 66 62 64 33 35  |ault".*$4adfbd35|
		00000050  2d 66 38 62 36 2d 34 36  30 66 2d 38 38 65 39 2d  |-f8b6-460f-88e9-|
		00000060  65 37 34 63 34 36 62 30  32 66 30 65 32 03 33 33  |e74c46b02f0e2.33|
		00000070  36 38 00 42 08 08 90 d4  a7 bd 06 10 00 1a 00 22  |68.B..........."|
		00000080  00                                                |.|
	 >
	I0210 12:23:17.117907    5644 default_sa.go:45] found service account: "default"
	I0210 12:23:17.117907    5644 default_sa.go:55] duration metric: took 4.255ms for default service account to be created ...
	I0210 12:23:17.117907    5644 system_pods.go:116] waiting for k8s-apps to be running ...
	I0210 12:23:17.117974    5644 type.go:204] "Request Body" body=""
	I0210 12:23:17.118046    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods
	I0210 12:23:17.118046    5644 round_trippers.go:476] Request Headers:
	I0210 12:23:17.118046    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:23:17.118109    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:23:17.123079    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:23:17.123079    5644 round_trippers.go:584] Response Headers:
	I0210 12:23:17.123079    5644 round_trippers.go:587]     Audit-Id: 73f9716e-588b-4572-98bf-a3a721435868
	I0210 12:23:17.123079    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:23:17.123079    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:23:17.123079    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:23:17.123079    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:23:17.123079    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:23:17 GMT
	I0210 12:23:17.124466    5644 type.go:204] "Response Body" body=<
		00000000  6b 38 73 00 0a 0d 0a 02  76 31 12 07 50 6f 64 4c  |k8s.....v1..PodL|
		00000010  69 73 74 12 e8 eb 03 0a  0a 0a 00 12 04 31 39 38  |ist..........198|
		00000020  35 1a 00 12 c5 28 0a af  19 0a 18 63 6f 72 65 64  |5....(.....cored|
		00000030  6e 73 2d 36 36 38 64 36  62 66 39 62 63 2d 77 38  |ns-668d6bf9bc-w8|
		00000040  72 72 39 12 13 63 6f 72  65 64 6e 73 2d 36 36 38  |rr9..coredns-668|
		00000050  64 36 62 66 39 62 63 2d  1a 0b 6b 75 62 65 2d 73  |d6bf9bc-..kube-s|
		00000060  79 73 74 65 6d 22 00 2a  24 65 34 35 61 33 37 62  |ystem".*$e45a37b|
		00000070  66 2d 65 37 64 61 2d 34  31 32 39 2d 62 62 37 65  |f-e7da-4129-bb7e|
		00000080  2d 38 62 65 37 64 62 65  39 33 65 30 39 32 04 31  |-8be7dbe93e092.1|
		00000090  39 37 32 38 00 42 08 08  92 d4 a7 bd 06 10 00 5a  |9728.B.........Z|
		000000a0  13 0a 07 6b 38 73 2d 61  70 70 12 08 6b 75 62 65  |...k8s-app..kube|
		000000b0  2d 64 6e 73 5a 1f 0a 11  70 6f 64 2d 74 65 6d 70  |-dnsZ...pod-temp|
		000000c0  6c 61 74 65 2d 68 61 73  68 12 0a 36 36 38 64 36  |late-hash..668d [truncated 309986 chars]
	 >
	I0210 12:23:17.126124    5644 system_pods.go:86] 12 kube-system pods found
	I0210 12:23:17.126124    5644 system_pods.go:89] "coredns-668d6bf9bc-w8rr9" [e45a37bf-e7da-4129-bb7e-8be7dbe93e09] Running
	I0210 12:23:17.126124    5644 system_pods.go:89] "etcd-multinode-032400" [26d4110f-9a39-48de-a433-567a75789be0] Running
	I0210 12:23:17.126186    5644 system_pods.go:89] "kindnet-c2mb8" [09de881b-fbc4-4a8f-b8d7-c46dd3f010ad] Running
	I0210 12:23:17.126186    5644 system_pods.go:89] "kindnet-jcmlf" [2b9d8f00-2dd6-42d2-a26d-7ddda6acb204] Running
	I0210 12:23:17.126186    5644 system_pods.go:89] "kindnet-tv6gk" [f85e1e17-24a8-4e55-bd17-95f9ce89e3ea] Running
	I0210 12:23:17.126186    5644 system_pods.go:89] "kube-apiserver-multinode-032400" [9e688aae-09da-4b5c-ba4d-de6aa64cb34e] Running
	I0210 12:23:17.126186    5644 system_pods.go:89] "kube-controller-manager-multinode-032400" [c3e50540-dd66-47a0-a433-622df59fb441] Running
	I0210 12:23:17.126186    5644 system_pods.go:89] "kube-proxy-rrh82" [9ad7f281-f022-4f3b-b206-39ce42713cf9] Running
	I0210 12:23:17.126250    5644 system_pods.go:89] "kube-proxy-tbtqd" [bdf8cb10-05be-460b-a9c6-bc51ea884268] Running
	I0210 12:23:17.126250    5644 system_pods.go:89] "kube-proxy-xltxj" [9a5e58bc-54b1-43b9-a889-0d50d435af83] Running
	I0210 12:23:17.126250    5644 system_pods.go:89] "kube-scheduler-multinode-032400" [cfbcbf37-65ce-4b21-9f26-18dafc6e4480] Running
	I0210 12:23:17.126250    5644 system_pods.go:89] "storage-provisioner" [c5a7f602-d41a-4d9c-8fb3-5cf1bb41aca0] Running
	I0210 12:23:17.126250    5644 system_pods.go:126] duration metric: took 8.3427ms to wait for k8s-apps to be running ...
	I0210 12:23:17.126250    5644 system_svc.go:44] waiting for kubelet service to be running ....
	I0210 12:23:17.134037    5644 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0210 12:23:17.157445    5644 system_svc.go:56] duration metric: took 31.1949ms WaitForService to wait for kubelet
	I0210 12:23:17.157445    5644 kubeadm.go:582] duration metric: took 1m13.9102392s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0210 12:23:17.157445    5644 node_conditions.go:102] verifying NodePressure condition ...
	I0210 12:23:17.157445    5644 type.go:204] "Request Body" body=""
	I0210 12:23:17.157445    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes
	I0210 12:23:17.157445    5644 round_trippers.go:476] Request Headers:
	I0210 12:23:17.157445    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:23:17.157445    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:23:17.161324    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:23:17.161408    5644 round_trippers.go:584] Response Headers:
	I0210 12:23:17.161408    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:23:17.161408    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:23:17.161408    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:23:17 GMT
	I0210 12:23:17.161408    5644 round_trippers.go:587]     Audit-Id: ace23cdd-2fc5-4cbf-ad50-eb4ff866d35a
	I0210 12:23:17.161408    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:23:17.161408    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:23:17.161408    5644 type.go:204] "Response Body" body=<
		00000000  6b 38 73 00 0a 0e 0a 02  76 31 12 08 4e 6f 64 65  |k8s.....v1..Node|
		00000010  4c 69 73 74 12 ad 62 0a  0a 0a 00 12 04 31 39 38  |List..b......198|
		00000020  35 1a 00 12 d4 24 0a f8  11 0a 10 6d 75 6c 74 69  |5....$.....multi|
		00000030  6e 6f 64 65 2d 30 33 32  34 30 30 12 00 1a 00 22  |node-032400...."|
		00000040  00 2a 24 61 30 38 30 31  35 65 66 2d 65 35 32 30  |.*$a08015ef-e520|
		00000050  2d 34 31 63 62 2d 61 65  61 30 2d 31 64 39 63 38  |-41cb-aea0-1d9c8|
		00000060  31 65 30 31 62 32 36 32  04 31 39 33 35 38 00 42  |1e01b262.19358.B|
		00000070  08 08 86 d4 a7 bd 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000080  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000090  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		000000a0  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000b0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000c0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/ar [truncated 61299 chars]
	 >
	I0210 12:23:17.162065    5644 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0210 12:23:17.162065    5644 node_conditions.go:123] node cpu capacity is 2
	I0210 12:23:17.162065    5644 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0210 12:23:17.162065    5644 node_conditions.go:123] node cpu capacity is 2
	I0210 12:23:17.162065    5644 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0210 12:23:17.162065    5644 node_conditions.go:123] node cpu capacity is 2
	I0210 12:23:17.162065    5644 node_conditions.go:105] duration metric: took 4.6198ms to run NodePressure ...
	I0210 12:23:17.162065    5644 start.go:241] waiting for startup goroutines ...
	I0210 12:23:17.162065    5644 start.go:246] waiting for cluster config update ...
	I0210 12:23:17.162065    5644 start.go:255] writing updated cluster config ...
	I0210 12:23:17.168681    5644 out.go:201] 
	I0210 12:23:17.172786    5644 config.go:182] Loaded profile config "ha-335100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0210 12:23:17.183532    5644 config.go:182] Loaded profile config "multinode-032400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0210 12:23:17.183532    5644 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-032400\config.json ...
	I0210 12:23:17.189676    5644 out.go:177] * Starting "multinode-032400-m02" worker node in "multinode-032400" cluster
	I0210 12:23:17.192084    5644 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime docker
	I0210 12:23:17.192084    5644 cache.go:56] Caching tarball of preloaded images
	I0210 12:23:17.192084    5644 preload.go:172] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0210 12:23:17.192084    5644 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on docker
	I0210 12:23:17.192084    5644 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-032400\config.json ...
	I0210 12:23:17.195181    5644 start.go:360] acquireMachinesLock for multinode-032400-m02: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0210 12:23:17.195285    5644 start.go:364] duration metric: took 103.8µs to acquireMachinesLock for "multinode-032400-m02"
	I0210 12:23:17.195285    5644 start.go:96] Skipping create...Using existing machine configuration
	I0210 12:23:17.195285    5644 fix.go:54] fixHost starting: m02
	I0210 12:23:17.196188    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400-m02 ).state
	I0210 12:23:19.198848    5644 main.go:141] libmachine: [stdout =====>] : Off
	
	I0210 12:23:19.198924    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:23:19.198924    5644 fix.go:112] recreateIfNeeded on multinode-032400-m02: state=Stopped err=<nil>
	W0210 12:23:19.198924    5644 fix.go:138] unexpected machine state, will restart: <nil>
	I0210 12:23:19.209028    5644 out.go:177] * Restarting existing hyperv VM for "multinode-032400-m02" ...
	I0210 12:23:19.211192    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-032400-m02
	I0210 12:23:22.083127    5644 main.go:141] libmachine: [stdout =====>] : 
	I0210 12:23:22.083152    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:23:22.083201    5644 main.go:141] libmachine: Waiting for host to start...
	I0210 12:23:22.083201    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400-m02 ).state
	I0210 12:23:24.140742    5644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:23:24.140742    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:23:24.140841    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 12:23:26.441765    5644 main.go:141] libmachine: [stdout =====>] : 
	I0210 12:23:26.441765    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:23:27.442501    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400-m02 ).state
	I0210 12:23:29.431689    5644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:23:29.432150    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:23:29.432150    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 12:23:31.738501    5644 main.go:141] libmachine: [stdout =====>] : 
	I0210 12:23:31.738501    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:23:32.739670    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400-m02 ).state
	I0210 12:23:34.735303    5644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:23:34.735303    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:23:34.735303    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 12:23:37.045458    5644 main.go:141] libmachine: [stdout =====>] : 
	I0210 12:23:37.045458    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:23:38.046750    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400-m02 ).state
	I0210 12:23:40.057433    5644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:23:40.057433    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:23:40.057433    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 12:23:42.326568    5644 main.go:141] libmachine: [stdout =====>] : 
	I0210 12:23:42.326886    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:23:43.327458    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400-m02 ).state
	I0210 12:23:45.342394    5644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:23:45.342440    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:23:45.342440    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 12:23:47.878260    5644 main.go:141] libmachine: [stdout =====>] : 172.29.131.248
	
	I0210 12:23:47.878260    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:23:47.880151    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400-m02 ).state
	I0210 12:23:49.843034    5644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:23:49.843034    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:23:49.843034    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 12:23:52.188646    5644 main.go:141] libmachine: [stdout =====>] : 172.29.131.248
	
	I0210 12:23:52.188646    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:23:52.188646    5644 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-032400\config.json ...
	I0210 12:23:52.191023    5644 machine.go:93] provisionDockerMachine start ...
	I0210 12:23:52.191023    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400-m02 ).state
	I0210 12:23:54.195898    5644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:23:54.195898    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:23:54.195898    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 12:23:56.541564    5644 main.go:141] libmachine: [stdout =====>] : 172.29.131.248
	
	I0210 12:23:56.541639    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:23:56.545563    5644 main.go:141] libmachine: Using SSH client type: native
	I0210 12:23:56.545850    5644 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xde6ba0] 0xde96e0 <nil>  [] 0s} 172.29.131.248 22 <nil> <nil>}
	I0210 12:23:56.545850    5644 main.go:141] libmachine: About to run SSH command:
	hostname
	I0210 12:23:56.683557    5644 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0210 12:23:56.683557    5644 buildroot.go:166] provisioning hostname "multinode-032400-m02"
	I0210 12:23:56.683557    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400-m02 ).state
	I0210 12:23:58.663919    5644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:23:58.664349    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:23:58.664349    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 12:24:01.014069    5644 main.go:141] libmachine: [stdout =====>] : 172.29.131.248
	
	I0210 12:24:01.015071    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:24:01.019435    5644 main.go:141] libmachine: Using SSH client type: native
	I0210 12:24:01.020254    5644 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xde6ba0] 0xde96e0 <nil>  [] 0s} 172.29.131.248 22 <nil> <nil>}
	I0210 12:24:01.020254    5644 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-032400-m02 && echo "multinode-032400-m02" | sudo tee /etc/hostname
	I0210 12:24:01.189968    5644 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-032400-m02
	
	I0210 12:24:01.189968    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400-m02 ).state
	I0210 12:24:03.145362    5644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:24:03.145362    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:24:03.145362    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 12:24:05.477216    5644 main.go:141] libmachine: [stdout =====>] : 172.29.131.248
	
	I0210 12:24:05.477353    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:24:05.480851    5644 main.go:141] libmachine: Using SSH client type: native
	I0210 12:24:05.481493    5644 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xde6ba0] 0xde96e0 <nil>  [] 0s} 172.29.131.248 22 <nil> <nil>}
	I0210 12:24:05.481493    5644 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-032400-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-032400-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-032400-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0210 12:24:05.628216    5644 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0210 12:24:05.628216    5644 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0210 12:24:05.628216    5644 buildroot.go:174] setting up certificates
	I0210 12:24:05.628216    5644 provision.go:84] configureAuth start
	I0210 12:24:05.628216    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400-m02 ).state
	I0210 12:24:07.643337    5644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:24:07.643337    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:24:07.643337    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 12:24:09.948691    5644 main.go:141] libmachine: [stdout =====>] : 172.29.131.248
	
	I0210 12:24:09.948802    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:24:09.948802    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400-m02 ).state
	I0210 12:24:11.915658    5644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:24:11.915713    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:24:11.915713    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 12:24:14.276609    5644 main.go:141] libmachine: [stdout =====>] : 172.29.131.248
	
	I0210 12:24:14.277325    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:24:14.277325    5644 provision.go:143] copyHostCerts
	I0210 12:24:14.277496    5644 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0210 12:24:14.277708    5644 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0210 12:24:14.277708    5644 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0210 12:24:14.278095    5644 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0210 12:24:14.279089    5644 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0210 12:24:14.279293    5644 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0210 12:24:14.279370    5644 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0210 12:24:14.279596    5644 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1675 bytes)
	I0210 12:24:14.280557    5644 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0210 12:24:14.280904    5644 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0210 12:24:14.280904    5644 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0210 12:24:14.281246    5644 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0210 12:24:14.282054    5644 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-032400-m02 san=[127.0.0.1 172.29.131.248 localhost minikube multinode-032400-m02]
	I0210 12:24:14.642218    5644 provision.go:177] copyRemoteCerts
	I0210 12:24:14.650320    5644 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0210 12:24:14.650320    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400-m02 ).state
	I0210 12:24:16.615542    5644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:24:16.616001    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:24:16.616114    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 12:24:18.962964    5644 main.go:141] libmachine: [stdout =====>] : 172.29.131.248
	
	I0210 12:24:18.962964    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:24:18.963767    5644 sshutil.go:53] new ssh client: &{IP:172.29.131.248 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-032400-m02\id_rsa Username:docker}
	I0210 12:24:19.076516    5644 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.4261467s)
	I0210 12:24:19.076516    5644 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0210 12:24:19.076516    5644 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0210 12:24:19.122777    5644 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0210 12:24:19.123202    5644 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0210 12:24:19.166902    5644 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0210 12:24:19.166902    5644 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0210 12:24:19.212748    5644 provision.go:87] duration metric: took 13.5843137s to configureAuth
	I0210 12:24:19.212797    5644 buildroot.go:189] setting minikube options for container-runtime
	I0210 12:24:19.213726    5644 config.go:182] Loaded profile config "multinode-032400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0210 12:24:19.213829    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400-m02 ).state
	I0210 12:24:21.161961    5644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:24:21.161961    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:24:21.162095    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 12:24:23.516042    5644 main.go:141] libmachine: [stdout =====>] : 172.29.131.248
	
	I0210 12:24:23.516042    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:24:23.521196    5644 main.go:141] libmachine: Using SSH client type: native
	I0210 12:24:23.521696    5644 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xde6ba0] 0xde96e0 <nil>  [] 0s} 172.29.131.248 22 <nil> <nil>}
	I0210 12:24:23.521696    5644 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0210 12:24:23.664239    5644 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0210 12:24:23.664367    5644 buildroot.go:70] root file system type: tmpfs
	I0210 12:24:23.664464    5644 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0210 12:24:23.664464    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400-m02 ).state
	I0210 12:24:25.600640    5644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:24:25.600640    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:24:25.600640    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 12:24:27.960913    5644 main.go:141] libmachine: [stdout =====>] : 172.29.131.248
	
	I0210 12:24:27.960913    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:24:27.964881    5644 main.go:141] libmachine: Using SSH client type: native
	I0210 12:24:27.965496    5644 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xde6ba0] 0xde96e0 <nil>  [] 0s} 172.29.131.248 22 <nil> <nil>}
	I0210 12:24:27.965496    5644 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.29.129.181"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0210 12:24:28.125594    5644 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.29.129.181
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0210 12:24:28.125594    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400-m02 ).state
	I0210 12:24:30.098342    5644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:24:30.098342    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:24:30.098516    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 12:24:32.444001    5644 main.go:141] libmachine: [stdout =====>] : 172.29.131.248
	
	I0210 12:24:32.444001    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:24:32.447444    5644 main.go:141] libmachine: Using SSH client type: native
	I0210 12:24:32.448225    5644 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xde6ba0] 0xde96e0 <nil>  [] 0s} 172.29.131.248 22 <nil> <nil>}
	I0210 12:24:32.448300    5644 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0210 12:24:34.777404    5644 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0210 12:24:34.777404    5644 machine.go:96] duration metric: took 42.5859079s to provisionDockerMachine
	I0210 12:24:34.777951    5644 start.go:293] postStartSetup for "multinode-032400-m02" (driver="hyperv")
	I0210 12:24:34.777951    5644 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0210 12:24:34.786105    5644 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0210 12:24:34.786105    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400-m02 ).state
	I0210 12:24:36.697243    5644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:24:36.698259    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:24:36.698357    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 12:24:39.033911    5644 main.go:141] libmachine: [stdout =====>] : 172.29.131.248
	
	I0210 12:24:39.033911    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:24:39.033911    5644 sshutil.go:53] new ssh client: &{IP:172.29.131.248 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-032400-m02\id_rsa Username:docker}
	I0210 12:24:39.151164    5644 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.3649569s)
	I0210 12:24:39.160325    5644 ssh_runner.go:195] Run: cat /etc/os-release
	I0210 12:24:39.169851    5644 command_runner.go:130] > NAME=Buildroot
	I0210 12:24:39.169851    5644 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0210 12:24:39.169851    5644 command_runner.go:130] > ID=buildroot
	I0210 12:24:39.169851    5644 command_runner.go:130] > VERSION_ID=2023.02.9
	I0210 12:24:39.169851    5644 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0210 12:24:39.169851    5644 info.go:137] Remote host: Buildroot 2023.02.9
	I0210 12:24:39.169851    5644 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0210 12:24:39.170468    5644 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0210 12:24:39.170614    5644 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\117642.pem -> 117642.pem in /etc/ssl/certs
	I0210 12:24:39.170614    5644 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\117642.pem -> /etc/ssl/certs/117642.pem
	I0210 12:24:39.183845    5644 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0210 12:24:39.202537    5644 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\117642.pem --> /etc/ssl/certs/117642.pem (1708 bytes)
	I0210 12:24:39.249074    5644 start.go:296] duration metric: took 4.471073s for postStartSetup
	I0210 12:24:39.249074    5644 fix.go:56] duration metric: took 1m22.0528783s for fixHost
	I0210 12:24:39.249074    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400-m02 ).state
	I0210 12:24:41.226969    5644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:24:41.227236    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:24:41.227236    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 12:24:43.586713    5644 main.go:141] libmachine: [stdout =====>] : 172.29.131.248
	
	I0210 12:24:43.586713    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:24:43.592978    5644 main.go:141] libmachine: Using SSH client type: native
	I0210 12:24:43.593664    5644 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xde6ba0] 0xde96e0 <nil>  [] 0s} 172.29.131.248 22 <nil> <nil>}
	I0210 12:24:43.593664    5644 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0210 12:24:43.727060    5644 main.go:141] libmachine: SSH cmd err, output: <nil>: 1739190283.741836729
	
	I0210 12:24:43.727060    5644 fix.go:216] guest clock: 1739190283.741836729
	I0210 12:24:43.727060    5644 fix.go:229] Guest: 2025-02-10 12:24:43.741836729 +0000 UTC Remote: 2025-02-10 12:24:39.2490741 +0000 UTC m=+281.750914501 (delta=4.492762629s)
	I0210 12:24:43.727060    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400-m02 ).state
	I0210 12:24:45.724935    5644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:24:45.724935    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:24:45.725736    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 12:24:48.106037    5644 main.go:141] libmachine: [stdout =====>] : 172.29.131.248
	
	I0210 12:24:48.106037    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:24:48.109940    5644 main.go:141] libmachine: Using SSH client type: native
	I0210 12:24:48.110418    5644 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xde6ba0] 0xde96e0 <nil>  [] 0s} 172.29.131.248 22 <nil> <nil>}
	I0210 12:24:48.110418    5644 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1739190283
	I0210 12:24:48.254581    5644 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Feb 10 12:24:43 UTC 2025
	
	I0210 12:24:48.254581    5644 fix.go:236] clock set: Mon Feb 10 12:24:43 UTC 2025
	 (err=<nil>)
	I0210 12:24:48.254581    5644 start.go:83] releasing machines lock for "multinode-032400-m02", held for 1m31.0582855s
	I0210 12:24:48.254581    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400-m02 ).state
	I0210 12:24:50.249813    5644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:24:50.249813    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:24:50.250519    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 12:24:52.657496    5644 main.go:141] libmachine: [stdout =====>] : 172.29.131.248
	
	I0210 12:24:52.657496    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:24:52.660492    5644 out.go:177] * Found network options:
	I0210 12:24:52.662856    5644 out.go:177]   - NO_PROXY=172.29.129.181
	W0210 12:24:52.665365    5644 proxy.go:119] fail to check proxy env: Error ip not in block
	I0210 12:24:52.667392    5644 out.go:177]   - NO_PROXY=172.29.129.181
	W0210 12:24:52.669599    5644 proxy.go:119] fail to check proxy env: Error ip not in block
	W0210 12:24:52.670616    5644 proxy.go:119] fail to check proxy env: Error ip not in block
	I0210 12:24:52.672214    5644 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0210 12:24:52.672214    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400-m02 ).state
	I0210 12:24:52.679577    5644 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0210 12:24:52.679577    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400-m02 ).state
	I0210 12:24:54.731498    5644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:24:54.731498    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:24:54.731578    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 12:24:54.742483    5644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:24:54.742483    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:24:54.742483    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 12:24:57.164963    5644 main.go:141] libmachine: [stdout =====>] : 172.29.131.248
	
	I0210 12:24:57.165122    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:24:57.165464    5644 sshutil.go:53] new ssh client: &{IP:172.29.131.248 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-032400-m02\id_rsa Username:docker}
	I0210 12:24:57.187790    5644 main.go:141] libmachine: [stdout =====>] : 172.29.131.248
	
	I0210 12:24:57.188041    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:24:57.188374    5644 sshutil.go:53] new ssh client: &{IP:172.29.131.248 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-032400-m02\id_rsa Username:docker}
	I0210 12:24:57.258869    5644 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	I0210 12:24:57.258869    5644 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.5866035s)
	W0210 12:24:57.258869    5644 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
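
Note the failure mode here: the runner invoked curl.exe inside the Linux guest, and bash returned status 127 (command not found), so the reachability check is inconclusive rather than a proven network failure. A sketch of a probe that separates those two outcomes, assuming curl is on the guest's PATH and taking the binary name as a parameter precisely because of the curl vs curl.exe mismatch above:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func probeRegistry(curlBin string) error {
	err := exec.Command("sh", "-c", curlBin+" -sS -m 2 https://registry.k8s.io/").Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 127 {
		// 127 means the shell never found the binary, as with curl.exe above.
		return fmt.Errorf("%s not found in guest; reachability unknown", curlBin)
	}
	return err // nil: the registry answered within the 2s budget
}

func main() {
	if err := probeRegistry("curl"); err != nil {
		fmt.Println("! Failing to connect to https://registry.k8s.io/:", err)
	}
}
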
	I0210 12:24:57.278221    5644 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0210 12:24:57.278860    5644 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.5992312s)
	W0210 12:24:57.278860    5644 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0210 12:24:57.287302    5644 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0210 12:24:57.317125    5644 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0210 12:24:57.317125    5644 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0210 12:24:57.317229    5644 start.go:495] detecting cgroup driver to use...
	I0210 12:24:57.317229    5644 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0210 12:24:57.350868    5644 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0210 12:24:57.358506    5644 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	W0210 12:24:57.375543    5644 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0210 12:24:57.375605    5644 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0210 12:24:57.395962    5644 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0210 12:24:57.415892    5644 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0210 12:24:57.423871    5644 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0210 12:24:57.449590    5644 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0210 12:24:57.481964    5644 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0210 12:24:57.509276    5644 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0210 12:24:57.536583    5644 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0210 12:24:57.563168    5644 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0210 12:24:57.593200    5644 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0210 12:24:57.620991    5644 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
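
The run of ssh_runner commands above rewrites /etc/containerd/config.toml with a series of in-place sed edits (pause image, cgroupfs instead of SystemdCgroup, runc.v2 shim, CNI conf_dir). A sketch that collects a few of those literal edits into one loop, run on the node itself where minikube drives them over SSH:

package main

import (
	"log"
	"os/exec"
)

func main() {
	edits := []string{
		`sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml`,
		`sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml`,
		`sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml`,
		`sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml`,
	}
	for _, e := range edits {
		if out, err := exec.Command("sh", "-c", e).CombinedOutput(); err != nil {
			log.Fatalf("%s: %v\n%s", e, err, out)
		}
	}
}
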
	I0210 12:24:57.653609    5644 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0210 12:24:57.670590    5644 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0210 12:24:57.670590    5644 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0210 12:24:57.682043    5644 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0210 12:24:57.717472    5644 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
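
The netfilter sequence above is a check-then-load pattern: the sysctl read fails with status 255 because /proc/sys/net/bridge does not exist until br_netfilter is loaded, so minikube loads the module and then forces ip_forward on. A compact sketch of the same pattern:

package main

import (
	"fmt"
	"os/exec"
)

func run(cmd string) error { return exec.Command("sh", "-c", cmd).Run() }

func main() {
	if err := run("sudo sysctl net.bridge.bridge-nf-call-iptables"); err != nil {
		// The key being absent usually means the module is not loaded yet.
		fmt.Println("bridge-nf-call-iptables absent, loading br_netfilter")
		_ = run("sudo modprobe br_netfilter")
	}
	_ = run(`sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"`)
}
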
	I0210 12:24:57.740341    5644 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 12:24:57.920866    5644 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0210 12:24:57.952087    5644 start.go:495] detecting cgroup driver to use...
	I0210 12:24:57.959342    5644 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0210 12:24:57.980137    5644 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0210 12:24:57.980137    5644 command_runner.go:130] > [Unit]
	I0210 12:24:57.980137    5644 command_runner.go:130] > Description=Docker Application Container Engine
	I0210 12:24:57.980137    5644 command_runner.go:130] > Documentation=https://docs.docker.com
	I0210 12:24:57.980137    5644 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0210 12:24:57.980137    5644 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0210 12:24:57.980137    5644 command_runner.go:130] > StartLimitBurst=3
	I0210 12:24:57.980137    5644 command_runner.go:130] > StartLimitIntervalSec=60
	I0210 12:24:57.980137    5644 command_runner.go:130] > [Service]
	I0210 12:24:57.980137    5644 command_runner.go:130] > Type=notify
	I0210 12:24:57.980137    5644 command_runner.go:130] > Restart=on-failure
	I0210 12:24:57.980137    5644 command_runner.go:130] > Environment=NO_PROXY=172.29.129.181
	I0210 12:24:57.980137    5644 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0210 12:24:57.980137    5644 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0210 12:24:57.980137    5644 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0210 12:24:57.980137    5644 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0210 12:24:57.980137    5644 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0210 12:24:57.980137    5644 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0210 12:24:57.980137    5644 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0210 12:24:57.980137    5644 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0210 12:24:57.980137    5644 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0210 12:24:57.980137    5644 command_runner.go:130] > ExecStart=
	I0210 12:24:57.980137    5644 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0210 12:24:57.980137    5644 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0210 12:24:57.980137    5644 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0210 12:24:57.980137    5644 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0210 12:24:57.980137    5644 command_runner.go:130] > LimitNOFILE=infinity
	I0210 12:24:57.980137    5644 command_runner.go:130] > LimitNPROC=infinity
	I0210 12:24:57.980137    5644 command_runner.go:130] > LimitCORE=infinity
	I0210 12:24:57.980137    5644 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0210 12:24:57.980137    5644 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0210 12:24:57.980137    5644 command_runner.go:130] > TasksMax=infinity
	I0210 12:24:57.980137    5644 command_runner.go:130] > TimeoutStartSec=0
	I0210 12:24:57.980137    5644 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0210 12:24:57.980137    5644 command_runner.go:130] > Delegate=yes
	I0210 12:24:57.980137    5644 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0210 12:24:57.980137    5644 command_runner.go:130] > KillMode=process
	I0210 12:24:57.980137    5644 command_runner.go:130] > [Install]
	I0210 12:24:57.980137    5644 command_runner.go:130] > WantedBy=multi-user.target
	I0210 12:24:57.989468    5644 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0210 12:24:58.016992    5644 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0210 12:24:58.055601    5644 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0210 12:24:58.089433    5644 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0210 12:24:58.125959    5644 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0210 12:24:58.187663    5644 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0210 12:24:58.211671    5644 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0210 12:24:58.245861    5644 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0210 12:24:58.256595    5644 ssh_runner.go:195] Run: which cri-dockerd
	I0210 12:24:58.262484    5644 command_runner.go:130] > /usr/bin/cri-dockerd
	I0210 12:24:58.269967    5644 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0210 12:24:58.287861    5644 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0210 12:24:58.326343    5644 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0210 12:24:58.514534    5644 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0210 12:24:58.720409    5644 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0210 12:24:58.720409    5644 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
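
Here minikube scp's a 130-byte daemon.json into the guest to pin Docker's cgroup driver to cgroupfs. The exact file contents are not shown in the log; the sketch below assumes the standard dockerd "exec-opts" key, which is the documented knob for the cgroup driver:

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

func main() {
	cfg := map[string]any{
		"exec-opts": []string{"native.cgroupdriver=cgroupfs"},
	}
	b, _ := json.MarshalIndent(cfg, "", "  ")
	// minikube writes this to /etc/docker/daemon.json, then runs
	// systemctl daemon-reload && systemctl restart docker (next lines).
	if err := os.WriteFile("/etc/docker/daemon.json", b, 0o644); err != nil {
		fmt.Println(err)
	}
}
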
	I0210 12:24:58.767420    5644 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 12:24:58.952672    5644 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0210 12:25:01.611230    5644 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.658486s)
	I0210 12:25:01.619503    5644 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0210 12:25:01.650291    5644 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0210 12:25:01.683544    5644 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0210 12:25:01.871163    5644 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0210 12:25:02.069288    5644 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 12:25:02.255320    5644 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0210 12:25:02.293599    5644 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0210 12:25:02.326826    5644 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 12:25:02.527366    5644 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0210 12:25:02.634096    5644 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0210 12:25:02.643249    5644 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0210 12:25:02.651571    5644 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0210 12:25:02.651689    5644 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0210 12:25:02.651689    5644 command_runner.go:130] > Device: 0,22	Inode: 853         Links: 1
	I0210 12:25:02.651689    5644 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0210 12:25:02.651689    5644 command_runner.go:130] > Access: 2025-02-10 12:25:02.569124993 +0000
	I0210 12:25:02.651754    5644 command_runner.go:130] > Modify: 2025-02-10 12:25:02.569124993 +0000
	I0210 12:25:02.651754    5644 command_runner.go:130] > Change: 2025-02-10 12:25:02.573125009 +0000
	I0210 12:25:02.651754    5644 command_runner.go:130] >  Birth: -
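
"Will wait 60s for socket path" above is a stat-until-exists loop on /var/run/cri-dockerd.sock. A sketch of that wait, with the 500ms poll interval being an assumption (the log only shows the 60s budget and the stat probe):

package main

import (
	"fmt"
	"os"
	"time"
)

func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
			return nil // the socket exists, as in the stat output above
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	fmt.Println(waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second))
}
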
	I0210 12:25:02.651903    5644 start.go:563] Will wait 60s for crictl version
	I0210 12:25:02.663192    5644 ssh_runner.go:195] Run: which crictl
	I0210 12:25:02.669653    5644 command_runner.go:130] > /usr/bin/crictl
	I0210 12:25:02.678491    5644 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0210 12:25:02.728521    5644 command_runner.go:130] > Version:  0.1.0
	I0210 12:25:02.728521    5644 command_runner.go:130] > RuntimeName:  docker
	I0210 12:25:02.728521    5644 command_runner.go:130] > RuntimeVersion:  27.4.0
	I0210 12:25:02.728653    5644 command_runner.go:130] > RuntimeApiVersion:  v1
	I0210 12:25:02.728653    5644 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.4.0
	RuntimeApiVersion:  v1
	I0210 12:25:02.735209    5644 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0210 12:25:02.768431    5644 command_runner.go:130] > 27.4.0
	I0210 12:25:02.778520    5644 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0210 12:25:02.809258    5644 command_runner.go:130] > 27.4.0
	I0210 12:25:02.814420    5644 out.go:235] * Preparing Kubernetes v1.32.1 on Docker 27.4.0 ...
	I0210 12:25:02.816674    5644 out.go:177]   - env NO_PROXY=172.29.129.181
	I0210 12:25:02.818497    5644 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0210 12:25:02.822632    5644 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0210 12:25:02.822632    5644 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0210 12:25:02.822632    5644 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0210 12:25:02.822632    5644 ip.go:211] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:8b:81:3d Flags:up|broadcast|multicast|running}
	I0210 12:25:02.825115    5644 ip.go:214] interface addr: fe80::b840:883f:e0df:bdfd/64
	I0210 12:25:02.825115    5644 ip.go:214] interface addr: 172.29.128.1/20
	I0210 12:25:02.835018    5644 ssh_runner.go:195] Run: grep 172.29.128.1	host.minikube.internal$ /etc/hosts
	I0210 12:25:02.841330    5644 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.29.128.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
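
The one-liner above makes the host.minikube.internal entry idempotent: grep -v strips any existing line for the name, the fresh "IP<tab>name" pair is appended, and the result is copied back over /etc/hosts. A sketch that builds the same bash pipeline (printf replaces the literal-tab echo for clarity):

package main

import (
	"fmt"
	"os/exec"
)

func setHostsEntry(ip, name string) error {
	script := fmt.Sprintf(
		`{ grep -v $'\t%s$' /etc/hosts; printf '%%s\t%%s\n' %s %s; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts`,
		name, ip, name)
	return exec.Command("bash", "-c", script).Run()
}

func main() {
	fmt.Println(setHostsEntry("172.29.128.1", "host.minikube.internal"))
}
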
	I0210 12:25:02.862035    5644 mustload.go:65] Loading cluster: multinode-032400
	I0210 12:25:02.862699    5644 config.go:182] Loaded profile config "multinode-032400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0210 12:25:02.862900    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400 ).state
	I0210 12:25:04.804561    5644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:25:04.804561    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:25:04.804561    5644 host.go:66] Checking if "multinode-032400" exists ...
	I0210 12:25:04.804561    5644 certs.go:68] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-032400 for IP: 172.29.131.248
	I0210 12:25:04.804561    5644 certs.go:194] generating shared ca certs ...
	I0210 12:25:04.804561    5644 certs.go:226] acquiring lock for ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 12:25:04.806473    5644 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0210 12:25:04.807010    5644 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0210 12:25:04.807260    5644 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0210 12:25:04.807509    5644 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0210 12:25:04.807719    5644 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0210 12:25:04.807827    5644 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0210 12:25:04.808418    5644 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\11764.pem (1338 bytes)
	W0210 12:25:04.808812    5644 certs.go:480] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\11764_empty.pem, impossibly tiny 0 bytes
	I0210 12:25:04.808920    5644 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0210 12:25:04.809297    5644 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0210 12:25:04.809661    5644 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0210 12:25:04.809942    5644 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0210 12:25:04.810493    5644 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\117642.pem (1708 bytes)
	I0210 12:25:04.810867    5644 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0210 12:25:04.811091    5644 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\11764.pem -> /usr/share/ca-certificates/11764.pem
	I0210 12:25:04.811257    5644 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\117642.pem -> /usr/share/ca-certificates/117642.pem
	I0210 12:25:04.811534    5644 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0210 12:25:04.861560    5644 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0210 12:25:04.910911    5644 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0210 12:25:04.959161    5644 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0210 12:25:05.004438    5644 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0210 12:25:05.048411    5644 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\11764.pem --> /usr/share/ca-certificates/11764.pem (1338 bytes)
	I0210 12:25:05.091405    5644 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\117642.pem --> /usr/share/ca-certificates/117642.pem (1708 bytes)
	I0210 12:25:05.144508    5644 ssh_runner.go:195] Run: openssl version
	I0210 12:25:05.152921    5644 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0210 12:25:05.161065    5644 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0210 12:25:05.188558    5644 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0210 12:25:05.195514    5644 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Feb 10 10:24 /usr/share/ca-certificates/minikubeCA.pem
	I0210 12:25:05.195514    5644 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb 10 10:24 /usr/share/ca-certificates/minikubeCA.pem
	I0210 12:25:05.204912    5644 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0210 12:25:05.212912    5644 command_runner.go:130] > b5213941
	I0210 12:25:05.220878    5644 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0210 12:25:05.248708    5644 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11764.pem && ln -fs /usr/share/ca-certificates/11764.pem /etc/ssl/certs/11764.pem"
	I0210 12:25:05.276086    5644 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11764.pem
	I0210 12:25:05.281881    5644 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Feb 10 10:40 /usr/share/ca-certificates/11764.pem
	I0210 12:25:05.281881    5644 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb 10 10:40 /usr/share/ca-certificates/11764.pem
	I0210 12:25:05.290375    5644 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11764.pem
	I0210 12:25:05.299808    5644 command_runner.go:130] > 51391683
	I0210 12:25:05.306556    5644 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11764.pem /etc/ssl/certs/51391683.0"
	I0210 12:25:05.334484    5644 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/117642.pem && ln -fs /usr/share/ca-certificates/117642.pem /etc/ssl/certs/117642.pem"
	I0210 12:25:05.360534    5644 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/117642.pem
	I0210 12:25:05.367955    5644 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Feb 10 10:40 /usr/share/ca-certificates/117642.pem
	I0210 12:25:05.367955    5644 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb 10 10:40 /usr/share/ca-certificates/117642.pem
	I0210 12:25:05.376081    5644 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/117642.pem
	I0210 12:25:05.384140    5644 command_runner.go:130] > 3ec20f2e
	I0210 12:25:05.392100    5644 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/117642.pem /etc/ssl/certs/3ec20f2e.0"
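
The three-certificate loop above follows the OpenSSL trust-directory convention: OpenSSL resolves CAs in /etc/ssl/certs by subject hash, so each installed PEM gets a <hash>.0 symlink computed with "openssl x509 -hash -noout". A sketch of one iteration, assuming the PEM is already in place under /usr/share/ca-certificates:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func installCert(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941 in the log above
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	return exec.Command("sudo", "ln", "-fs", pem, link).Run()
}

func main() {
	fmt.Println(installCert("/usr/share/ca-certificates/minikubeCA.pem"))
}
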
	I0210 12:25:05.418949    5644 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0210 12:25:05.425292    5644 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0210 12:25:05.425483    5644 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0210 12:25:05.425483    5644 kubeadm.go:934] updating node {m02 172.29.131.248 8443 v1.32.1 docker false true} ...
	I0210 12:25:05.425483    5644 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-032400-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.29.131.248
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:multinode-032400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
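
Only the node name and node IP vary per worker in the kubelet drop-in shown above; the rest is fixed per Kubernetes version. A sketch rendering that unit with text/template, using the values from this run:

package main

import (
	"os"
	"text/template"
)

const unit = `[Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubeVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unit))
	_ = t.Execute(os.Stdout, map[string]string{
		"KubeVersion": "v1.32.1",
		"NodeName":    "multinode-032400-m02",
		"NodeIP":      "172.29.131.248",
	})
}
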
	I0210 12:25:05.433682    5644 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0210 12:25:05.450443    5644 command_runner.go:130] > kubeadm
	I0210 12:25:05.450443    5644 command_runner.go:130] > kubectl
	I0210 12:25:05.450443    5644 command_runner.go:130] > kubelet
	I0210 12:25:05.450443    5644 binaries.go:44] Found k8s binaries, skipping transfer
	I0210 12:25:05.458809    5644 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0210 12:25:05.475912    5644 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0210 12:25:05.506609    5644 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0210 12:25:05.546168    5644 ssh_runner.go:195] Run: grep 172.29.129.181	control-plane.minikube.internal$ /etc/hosts
	I0210 12:25:05.551914    5644 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.29.129.181	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0210 12:25:05.584294    5644 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 12:25:05.782369    5644 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0210 12:25:05.808315    5644 host.go:66] Checking if "multinode-032400" exists ...
	I0210 12:25:05.809125    5644 start.go:317] joinCluster: &{Name:multinode-032400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:multinode-032400 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.29.129.181 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.29.131.248 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.29.129.10 Port:0 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false i
stio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Disa
bleMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 12:25:05.809248    5644 start.go:330] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:172.29.131.248 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0210 12:25:05.809357    5644 host.go:66] Checking if "multinode-032400-m02" exists ...
	I0210 12:25:05.809819    5644 mustload.go:65] Loading cluster: multinode-032400
	I0210 12:25:05.810372    5644 config.go:182] Loaded profile config "multinode-032400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0210 12:25:05.810807    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400 ).state
	I0210 12:25:07.823836    5644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:25:07.824375    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:25:07.824375    5644 host.go:66] Checking if "multinode-032400" exists ...
	I0210 12:25:07.824860    5644 api_server.go:166] Checking apiserver status ...
	I0210 12:25:07.834305    5644 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 12:25:07.834397    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400 ).state
	I0210 12:25:09.807469    5644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:25:09.807469    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:25:09.807469    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400 ).networkadapters[0]).ipaddresses[0]
	I0210 12:25:12.158950    5644 main.go:141] libmachine: [stdout =====>] : 172.29.129.181
	
	I0210 12:25:12.159907    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:25:12.159907    5644 sshutil.go:53] new ssh client: &{IP:172.29.129.181 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-032400\id_rsa Username:docker}
	I0210 12:25:12.280626    5644 command_runner.go:130] > 2008
	I0210 12:25:12.280626    5644 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (4.4462345s)
	I0210 12:25:12.290347    5644 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2008/cgroup
	W0210 12:25:12.310104    5644 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2008/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0210 12:25:12.318612    5644 ssh_runner.go:195] Run: ls
	I0210 12:25:12.325212    5644 api_server.go:253] Checking apiserver healthz at https://172.29.129.181:8443/healthz ...
	I0210 12:25:12.332198    5644 api_server.go:279] https://172.29.129.181:8443/healthz returned 200:
	ok
	I0210 12:25:12.339200    5644 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl drain multinode-032400-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data
	I0210 12:25:12.513450    5644 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-tv6gk, kube-system/kube-proxy-xltxj
	I0210 12:25:15.532801    5644 command_runner.go:130] > node/multinode-032400-m02 cordoned
	I0210 12:25:15.532801    5644 command_runner.go:130] > pod "busybox-58667487b6-4g8jw" has DeletionTimestamp older than 1 seconds, skipping
	I0210 12:25:15.532801    5644 command_runner.go:130] > node/multinode-032400-m02 drained
	I0210 12:25:15.532912    5644 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl drain multinode-032400-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data: (3.1936772s)
	I0210 12:25:15.532912    5644 node.go:128] successfully drained node "multinode-032400-m02"
	I0210 12:25:15.532912    5644 ssh_runner.go:195] Run: /bin/bash -c "KUBECONFIG=/var/lib/minikube/kubeconfig sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --force --ignore-preflight-errors=all --cri-socket=unix:///var/run/cri-dockerd.sock"
	I0210 12:25:15.533097    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400-m02 ).state
	I0210 12:25:17.481645    5644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:25:17.482575    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:25:17.482732    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 12:25:19.820235    5644 main.go:141] libmachine: [stdout =====>] : 172.29.131.248
	
	I0210 12:25:19.820235    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:25:19.820235    5644 sshutil.go:53] new ssh client: &{IP:172.29.131.248 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-032400-m02\id_rsa Username:docker}
	I0210 12:25:20.249336    5644 command_runner.go:130] ! W0210 12:25:20.264957    1673 removeetcdmember.go:106] [reset] No kubeadm config, using etcd pod spec to get data directory
	I0210 12:25:20.442489    5644 command_runner.go:130] ! W0210 12:25:20.458057    1673 cleanupnode.go:105] [reset] Failed to remove containers: failed to stop running pod f267a1e310221fa8fbfbcd980a9fc281a6f751038e4108cbe85aa524b948addc: rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod "busybox-58667487b6-4g8jw_default" network: cni config uninitialized
	I0210 12:25:20.460040    5644 command_runner.go:130] > [preflight] Running pre-flight checks
	I0210 12:25:20.460118    5644 command_runner.go:130] > [reset] Deleted contents of the etcd data directory: /var/lib/etcd
	I0210 12:25:20.460118    5644 command_runner.go:130] > [reset] Stopping the kubelet service
	I0210 12:25:20.460118    5644 command_runner.go:130] > [reset] Unmounting mounted directories in "/var/lib/kubelet"
	I0210 12:25:20.460118    5644 command_runner.go:130] > [reset] Deleting contents of directories: [/etc/kubernetes/manifests /var/lib/kubelet /etc/kubernetes/pki]
	I0210 12:25:20.460215    5644 command_runner.go:130] > [reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/super-admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
	I0210 12:25:20.460215    5644 command_runner.go:130] > The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
	I0210 12:25:20.460254    5644 command_runner.go:130] > The reset process does not reset or clean up iptables rules or IPVS tables.
	I0210 12:25:20.460254    5644 command_runner.go:130] > If you wish to reset iptables, you must do so manually by using the "iptables" command.
	I0210 12:25:20.460254    5644 command_runner.go:130] > If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
	I0210 12:25:20.460254    5644 command_runner.go:130] > to reset your system's IPVS tables.
	I0210 12:25:20.460254    5644 command_runner.go:130] > The reset process does not clean your kubeconfig files and you must remove them manually.
	I0210 12:25:20.460254    5644 command_runner.go:130] > Please, check the contents of the $HOME/.kube/config file.
	I0210 12:25:20.460254    5644 ssh_runner.go:235] Completed: /bin/bash -c "KUBECONFIG=/var/lib/minikube/kubeconfig sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --force --ignore-preflight-errors=all --cri-socket=unix:///var/run/cri-dockerd.sock": (4.9272875s)
	I0210 12:25:20.460254    5644 node.go:155] successfully reset node "multinode-032400-m02"
	I0210 12:25:20.461538    5644 loader.go:402] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0210 12:25:20.461844    5644 kapi.go:59] client config for multinode-032400: &rest.Config{Host:"https://172.29.129.181:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-032400\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-032400\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CA
Data:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2a2d300), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0210 12:25:20.463341    5644 cert_rotation.go:140] Starting client certificate rotation controller
	I0210 12:25:20.463809    5644 type.go:296] "Request Body" body=<
		00000000  6b 38 73 00 0a 13 0a 02  76 31 12 0d 44 65 6c 65  |k8s.....v1..Dele|
		00000010  74 65 4f 70 74 69 6f 6e  73 12 00 1a 00 22 00     |teOptions....".|
	 >
	I0210 12:25:20.463886    5644 round_trippers.go:470] DELETE https://172.29.129.181:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:25:20.463969    5644 round_trippers.go:476] Request Headers:
	I0210 12:25:20.463984    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:25:20.464008    5644 round_trippers.go:480]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:25:20.464008    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:25:20.481499    5644 round_trippers.go:581] Response Status: 200 OK in 17 milliseconds
	I0210 12:25:20.481499    5644 round_trippers.go:584] Response Headers:
	I0210 12:25:20.481499    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:25:20.481499    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:25:20.481499    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:25:20.481499    5644 round_trippers.go:587]     Content-Length: 120
	I0210 12:25:20.481499    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:25:20 GMT
	I0210 12:25:20.481499    5644 round_trippers.go:587]     Audit-Id: 66d6adfd-6ae5-4dd1-8efe-5fffcc792a37
	I0210 12:25:20.481499    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:25:20.481499    5644 type.go:296] "Response Body" body=<
		00000000  6b 38 73 00 0a 0c 0a 02  76 31 12 06 53 74 61 74  |k8s.....v1..Stat|
		00000010  75 73 12 60 0a 06 0a 00  12 00 1a 00 12 07 53 75  |us.`..........Su|
		00000020  63 63 65 73 73 1a 00 22  00 2a 47 0a 14 6d 75 6c  |ccess..".*G..mul|
		00000030  74 69 6e 6f 64 65 2d 30  33 32 34 30 30 2d 6d 30  |tinode-032400-m0|
		00000040  32 12 00 1a 05 6e 6f 64  65 73 28 00 32 24 62 30  |2....nodes(.2$b0|
		00000050  35 36 31 63 32 32 2d 64  62 66 32 2d 34 32 61 30  |561c22-dbf2-42a0|
		00000060  2d 62 64 66 33 2d 34 65  30 61 62 37 61 39 61 66  |-bdf3-4e0ab7a9af|
		00000070  30 65 30 00 1a 00 22 00                           |0e0...".|
	 >
	I0210 12:25:20.481499    5644 node.go:180] successfully deleted node "multinode-032400-m02"
	I0210 12:25:20.481499    5644 start.go:334] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:172.29.131.248 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:false Worker:true}
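
The removal just completed is a three-step flow: drain the stale worker, kubeadm-reset it, then delete its Node object so a clean rejoin is possible. The sketch below strings together the literal commands from the log; in reality the drain runs on the control plane, the reset on the worker (each over its own SSH session), and the final delete is issued here with kubectl rather than the protobuf DELETE the log shows:

package main

import (
	"log"
	"os/exec"
)

func main() {
	node := "multinode-032400-m02"
	steps := []string{
		"sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl drain " + node +
			" --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data",
		"sudo env PATH=/var/lib/minikube/binaries/v1.32.1:$PATH kubeadm reset --force --ignore-preflight-errors=all --cri-socket=unix:///var/run/cri-dockerd.sock",
		"sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl delete node " + node,
	}
	for _, s := range steps {
		if out, err := exec.Command("sh", "-c", s).CombinedOutput(); err != nil {
			log.Fatalf("%s: %v\n%s", s, err, out)
		}
	}
}
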
	I0210 12:25:20.481499    5644 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0210 12:25:20.481499    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400 ).state
	I0210 12:25:22.402208    5644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:25:22.402208    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:25:22.402305    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400 ).networkadapters[0]).ipaddresses[0]
	I0210 12:25:24.737098    5644 main.go:141] libmachine: [stdout =====>] : 172.29.129.181
	
	I0210 12:25:24.737098    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:25:24.737322    5644 sshutil.go:53] new ssh client: &{IP:172.29.129.181 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-032400\id_rsa Username:docker}
	I0210 12:25:25.136064    5644 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 9oxbx9.e90xsnn2uus4mtns --discovery-token-ca-cert-hash sha256:1b8cb99781302ec691f777094c36ac43bdecc74e7ca118c5fbf40794f47c93c3 
	I0210 12:25:25.137199    5644 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm token create --print-join-command --ttl=0": (4.655607s)
	I0210 12:25:25.137281    5644 start.go:343] trying to join worker node "m02" to cluster: &{Name:m02 IP:172.29.131.248 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0210 12:25:25.137338    5644 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 9oxbx9.e90xsnn2uus4mtns --discovery-token-ca-cert-hash sha256:1b8cb99781302ec691f777094c36ac43bdecc74e7ca118c5fbf40794f47c93c3 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-032400-m02"
	I0210 12:25:25.319012    5644 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0210 12:25:27.176195    5644 command_runner.go:130] > [preflight] Running pre-flight checks
	I0210 12:25:27.176288    5644 command_runner.go:130] > [preflight] Reading configuration from the "kubeadm-config" ConfigMap in namespace "kube-system"...
	I0210 12:25:27.176288    5644 command_runner.go:130] > [preflight] Use 'kubeadm init phase upload-config --config your-config.yaml' to re-upload it.
	I0210 12:25:27.176288    5644 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0210 12:25:27.176288    5644 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0210 12:25:27.176288    5644 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0210 12:25:27.176362    5644 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0210 12:25:27.176362    5644 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 1.001636368s
	I0210 12:25:27.176362    5644 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap
	I0210 12:25:27.176362    5644 command_runner.go:130] > This node has joined the cluster:
	I0210 12:25:27.176362    5644 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0210 12:25:27.176362    5644 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0210 12:25:27.176362    5644 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0210 12:25:27.176435    5644 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 9oxbx9.e90xsnn2uus4mtns --discovery-token-ca-cert-hash sha256:1b8cb99781302ec691f777094c36ac43bdecc74e7ca118c5fbf40794f47c93c3 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-032400-m02": (2.0390169s)
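
The rejoin above has two halves: the control plane prints a fresh join command via "kubeadm token create --print-join-command --ttl=0" (ttl=0 makes the token non-expiring), and the worker then runs that command verbatim with the extra flags appended. A sketch of the command assembly:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// On the control plane: prints "kubeadm join control-plane.minikube.internal:8443 --token ... --discovery-token-ca-cert-hash sha256:..."
	out, err := exec.Command("sh", "-c",
		`sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm token create --print-join-command --ttl=0`).Output()
	if err != nil {
		panic(err)
	}
	join := `sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" ` + strings.TrimSpace(string(out)) +
		" --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-032400-m02"
	fmt.Println(join) // minikube executes this string on the joining worker over SSH
}
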
	I0210 12:25:27.176435    5644 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0210 12:25:27.405789    5644 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0210 12:25:27.606422    5644 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-032400-m02 minikube.k8s.io/updated_at=2025_02_10T12_25_27_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=a597502568cd649748018b4cfeb698a4b8b36160 minikube.k8s.io/name=multinode-032400 minikube.k8s.io/primary=false
	I0210 12:25:27.731391    5644 command_runner.go:130] > node/multinode-032400-m02 labeled
	I0210 12:25:27.731471    5644 start.go:319] duration metric: took 21.9221029s to joinCluster
	I0210 12:25:27.731679    5644 start.go:235] Will wait 6m0s for node &{Name:m02 IP:172.29.131.248 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0210 12:25:27.731834    5644 config.go:182] Loaded profile config "multinode-032400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0210 12:25:27.735110    5644 out.go:177] * Verifying Kubernetes components...
	I0210 12:25:27.745144    5644 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 12:25:27.940918    5644 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0210 12:25:27.966711    5644 loader.go:402] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0210 12:25:27.967147    5644 kapi.go:59] client config for multinode-032400: &rest.Config{Host:"https://172.29.129.181:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-032400\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-032400\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CA
Data:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2a2d300), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0210 12:25:27.967869    5644 node_ready.go:35] waiting up to 6m0s for node "multinode-032400-m02" to be "Ready" ...
	I0210 12:25:27.967999    5644 type.go:168] "Request Body" body=""
	I0210 12:25:27.968087    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:25:27.968087    5644 round_trippers.go:476] Request Headers:
	I0210 12:25:27.968087    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:25:27.968139    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:25:27.972126    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:25:27.972126    5644 round_trippers.go:584] Response Headers:
	I0210 12:25:27.972126    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:25:27.972126    5644 round_trippers.go:587]     Content-Length: 3271
	I0210 12:25:27.972126    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:25:27 GMT
	I0210 12:25:27.972126    5644 round_trippers.go:587]     Audit-Id: ec8f6606-8dc0-4cc9-bb6f-d3d7d465f067
	I0210 12:25:27.972126    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:25:27.972126    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:25:27.972215    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:25:27.972300    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 b0 19 0a 9d 0c 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 31 62 61 64  31 61 35 35 2d 65 61 64  |".*$1bad1a55-ead|
		00000040  30 2d 34 66 35 62 2d 61  36 33 62 2d 66 66 66 30  |0-4f5b-a63b-fff0|
		00000050  61 32 35 34 66 33 32 62  32 04 32 31 32 35 38 00  |a254f32b2.21258.|
		00000060  42 08 08 b6 e0 a7 bd 06  10 00 5a 20 0a 17 62 65  |B.........Z ..be|
		00000070  74 61 2e 6b 75 62 65 72  6e 65 74 65 73 2e 69 6f  |ta.kubernetes.io|
		00000080  2f 61 72 63 68 12 05 61  6d 64 36 34 5a 1e 0a 15  |/arch..amd64Z...|
		00000090  62 65 74 61 2e 6b 75 62  65 72 6e 65 74 65 73 2e  |beta.kubernetes.|
		000000a0  69 6f 2f 6f 73 12 05 6c  69 6e 75 78 5a 1b 0a 12  |io/os..linuxZ...|
		000000b0  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 61 72  |kubernetes.io/ar|
		000000c0  63 68 12 05 61 6d 64 36  34 5a 2e 0a 16 6b 75 62  |ch..amd64Z...ku [truncated 15162 chars]
	 >
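
The round_trippers traces that follow are a readiness poll: GET /api/v1/nodes/multinode-032400-m02 roughly every 500ms, up to the 6m0s budget, until the node reports the Ready condition. The same check expressed with client-go (an assumption for illustration; the kubeconfig path below is a placeholder, where minikube builds its client from the kubeconfig shown earlier):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "multinode-032400-m02", metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for node to be Ready")
}
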
	I0210 12:25:28.468102    5644 type.go:168] "Request Body" body=""
	I0210 12:25:28.468102    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:25:28.468102    5644 round_trippers.go:476] Request Headers:
	I0210 12:25:28.468102    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:25:28.468102    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:25:28.472832    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:25:28.472977    5644 round_trippers.go:584] Response Headers:
	I0210 12:25:28.472977    5644 round_trippers.go:587]     Content-Length: 3271
	I0210 12:25:28.472977    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:25:28 GMT
	I0210 12:25:28.472977    5644 round_trippers.go:587]     Audit-Id: e52cb391-a49f-459b-8c42-c0be76a90d4c
	I0210 12:25:28.472977    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:25:28.472977    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:25:28.472977    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:25:28.472977    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:25:28.473258    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 b0 19 0a 9d 0c 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 31 62 61 64  31 61 35 35 2d 65 61 64  |".*$1bad1a55-ead|
		00000040  30 2d 34 66 35 62 2d 61  36 33 62 2d 66 66 66 30  |0-4f5b-a63b-fff0|
		00000050  61 32 35 34 66 33 32 62  32 04 32 31 32 35 38 00  |a254f32b2.21258.|
		00000060  42 08 08 b6 e0 a7 bd 06  10 00 5a 20 0a 17 62 65  |B.........Z ..be|
		00000070  74 61 2e 6b 75 62 65 72  6e 65 74 65 73 2e 69 6f  |ta.kubernetes.io|
		00000080  2f 61 72 63 68 12 05 61  6d 64 36 34 5a 1e 0a 15  |/arch..amd64Z...|
		00000090  62 65 74 61 2e 6b 75 62  65 72 6e 65 74 65 73 2e  |beta.kubernetes.|
		000000a0  69 6f 2f 6f 73 12 05 6c  69 6e 75 78 5a 1b 0a 12  |io/os..linuxZ...|
		000000b0  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 61 72  |kubernetes.io/ar|
		000000c0  63 68 12 05 61 6d 64 36  34 5a 2e 0a 16 6b 75 62  |ch..amd64Z...ku [truncated 15162 chars]
	 >
	I0210 12:25:28.968500    5644 type.go:168] "Request Body" body=""
	I0210 12:25:28.968994    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:25:28.968994    5644 round_trippers.go:476] Request Headers:
	I0210 12:25:28.968994    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:25:28.968994    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:25:28.985765    5644 round_trippers.go:581] Response Status: 200 OK in 16 milliseconds
	I0210 12:25:28.985765    5644 round_trippers.go:584] Response Headers:
	I0210 12:25:28.985765    5644 round_trippers.go:587]     Audit-Id: ca971a1a-fc40-47c5-a0ce-1edffed8631a
	I0210 12:25:28.985765    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:25:28.985765    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:25:28.985765    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:25:28.985765    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:25:28.985765    5644 round_trippers.go:587]     Content-Length: 3271
	I0210 12:25:28.985765    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:25:29 GMT
	I0210 12:25:28.985765    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 b0 19 0a 9d 0c 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 31 62 61 64  31 61 35 35 2d 65 61 64  |".*$1bad1a55-ead|
		00000040  30 2d 34 66 35 62 2d 61  36 33 62 2d 66 66 66 30  |0-4f5b-a63b-fff0|
		00000050  61 32 35 34 66 33 32 62  32 04 32 31 32 35 38 00  |a254f32b2.21258.|
		00000060  42 08 08 b6 e0 a7 bd 06  10 00 5a 20 0a 17 62 65  |B.........Z ..be|
		00000070  74 61 2e 6b 75 62 65 72  6e 65 74 65 73 2e 69 6f  |ta.kubernetes.io|
		00000080  2f 61 72 63 68 12 05 61  6d 64 36 34 5a 1e 0a 15  |/arch..amd64Z...|
		00000090  62 65 74 61 2e 6b 75 62  65 72 6e 65 74 65 73 2e  |beta.kubernetes.|
		000000a0  69 6f 2f 6f 73 12 05 6c  69 6e 75 78 5a 1b 0a 12  |io/os..linuxZ...|
		000000b0  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 61 72  |kubernetes.io/ar|
		000000c0  63 68 12 05 61 6d 64 36  34 5a 2e 0a 16 6b 75 62  |ch..amd64Z...ku [truncated 15162 chars]
	 >
	I0210 12:25:29.468760    5644 type.go:168] "Request Body" body=""
	I0210 12:25:29.469059    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:25:29.469059    5644 round_trippers.go:476] Request Headers:
	I0210 12:25:29.469059    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:25:29.469121    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:25:29.475962    5644 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0210 12:25:29.475962    5644 round_trippers.go:584] Response Headers:
	I0210 12:25:29.475962    5644 round_trippers.go:587]     Content-Length: 3271
	I0210 12:25:29.475962    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:25:29 GMT
	I0210 12:25:29.475962    5644 round_trippers.go:587]     Audit-Id: bfa2cc92-71f4-4a96-a592-67e401efbe79
	I0210 12:25:29.475962    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:25:29.475962    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:25:29.475962    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:25:29.475962    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:25:29.475962    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 b0 19 0a 9d 0c 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 31 62 61 64  31 61 35 35 2d 65 61 64  |".*$1bad1a55-ead|
		00000040  30 2d 34 66 35 62 2d 61  36 33 62 2d 66 66 66 30  |0-4f5b-a63b-fff0|
		00000050  61 32 35 34 66 33 32 62  32 04 32 31 32 35 38 00  |a254f32b2.21258.|
		00000060  42 08 08 b6 e0 a7 bd 06  10 00 5a 20 0a 17 62 65  |B.........Z ..be|
		00000070  74 61 2e 6b 75 62 65 72  6e 65 74 65 73 2e 69 6f  |ta.kubernetes.io|
		00000080  2f 61 72 63 68 12 05 61  6d 64 36 34 5a 1e 0a 15  |/arch..amd64Z...|
		00000090  62 65 74 61 2e 6b 75 62  65 72 6e 65 74 65 73 2e  |beta.kubernetes.|
		000000a0  69 6f 2f 6f 73 12 05 6c  69 6e 75 78 5a 1b 0a 12  |io/os..linuxZ...|
		000000b0  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 61 72  |kubernetes.io/ar|
		000000c0  63 68 12 05 61 6d 64 36  34 5a 2e 0a 16 6b 75 62  |ch..amd64Z...ku [truncated 15162 chars]
	 >
	I0210 12:25:29.969144    5644 type.go:168] "Request Body" body=""
	I0210 12:25:29.969144    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:25:29.969144    5644 round_trippers.go:476] Request Headers:
	I0210 12:25:29.969144    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:25:29.969144    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:25:29.973489    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:25:29.973489    5644 round_trippers.go:584] Response Headers:
	I0210 12:25:29.973489    5644 round_trippers.go:587]     Audit-Id: f5059816-4599-4ab2-93f8-779836c763dc
	I0210 12:25:29.973489    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:25:29.973489    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:25:29.973489    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:25:29.973489    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:25:29.973489    5644 round_trippers.go:587]     Content-Length: 3271
	I0210 12:25:29.973489    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:25:29 GMT
	I0210 12:25:29.973489    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 b0 19 0a 9d 0c 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 31 62 61 64  31 61 35 35 2d 65 61 64  |".*$1bad1a55-ead|
		00000040  30 2d 34 66 35 62 2d 61  36 33 62 2d 66 66 66 30  |0-4f5b-a63b-fff0|
		00000050  61 32 35 34 66 33 32 62  32 04 32 31 32 35 38 00  |a254f32b2.21258.|
		00000060  42 08 08 b6 e0 a7 bd 06  10 00 5a 20 0a 17 62 65  |B.........Z ..be|
		00000070  74 61 2e 6b 75 62 65 72  6e 65 74 65 73 2e 69 6f  |ta.kubernetes.io|
		00000080  2f 61 72 63 68 12 05 61  6d 64 36 34 5a 1e 0a 15  |/arch..amd64Z...|
		00000090  62 65 74 61 2e 6b 75 62  65 72 6e 65 74 65 73 2e  |beta.kubernetes.|
		000000a0  69 6f 2f 6f 73 12 05 6c  69 6e 75 78 5a 1b 0a 12  |io/os..linuxZ...|
		000000b0  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 61 72  |kubernetes.io/ar|
		000000c0  63 68 12 05 61 6d 64 36  34 5a 2e 0a 16 6b 75 62  |ch..amd64Z...ku [truncated 15162 chars]
	 >
	I0210 12:25:29.973489    5644 node_ready.go:53] node "multinode-032400-m02" has status "Ready":"False"
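	The repeating GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400-m02 records are a readiness poll: roughly every 500 ms the client re-fetches the Node object and checks its Ready condition, and node_ready.go logs "Ready":"False" until it flips. A minimal client-go sketch of such a loop follows; it is not minikube's actual node_ready implementation, and the kubeconfig path and timeout are assumptions for the sketch.

	    package main

	    import (
	    	"context"
	    	"fmt"
	    	"time"

	    	corev1 "k8s.io/api/core/v1"
	    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	    	"k8s.io/client-go/kubernetes"
	    	"k8s.io/client-go/tools/clientcmd"
	    )

	    // waitNodeReady re-fetches the Node until its Ready condition reports
	    // True, sleeping ~500ms between attempts like the cadence in the log.
	    func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	    	deadline := time.Now().Add(timeout)
	    	for time.Now().Before(deadline) {
	    		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	    		if err == nil {
	    			for _, c := range node.Status.Conditions {
	    				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
	    					return nil
	    				}
	    			}
	    		}
	    		time.Sleep(500 * time.Millisecond)
	    	}
	    	return fmt.Errorf("node %q not Ready after %s", name, timeout)
	    }

	    func main() {
	    	// Kubeconfig path is an assumption (default ~/.kube/config).
	    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	    	if err != nil {
	    		panic(err)
	    	}
	    	cs, err := kubernetes.NewForConfig(cfg)
	    	if err != nil {
	    		panic(err)
	    	}
	    	if err := waitNodeReady(context.Background(), cs, "multinode-032400-m02", 4*time.Minute); err != nil {
	    		panic(err)
	    	}
	    	fmt.Println("node is Ready")
	    }

	Each loop iteration corresponds to one request/response pair in the log; the growing Content-Length values later in the log (3271, then 3341, then 3642) reflect status updates landing on the node object as it comes up.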
	I0210 12:25:30.468684    5644 type.go:168] "Request Body" body=""
	I0210 12:25:30.468684    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:25:30.468684    5644 round_trippers.go:476] Request Headers:
	I0210 12:25:30.468684    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:25:30.468684    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:25:30.472638    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:25:30.472754    5644 round_trippers.go:584] Response Headers:
	I0210 12:25:30.472754    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:25:30 GMT
	I0210 12:25:30.472754    5644 round_trippers.go:587]     Audit-Id: 385ffe75-f36e-4aad-9ec3-ec5568bf9e6a
	I0210 12:25:30.472754    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:25:30.472754    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:25:30.472754    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:25:30.472754    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:25:30.472852    5644 round_trippers.go:587]     Content-Length: 3271
	I0210 12:25:30.473247    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 b0 19 0a 9d 0c 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 31 62 61 64  31 61 35 35 2d 65 61 64  |".*$1bad1a55-ead|
		00000040  30 2d 34 66 35 62 2d 61  36 33 62 2d 66 66 66 30  |0-4f5b-a63b-fff0|
		00000050  61 32 35 34 66 33 32 62  32 04 32 31 32 35 38 00  |a254f32b2.21258.|
		00000060  42 08 08 b6 e0 a7 bd 06  10 00 5a 20 0a 17 62 65  |B.........Z ..be|
		00000070  74 61 2e 6b 75 62 65 72  6e 65 74 65 73 2e 69 6f  |ta.kubernetes.io|
		00000080  2f 61 72 63 68 12 05 61  6d 64 36 34 5a 1e 0a 15  |/arch..amd64Z...|
		00000090  62 65 74 61 2e 6b 75 62  65 72 6e 65 74 65 73 2e  |beta.kubernetes.|
		000000a0  69 6f 2f 6f 73 12 05 6c  69 6e 75 78 5a 1b 0a 12  |io/os..linuxZ...|
		000000b0  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 61 72  |kubernetes.io/ar|
		000000c0  63 68 12 05 61 6d 64 36  34 5a 2e 0a 16 6b 75 62  |ch..amd64Z...ku [truncated 15162 chars]
	 >
	I0210 12:25:30.968272    5644 type.go:168] "Request Body" body=""
	I0210 12:25:30.968272    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:25:30.968272    5644 round_trippers.go:476] Request Headers:
	I0210 12:25:30.968272    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:25:30.968272    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:25:30.972445    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:25:30.972738    5644 round_trippers.go:584] Response Headers:
	I0210 12:25:30.972738    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:25:30.972738    5644 round_trippers.go:587]     Content-Length: 3271
	I0210 12:25:30.972738    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:25:30 GMT
	I0210 12:25:30.972738    5644 round_trippers.go:587]     Audit-Id: 5598775f-26e3-4733-876e-f6e15bb479de
	I0210 12:25:30.972738    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:25:30.972788    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:25:30.972788    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:25:30.973003    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 b0 19 0a 9d 0c 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 31 62 61 64  31 61 35 35 2d 65 61 64  |".*$1bad1a55-ead|
		00000040  30 2d 34 66 35 62 2d 61  36 33 62 2d 66 66 66 30  |0-4f5b-a63b-fff0|
		00000050  61 32 35 34 66 33 32 62  32 04 32 31 32 35 38 00  |a254f32b2.21258.|
		00000060  42 08 08 b6 e0 a7 bd 06  10 00 5a 20 0a 17 62 65  |B.........Z ..be|
		00000070  74 61 2e 6b 75 62 65 72  6e 65 74 65 73 2e 69 6f  |ta.kubernetes.io|
		00000080  2f 61 72 63 68 12 05 61  6d 64 36 34 5a 1e 0a 15  |/arch..amd64Z...|
		00000090  62 65 74 61 2e 6b 75 62  65 72 6e 65 74 65 73 2e  |beta.kubernetes.|
		000000a0  69 6f 2f 6f 73 12 05 6c  69 6e 75 78 5a 1b 0a 12  |io/os..linuxZ...|
		000000b0  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 61 72  |kubernetes.io/ar|
		000000c0  63 68 12 05 61 6d 64 36  34 5a 2e 0a 16 6b 75 62  |ch..amd64Z...ku [truncated 15162 chars]
	 >
	I0210 12:25:31.468362    5644 type.go:168] "Request Body" body=""
	I0210 12:25:31.468362    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:25:31.468362    5644 round_trippers.go:476] Request Headers:
	I0210 12:25:31.468362    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:25:31.468362    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:25:31.472812    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:25:31.472906    5644 round_trippers.go:584] Response Headers:
	I0210 12:25:31.472906    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:25:31.472906    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:25:31.472906    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:25:31.472906    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:25:31.472906    5644 round_trippers.go:587]     Content-Length: 3271
	I0210 12:25:31.472906    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:25:31 GMT
	I0210 12:25:31.472906    5644 round_trippers.go:587]     Audit-Id: 3feb5aa6-1208-49f6-af89-8ad0cfe8d7ee
	I0210 12:25:31.473159    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 b0 19 0a 9d 0c 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 31 62 61 64  31 61 35 35 2d 65 61 64  |".*$1bad1a55-ead|
		00000040  30 2d 34 66 35 62 2d 61  36 33 62 2d 66 66 66 30  |0-4f5b-a63b-fff0|
		00000050  61 32 35 34 66 33 32 62  32 04 32 31 32 35 38 00  |a254f32b2.21258.|
		00000060  42 08 08 b6 e0 a7 bd 06  10 00 5a 20 0a 17 62 65  |B.........Z ..be|
		00000070  74 61 2e 6b 75 62 65 72  6e 65 74 65 73 2e 69 6f  |ta.kubernetes.io|
		00000080  2f 61 72 63 68 12 05 61  6d 64 36 34 5a 1e 0a 15  |/arch..amd64Z...|
		00000090  62 65 74 61 2e 6b 75 62  65 72 6e 65 74 65 73 2e  |beta.kubernetes.|
		000000a0  69 6f 2f 6f 73 12 05 6c  69 6e 75 78 5a 1b 0a 12  |io/os..linuxZ...|
		000000b0  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 61 72  |kubernetes.io/ar|
		000000c0  63 68 12 05 61 6d 64 36  34 5a 2e 0a 16 6b 75 62  |ch..amd64Z...ku [truncated 15162 chars]
	 >
	I0210 12:25:31.968731    5644 type.go:168] "Request Body" body=""
	I0210 12:25:31.968731    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:25:31.968731    5644 round_trippers.go:476] Request Headers:
	I0210 12:25:31.968731    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:25:31.968731    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:25:31.976180    5644 round_trippers.go:581] Response Status: 200 OK in 7 milliseconds
	I0210 12:25:31.976180    5644 round_trippers.go:584] Response Headers:
	I0210 12:25:31.976180    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:25:31.976180    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:25:31.976180    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:25:31.976707    5644 round_trippers.go:587]     Content-Length: 3341
	I0210 12:25:31.976707    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:25:31 GMT
	I0210 12:25:31.976707    5644 round_trippers.go:587]     Audit-Id: ff255298-da3f-474b-8ceb-a26688f92f1a
	I0210 12:25:31.976707    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:25:31.976793    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 f6 19 0a ab 0c 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 31 62 61 64  31 61 35 35 2d 65 61 64  |".*$1bad1a55-ead|
		00000040  30 2d 34 66 35 62 2d 61  36 33 62 2d 66 66 66 30  |0-4f5b-a63b-fff0|
		00000050  61 32 35 34 66 33 32 62  32 04 32 31 34 39 38 00  |a254f32b2.21498.|
		00000060  42 08 08 b6 e0 a7 bd 06  10 00 5a 20 0a 17 62 65  |B.........Z ..be|
		00000070  74 61 2e 6b 75 62 65 72  6e 65 74 65 73 2e 69 6f  |ta.kubernetes.io|
		00000080  2f 61 72 63 68 12 05 61  6d 64 36 34 5a 1e 0a 15  |/arch..amd64Z...|
		00000090  62 65 74 61 2e 6b 75 62  65 72 6e 65 74 65 73 2e  |beta.kubernetes.|
		000000a0  69 6f 2f 6f 73 12 05 6c  69 6e 75 78 5a 1b 0a 12  |io/os..linuxZ...|
		000000b0  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 61 72  |kubernetes.io/ar|
		000000c0  63 68 12 05 61 6d 64 36  34 5a 2e 0a 16 6b 75 62  |ch..amd64Z...ku [truncated 15484 chars]
	 >
	I0210 12:25:31.976910    5644 node_ready.go:53] node "multinode-032400-m02" has status "Ready":"False"
	I0210 12:25:32.468260    5644 type.go:168] "Request Body" body=""
	I0210 12:25:32.468260    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:25:32.468260    5644 round_trippers.go:476] Request Headers:
	I0210 12:25:32.468260    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:25:32.468260    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:25:32.473134    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:25:32.473237    5644 round_trippers.go:584] Response Headers:
	I0210 12:25:32.473237    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:25:32 GMT
	I0210 12:25:32.473237    5644 round_trippers.go:587]     Audit-Id: 444e57c4-3937-426d-8509-866e57f75bc1
	I0210 12:25:32.473237    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:25:32.473237    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:25:32.473237    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:25:32.473237    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:25:32.473237    5644 round_trippers.go:587]     Content-Length: 3341
	I0210 12:25:32.473503    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 f6 19 0a ab 0c 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 31 62 61 64  31 61 35 35 2d 65 61 64  |".*$1bad1a55-ead|
		00000040  30 2d 34 66 35 62 2d 61  36 33 62 2d 66 66 66 30  |0-4f5b-a63b-fff0|
		00000050  61 32 35 34 66 33 32 62  32 04 32 31 34 39 38 00  |a254f32b2.21498.|
		00000060  42 08 08 b6 e0 a7 bd 06  10 00 5a 20 0a 17 62 65  |B.........Z ..be|
		00000070  74 61 2e 6b 75 62 65 72  6e 65 74 65 73 2e 69 6f  |ta.kubernetes.io|
		00000080  2f 61 72 63 68 12 05 61  6d 64 36 34 5a 1e 0a 15  |/arch..amd64Z...|
		00000090  62 65 74 61 2e 6b 75 62  65 72 6e 65 74 65 73 2e  |beta.kubernetes.|
		000000a0  69 6f 2f 6f 73 12 05 6c  69 6e 75 78 5a 1b 0a 12  |io/os..linuxZ...|
		000000b0  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 61 72  |kubernetes.io/ar|
		000000c0  63 68 12 05 61 6d 64 36  34 5a 2e 0a 16 6b 75 62  |ch..amd64Z...ku [truncated 15484 chars]
	 >
	I0210 12:25:32.968917    5644 type.go:168] "Request Body" body=""
	I0210 12:25:32.968917    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:25:32.968917    5644 round_trippers.go:476] Request Headers:
	I0210 12:25:32.968917    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:25:32.968917    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:25:32.975300    5644 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0210 12:25:32.975300    5644 round_trippers.go:584] Response Headers:
	I0210 12:25:32.975300    5644 round_trippers.go:587]     Content-Length: 3341
	I0210 12:25:32.975300    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:25:32 GMT
	I0210 12:25:32.975300    5644 round_trippers.go:587]     Audit-Id: 4d3d5a16-c3fe-4bd3-9fff-775b24aa34af
	I0210 12:25:32.975300    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:25:32.975300    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:25:32.975300    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:25:32.975300    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:25:32.975300    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 f6 19 0a ab 0c 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 31 62 61 64  31 61 35 35 2d 65 61 64  |".*$1bad1a55-ead|
		00000040  30 2d 34 66 35 62 2d 61  36 33 62 2d 66 66 66 30  |0-4f5b-a63b-fff0|
		00000050  61 32 35 34 66 33 32 62  32 04 32 31 34 39 38 00  |a254f32b2.21498.|
		00000060  42 08 08 b6 e0 a7 bd 06  10 00 5a 20 0a 17 62 65  |B.........Z ..be|
		00000070  74 61 2e 6b 75 62 65 72  6e 65 74 65 73 2e 69 6f  |ta.kubernetes.io|
		00000080  2f 61 72 63 68 12 05 61  6d 64 36 34 5a 1e 0a 15  |/arch..amd64Z...|
		00000090  62 65 74 61 2e 6b 75 62  65 72 6e 65 74 65 73 2e  |beta.kubernetes.|
		000000a0  69 6f 2f 6f 73 12 05 6c  69 6e 75 78 5a 1b 0a 12  |io/os..linuxZ...|
		000000b0  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 61 72  |kubernetes.io/ar|
		000000c0  63 68 12 05 61 6d 64 36  34 5a 2e 0a 16 6b 75 62  |ch..amd64Z...ku [truncated 15484 chars]
	 >
	I0210 12:25:33.469901    5644 type.go:168] "Request Body" body=""
	I0210 12:25:33.469901    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:25:33.469901    5644 round_trippers.go:476] Request Headers:
	I0210 12:25:33.469901    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:25:33.469901    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:25:33.473734    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:25:33.473734    5644 round_trippers.go:584] Response Headers:
	I0210 12:25:33.473734    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:25:33 GMT
	I0210 12:25:33.473734    5644 round_trippers.go:587]     Audit-Id: 99326657-b3e1-4317-bb43-38d0f74eef4a
	I0210 12:25:33.473734    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:25:33.473734    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:25:33.473734    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:25:33.473734    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:25:33.473734    5644 round_trippers.go:587]     Content-Length: 3341
	I0210 12:25:33.473734    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 f6 19 0a ab 0c 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 31 62 61 64  31 61 35 35 2d 65 61 64  |".*$1bad1a55-ead|
		00000040  30 2d 34 66 35 62 2d 61  36 33 62 2d 66 66 66 30  |0-4f5b-a63b-fff0|
		00000050  61 32 35 34 66 33 32 62  32 04 32 31 34 39 38 00  |a254f32b2.21498.|
		00000060  42 08 08 b6 e0 a7 bd 06  10 00 5a 20 0a 17 62 65  |B.........Z ..be|
		00000070  74 61 2e 6b 75 62 65 72  6e 65 74 65 73 2e 69 6f  |ta.kubernetes.io|
		00000080  2f 61 72 63 68 12 05 61  6d 64 36 34 5a 1e 0a 15  |/arch..amd64Z...|
		00000090  62 65 74 61 2e 6b 75 62  65 72 6e 65 74 65 73 2e  |beta.kubernetes.|
		000000a0  69 6f 2f 6f 73 12 05 6c  69 6e 75 78 5a 1b 0a 12  |io/os..linuxZ...|
		000000b0  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 61 72  |kubernetes.io/ar|
		000000c0  63 68 12 05 61 6d 64 36  34 5a 2e 0a 16 6b 75 62  |ch..amd64Z...ku [truncated 15484 chars]
	 >
	I0210 12:25:33.968429    5644 type.go:168] "Request Body" body=""
	I0210 12:25:33.968429    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:25:33.968429    5644 round_trippers.go:476] Request Headers:
	I0210 12:25:33.968429    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:25:33.968429    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:25:33.972810    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:25:33.972810    5644 round_trippers.go:584] Response Headers:
	I0210 12:25:33.972810    5644 round_trippers.go:587]     Audit-Id: 5968d8f6-fee4-48de-bcfb-5a3477685d7e
	I0210 12:25:33.972810    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:25:33.972810    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:25:33.972810    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:25:33.972810    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:25:33.972810    5644 round_trippers.go:587]     Content-Length: 3341
	I0210 12:25:33.972810    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:25:33 GMT
	I0210 12:25:33.972810    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 f6 19 0a ab 0c 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 31 62 61 64  31 61 35 35 2d 65 61 64  |".*$1bad1a55-ead|
		00000040  30 2d 34 66 35 62 2d 61  36 33 62 2d 66 66 66 30  |0-4f5b-a63b-fff0|
		00000050  61 32 35 34 66 33 32 62  32 04 32 31 34 39 38 00  |a254f32b2.21498.|
		00000060  42 08 08 b6 e0 a7 bd 06  10 00 5a 20 0a 17 62 65  |B.........Z ..be|
		00000070  74 61 2e 6b 75 62 65 72  6e 65 74 65 73 2e 69 6f  |ta.kubernetes.io|
		00000080  2f 61 72 63 68 12 05 61  6d 64 36 34 5a 1e 0a 15  |/arch..amd64Z...|
		00000090  62 65 74 61 2e 6b 75 62  65 72 6e 65 74 65 73 2e  |beta.kubernetes.|
		000000a0  69 6f 2f 6f 73 12 05 6c  69 6e 75 78 5a 1b 0a 12  |io/os..linuxZ...|
		000000b0  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 61 72  |kubernetes.io/ar|
		000000c0  63 68 12 05 61 6d 64 36  34 5a 2e 0a 16 6b 75 62  |ch..amd64Z...ku [truncated 15484 chars]
	 >
	I0210 12:25:34.468157    5644 type.go:168] "Request Body" body=""
	I0210 12:25:34.468157    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:25:34.468157    5644 round_trippers.go:476] Request Headers:
	I0210 12:25:34.468157    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:25:34.468157    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:25:34.471700    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:25:34.471700    5644 round_trippers.go:584] Response Headers:
	I0210 12:25:34.472642    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:25:34 GMT
	I0210 12:25:34.472642    5644 round_trippers.go:587]     Audit-Id: e0ea628a-a218-4ea5-a8ac-b3c767955de4
	I0210 12:25:34.472642    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:25:34.472642    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:25:34.472642    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:25:34.472642    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:25:34.472642    5644 round_trippers.go:587]     Content-Length: 3341
	I0210 12:25:34.472900    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 f6 19 0a ab 0c 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 31 62 61 64  31 61 35 35 2d 65 61 64  |".*$1bad1a55-ead|
		00000040  30 2d 34 66 35 62 2d 61  36 33 62 2d 66 66 66 30  |0-4f5b-a63b-fff0|
		00000050  61 32 35 34 66 33 32 62  32 04 32 31 34 39 38 00  |a254f32b2.21498.|
		00000060  42 08 08 b6 e0 a7 bd 06  10 00 5a 20 0a 17 62 65  |B.........Z ..be|
		00000070  74 61 2e 6b 75 62 65 72  6e 65 74 65 73 2e 69 6f  |ta.kubernetes.io|
		00000080  2f 61 72 63 68 12 05 61  6d 64 36 34 5a 1e 0a 15  |/arch..amd64Z...|
		00000090  62 65 74 61 2e 6b 75 62  65 72 6e 65 74 65 73 2e  |beta.kubernetes.|
		000000a0  69 6f 2f 6f 73 12 05 6c  69 6e 75 78 5a 1b 0a 12  |io/os..linuxZ...|
		000000b0  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 61 72  |kubernetes.io/ar|
		000000c0  63 68 12 05 61 6d 64 36  34 5a 2e 0a 16 6b 75 62  |ch..amd64Z...ku [truncated 15484 chars]
	 >
	I0210 12:25:34.473074    5644 node_ready.go:53] node "multinode-032400-m02" has status "Ready":"False"
	I0210 12:25:34.968504    5644 type.go:168] "Request Body" body=""
	I0210 12:25:34.968504    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:25:34.968504    5644 round_trippers.go:476] Request Headers:
	I0210 12:25:34.968504    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:25:34.968504    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:25:34.972797    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:25:34.972797    5644 round_trippers.go:584] Response Headers:
	I0210 12:25:34.972797    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:25:34.972797    5644 round_trippers.go:587]     Content-Length: 3341
	I0210 12:25:34.972797    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:25:34 GMT
	I0210 12:25:34.972797    5644 round_trippers.go:587]     Audit-Id: edd22e2e-a071-49cf-847b-8645164df5ed
	I0210 12:25:34.972797    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:25:34.972797    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:25:34.972797    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:25:34.972797    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 f6 19 0a ab 0c 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 31 62 61 64  31 61 35 35 2d 65 61 64  |".*$1bad1a55-ead|
		00000040  30 2d 34 66 35 62 2d 61  36 33 62 2d 66 66 66 30  |0-4f5b-a63b-fff0|
		00000050  61 32 35 34 66 33 32 62  32 04 32 31 34 39 38 00  |a254f32b2.21498.|
		00000060  42 08 08 b6 e0 a7 bd 06  10 00 5a 20 0a 17 62 65  |B.........Z ..be|
		00000070  74 61 2e 6b 75 62 65 72  6e 65 74 65 73 2e 69 6f  |ta.kubernetes.io|
		00000080  2f 61 72 63 68 12 05 61  6d 64 36 34 5a 1e 0a 15  |/arch..amd64Z...|
		00000090  62 65 74 61 2e 6b 75 62  65 72 6e 65 74 65 73 2e  |beta.kubernetes.|
		000000a0  69 6f 2f 6f 73 12 05 6c  69 6e 75 78 5a 1b 0a 12  |io/os..linuxZ...|
		000000b0  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 61 72  |kubernetes.io/ar|
		000000c0  63 68 12 05 61 6d 64 36  34 5a 2e 0a 16 6b 75 62  |ch..amd64Z...ku [truncated 15484 chars]
	 >
	I0210 12:25:35.468808    5644 type.go:168] "Request Body" body=""
	I0210 12:25:35.468808    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:25:35.468808    5644 round_trippers.go:476] Request Headers:
	I0210 12:25:35.468808    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:25:35.468808    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:25:35.472943    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:25:35.473026    5644 round_trippers.go:584] Response Headers:
	I0210 12:25:35.473026    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:25:35.473026    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:25:35.473026    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:25:35.473026    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:25:35.473026    5644 round_trippers.go:587]     Content-Length: 3341
	I0210 12:25:35.473026    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:25:35 GMT
	I0210 12:25:35.473095    5644 round_trippers.go:587]     Audit-Id: 6eb2bcbc-3dcb-4bf2-8310-9bc6ce4f8e33
	I0210 12:25:35.473189    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 f6 19 0a ab 0c 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 31 62 61 64  31 61 35 35 2d 65 61 64  |".*$1bad1a55-ead|
		00000040  30 2d 34 66 35 62 2d 61  36 33 62 2d 66 66 66 30  |0-4f5b-a63b-fff0|
		00000050  61 32 35 34 66 33 32 62  32 04 32 31 34 39 38 00  |a254f32b2.21498.|
		00000060  42 08 08 b6 e0 a7 bd 06  10 00 5a 20 0a 17 62 65  |B.........Z ..be|
		00000070  74 61 2e 6b 75 62 65 72  6e 65 74 65 73 2e 69 6f  |ta.kubernetes.io|
		00000080  2f 61 72 63 68 12 05 61  6d 64 36 34 5a 1e 0a 15  |/arch..amd64Z...|
		00000090  62 65 74 61 2e 6b 75 62  65 72 6e 65 74 65 73 2e  |beta.kubernetes.|
		000000a0  69 6f 2f 6f 73 12 05 6c  69 6e 75 78 5a 1b 0a 12  |io/os..linuxZ...|
		000000b0  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 61 72  |kubernetes.io/ar|
		000000c0  63 68 12 05 61 6d 64 36  34 5a 2e 0a 16 6b 75 62  |ch..amd64Z...ku [truncated 15484 chars]
	 >
	I0210 12:25:35.969161    5644 type.go:168] "Request Body" body=""
	I0210 12:25:35.969303    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:25:35.969303    5644 round_trippers.go:476] Request Headers:
	I0210 12:25:35.969303    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:25:35.969303    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:25:35.972971    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:25:35.973055    5644 round_trippers.go:584] Response Headers:
	I0210 12:25:35.973055    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:25:35.973055    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:25:35.973055    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:25:35.973055    5644 round_trippers.go:587]     Content-Length: 3341
	I0210 12:25:35.973133    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:25:35 GMT
	I0210 12:25:35.973133    5644 round_trippers.go:587]     Audit-Id: f889ee21-9b3e-4dfe-a5fc-7a1b5e9503f7
	I0210 12:25:35.973133    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:25:35.973324    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 f6 19 0a ab 0c 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 31 62 61 64  31 61 35 35 2d 65 61 64  |".*$1bad1a55-ead|
		00000040  30 2d 34 66 35 62 2d 61  36 33 62 2d 66 66 66 30  |0-4f5b-a63b-fff0|
		00000050  61 32 35 34 66 33 32 62  32 04 32 31 34 39 38 00  |a254f32b2.21498.|
		00000060  42 08 08 b6 e0 a7 bd 06  10 00 5a 20 0a 17 62 65  |B.........Z ..be|
		00000070  74 61 2e 6b 75 62 65 72  6e 65 74 65 73 2e 69 6f  |ta.kubernetes.io|
		00000080  2f 61 72 63 68 12 05 61  6d 64 36 34 5a 1e 0a 15  |/arch..amd64Z...|
		00000090  62 65 74 61 2e 6b 75 62  65 72 6e 65 74 65 73 2e  |beta.kubernetes.|
		000000a0  69 6f 2f 6f 73 12 05 6c  69 6e 75 78 5a 1b 0a 12  |io/os..linuxZ...|
		000000b0  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 61 72  |kubernetes.io/ar|
		000000c0  63 68 12 05 61 6d 64 36  34 5a 2e 0a 16 6b 75 62  |ch..amd64Z...ku [truncated 15484 chars]
	 >
	I0210 12:25:36.468222    5644 type.go:168] "Request Body" body=""
	I0210 12:25:36.469160    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:25:36.469160    5644 round_trippers.go:476] Request Headers:
	I0210 12:25:36.469160    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:25:36.469160    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:25:36.475916    5644 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0210 12:25:36.475916    5644 round_trippers.go:584] Response Headers:
	I0210 12:25:36.475916    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:25:36.475916    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:25:36.475916    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:25:36.475916    5644 round_trippers.go:587]     Content-Length: 3341
	I0210 12:25:36.475916    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:25:36 GMT
	I0210 12:25:36.475916    5644 round_trippers.go:587]     Audit-Id: 092f2b04-33f1-490c-b259-835c77776041
	I0210 12:25:36.475916    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:25:36.475916    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 f6 19 0a ab 0c 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 31 62 61 64  31 61 35 35 2d 65 61 64  |".*$1bad1a55-ead|
		00000040  30 2d 34 66 35 62 2d 61  36 33 62 2d 66 66 66 30  |0-4f5b-a63b-fff0|
		00000050  61 32 35 34 66 33 32 62  32 04 32 31 34 39 38 00  |a254f32b2.21498.|
		00000060  42 08 08 b6 e0 a7 bd 06  10 00 5a 20 0a 17 62 65  |B.........Z ..be|
		00000070  74 61 2e 6b 75 62 65 72  6e 65 74 65 73 2e 69 6f  |ta.kubernetes.io|
		00000080  2f 61 72 63 68 12 05 61  6d 64 36 34 5a 1e 0a 15  |/arch..amd64Z...|
		00000090  62 65 74 61 2e 6b 75 62  65 72 6e 65 74 65 73 2e  |beta.kubernetes.|
		000000a0  69 6f 2f 6f 73 12 05 6c  69 6e 75 78 5a 1b 0a 12  |io/os..linuxZ...|
		000000b0  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 61 72  |kubernetes.io/ar|
		000000c0  63 68 12 05 61 6d 64 36  34 5a 2e 0a 16 6b 75 62  |ch..amd64Z...ku [truncated 15484 chars]
	 >
	I0210 12:25:36.475916    5644 node_ready.go:53] node "multinode-032400-m02" has status "Ready":"False"
	I0210 12:25:36.968598    5644 type.go:168] "Request Body" body=""
	I0210 12:25:36.968598    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:25:36.968598    5644 round_trippers.go:476] Request Headers:
	I0210 12:25:36.968598    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:25:36.968598    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:25:36.971597    5644 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0210 12:25:36.971597    5644 round_trippers.go:584] Response Headers:
	I0210 12:25:36.971597    5644 round_trippers.go:587]     Audit-Id: 5a976e2e-734a-4e84-b513-f95752a0b998
	I0210 12:25:36.971597    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:25:36.971597    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:25:36.971597    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:25:36.971597    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:25:36.971597    5644 round_trippers.go:587]     Content-Length: 3642
	I0210 12:25:36.971597    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:25:36 GMT
	I0210 12:25:36.971597    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 a3 1c 0a fa 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 31 62 61 64  31 61 35 35 2d 65 61 64  |".*$1bad1a55-ead|
		00000040  30 2d 34 66 35 62 2d 61  36 33 62 2d 66 66 66 30  |0-4f5b-a63b-fff0|
		00000050  61 32 35 34 66 33 32 62  32 04 32 31 35 37 38 00  |a254f32b2.21578.|
		00000060  42 08 08 b6 e0 a7 bd 06  10 00 5a 20 0a 17 62 65  |B.........Z ..be|
		00000070  74 61 2e 6b 75 62 65 72  6e 65 74 65 73 2e 69 6f  |ta.kubernetes.io|
		00000080  2f 61 72 63 68 12 05 61  6d 64 36 34 5a 1e 0a 15  |/arch..amd64Z...|
		00000090  62 65 74 61 2e 6b 75 62  65 72 6e 65 74 65 73 2e  |beta.kubernetes.|
		000000a0  69 6f 2f 6f 73 12 05 6c  69 6e 75 78 5a 1b 0a 12  |io/os..linuxZ...|
		000000b0  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 61 72  |kubernetes.io/ar|
		000000c0  63 68 12 05 61 6d 64 36  34 5a 2e 0a 16 6b 75 62  |ch..amd64Z...ku [truncated 16982 chars]
	 >
	I0210 12:25:37.468513    5644 type.go:168] "Request Body" body=""
	I0210 12:25:37.468855    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:25:37.468855    5644 round_trippers.go:476] Request Headers:
	I0210 12:25:37.468855    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:25:37.468943    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:25:37.472136    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:25:37.472219    5644 round_trippers.go:584] Response Headers:
	I0210 12:25:37.472219    5644 round_trippers.go:587]     Audit-Id: fbc98f06-918e-4376-b265-cf422f355dde
	I0210 12:25:37.472219    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:25:37.472298    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:25:37.472298    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:25:37.472298    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:25:37.472298    5644 round_trippers.go:587]     Content-Length: 3642
	I0210 12:25:37.472298    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:25:37 GMT
	I0210 12:25:37.472429    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 a3 1c 0a fa 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 31 62 61 64  31 61 35 35 2d 65 61 64  |".*$1bad1a55-ead|
		00000040  30 2d 34 66 35 62 2d 61  36 33 62 2d 66 66 66 30  |0-4f5b-a63b-fff0|
		00000050  61 32 35 34 66 33 32 62  32 04 32 31 35 37 38 00  |a254f32b2.21578.|
		00000060  42 08 08 b6 e0 a7 bd 06  10 00 5a 20 0a 17 62 65  |B.........Z ..be|
		00000070  74 61 2e 6b 75 62 65 72  6e 65 74 65 73 2e 69 6f  |ta.kubernetes.io|
		00000080  2f 61 72 63 68 12 05 61  6d 64 36 34 5a 1e 0a 15  |/arch..amd64Z...|
		00000090  62 65 74 61 2e 6b 75 62  65 72 6e 65 74 65 73 2e  |beta.kubernetes.|
		000000a0  69 6f 2f 6f 73 12 05 6c  69 6e 75 78 5a 1b 0a 12  |io/os..linuxZ...|
		000000b0  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 61 72  |kubernetes.io/ar|
		000000c0  63 68 12 05 61 6d 64 36  34 5a 2e 0a 16 6b 75 62  |ch..amd64Z...ku [truncated 16982 chars]
	 >
	I0210 12:25:37.968936    5644 type.go:168] "Request Body" body=""
	I0210 12:25:37.969255    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:25:37.969255    5644 round_trippers.go:476] Request Headers:
	I0210 12:25:37.969255    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:25:37.969255    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:25:37.973333    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:25:37.973333    5644 round_trippers.go:584] Response Headers:
	I0210 12:25:37.973333    5644 round_trippers.go:587]     Audit-Id: 3a737390-82cb-49f5-b856-26eb0ce4591f
	I0210 12:25:37.973333    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:25:37.973333    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:25:37.973333    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:25:37.973333    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:25:37.973333    5644 round_trippers.go:587]     Content-Length: 3642
	I0210 12:25:37.973333    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:25:37 GMT
	I0210 12:25:37.973333    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 a3 1c 0a fa 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 31 62 61 64  31 61 35 35 2d 65 61 64  |".*$1bad1a55-ead|
		00000040  30 2d 34 66 35 62 2d 61  36 33 62 2d 66 66 66 30  |0-4f5b-a63b-fff0|
		00000050  61 32 35 34 66 33 32 62  32 04 32 31 35 37 38 00  |a254f32b2.21578.|
		00000060  42 08 08 b6 e0 a7 bd 06  10 00 5a 20 0a 17 62 65  |B.........Z ..be|
		00000070  74 61 2e 6b 75 62 65 72  6e 65 74 65 73 2e 69 6f  |ta.kubernetes.io|
		00000080  2f 61 72 63 68 12 05 61  6d 64 36 34 5a 1e 0a 15  |/arch..amd64Z...|
		00000090  62 65 74 61 2e 6b 75 62  65 72 6e 65 74 65 73 2e  |beta.kubernetes.|
		000000a0  69 6f 2f 6f 73 12 05 6c  69 6e 75 78 5a 1b 0a 12  |io/os..linuxZ...|
		000000b0  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 61 72  |kubernetes.io/ar|
		000000c0  63 68 12 05 61 6d 64 36  34 5a 2e 0a 16 6b 75 62  |ch..amd64Z...ku [truncated 16982 chars]
	 >
	I0210 12:25:38.468113    5644 type.go:168] "Request Body" body=""
	I0210 12:25:38.468113    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:25:38.468113    5644 round_trippers.go:476] Request Headers:
	I0210 12:25:38.468113    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:25:38.468113    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:25:38.472316    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:25:38.472316    5644 round_trippers.go:584] Response Headers:
	I0210 12:25:38.472316    5644 round_trippers.go:587]     Audit-Id: 7150760b-047a-45bd-9768-f8b28cfbb768
	I0210 12:25:38.472316    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:25:38.472316    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:25:38.472316    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:25:38.472316    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:25:38.472316    5644 round_trippers.go:587]     Content-Length: 3642
	I0210 12:25:38.472316    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:25:38 GMT
	I0210 12:25:38.472611    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 a3 1c 0a fa 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 31 62 61 64  31 61 35 35 2d 65 61 64  |".*$1bad1a55-ead|
		00000040  30 2d 34 66 35 62 2d 61  36 33 62 2d 66 66 66 30  |0-4f5b-a63b-fff0|
		00000050  61 32 35 34 66 33 32 62  32 04 32 31 35 37 38 00  |a254f32b2.21578.|
		00000060  42 08 08 b6 e0 a7 bd 06  10 00 5a 20 0a 17 62 65  |B.........Z ..be|
		00000070  74 61 2e 6b 75 62 65 72  6e 65 74 65 73 2e 69 6f  |ta.kubernetes.io|
		00000080  2f 61 72 63 68 12 05 61  6d 64 36 34 5a 1e 0a 15  |/arch..amd64Z...|
		00000090  62 65 74 61 2e 6b 75 62  65 72 6e 65 74 65 73 2e  |beta.kubernetes.|
		000000a0  69 6f 2f 6f 73 12 05 6c  69 6e 75 78 5a 1b 0a 12  |io/os..linuxZ...|
		000000b0  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 61 72  |kubernetes.io/ar|
		000000c0  63 68 12 05 61 6d 64 36  34 5a 2e 0a 16 6b 75 62  |ch..amd64Z...ku [truncated 16982 chars]
	 >
	I0210 12:25:38.968707    5644 type.go:168] "Request Body" body=""
	I0210 12:25:38.969108    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:25:38.969284    5644 round_trippers.go:476] Request Headers:
	I0210 12:25:38.969284    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:25:38.969341    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:25:38.973203    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:25:38.973203    5644 round_trippers.go:584] Response Headers:
	I0210 12:25:38.973203    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:25:38.973203    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:25:38.973203    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:25:38.973203    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:25:38.973203    5644 round_trippers.go:587]     Content-Length: 3642
	I0210 12:25:38.973203    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:25:38 GMT
	I0210 12:25:38.973203    5644 round_trippers.go:587]     Audit-Id: e1503f7d-d24e-437f-950d-20527a16cf58
	I0210 12:25:38.973203    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 a3 1c 0a fa 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 31 62 61 64  31 61 35 35 2d 65 61 64  |".*$1bad1a55-ead|
		00000040  30 2d 34 66 35 62 2d 61  36 33 62 2d 66 66 66 30  |0-4f5b-a63b-fff0|
		00000050  61 32 35 34 66 33 32 62  32 04 32 31 35 37 38 00  |a254f32b2.21578.|
		00000060  42 08 08 b6 e0 a7 bd 06  10 00 5a 20 0a 17 62 65  |B.........Z ..be|
		00000070  74 61 2e 6b 75 62 65 72  6e 65 74 65 73 2e 69 6f  |ta.kubernetes.io|
		00000080  2f 61 72 63 68 12 05 61  6d 64 36 34 5a 1e 0a 15  |/arch..amd64Z...|
		00000090  62 65 74 61 2e 6b 75 62  65 72 6e 65 74 65 73 2e  |beta.kubernetes.|
		000000a0  69 6f 2f 6f 73 12 05 6c  69 6e 75 78 5a 1b 0a 12  |io/os..linuxZ...|
		000000b0  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 61 72  |kubernetes.io/ar|
		000000c0  63 68 12 05 61 6d 64 36  34 5a 2e 0a 16 6b 75 62  |ch..amd64Z...ku [truncated 16982 chars]
	 >
	I0210 12:25:38.973203    5644 node_ready.go:53] node "multinode-032400-m02" has status "Ready":"False"
	I0210 12:25:39.468644    5644 type.go:168] "Request Body" body=""
	I0210 12:25:39.469068    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:25:39.469146    5644 round_trippers.go:476] Request Headers:
	I0210 12:25:39.469146    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:25:39.469146    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:25:39.472435    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:25:39.472435    5644 round_trippers.go:584] Response Headers:
	I0210 12:25:39.472435    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:25:39.472564    5644 round_trippers.go:587]     Content-Length: 3642
	I0210 12:25:39.472564    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:25:39 GMT
	I0210 12:25:39.472564    5644 round_trippers.go:587]     Audit-Id: 72f95c0c-50dc-4102-9b67-ed24d17ec47a
	I0210 12:25:39.472564    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:25:39.472564    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:25:39.472564    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:25:39.472834    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 a3 1c 0a fa 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 31 62 61 64  31 61 35 35 2d 65 61 64  |".*$1bad1a55-ead|
		00000040  30 2d 34 66 35 62 2d 61  36 33 62 2d 66 66 66 30  |0-4f5b-a63b-fff0|
		00000050  61 32 35 34 66 33 32 62  32 04 32 31 35 37 38 00  |a254f32b2.21578.|
		00000060  42 08 08 b6 e0 a7 bd 06  10 00 5a 20 0a 17 62 65  |B.........Z ..be|
		00000070  74 61 2e 6b 75 62 65 72  6e 65 74 65 73 2e 69 6f  |ta.kubernetes.io|
		00000080  2f 61 72 63 68 12 05 61  6d 64 36 34 5a 1e 0a 15  |/arch..amd64Z...|
		00000090  62 65 74 61 2e 6b 75 62  65 72 6e 65 74 65 73 2e  |beta.kubernetes.|
		000000a0  69 6f 2f 6f 73 12 05 6c  69 6e 75 78 5a 1b 0a 12  |io/os..linuxZ...|
		000000b0  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 61 72  |kubernetes.io/ar|
		000000c0  63 68 12 05 61 6d 64 36  34 5a 2e 0a 16 6b 75 62  |ch..amd64Z...ku [truncated 16982 chars]
	 >
	I0210 12:25:39.968210    5644 type.go:168] "Request Body" body=""
	I0210 12:25:39.968210    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:25:39.968210    5644 round_trippers.go:476] Request Headers:
	I0210 12:25:39.968210    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:25:39.968210    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:25:39.972687    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:25:39.972793    5644 round_trippers.go:584] Response Headers:
	I0210 12:25:39.972793    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:25:39.972793    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:25:39.972793    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:25:39.972793    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:25:39.972793    5644 round_trippers.go:587]     Content-Length: 3642
	I0210 12:25:39.972793    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:25:39 GMT
	I0210 12:25:39.972793    5644 round_trippers.go:587]     Audit-Id: 555096bf-1701-4e35-b4c8-18d985aa6672
	I0210 12:25:39.973055    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 a3 1c 0a fa 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 31 62 61 64  31 61 35 35 2d 65 61 64  |".*$1bad1a55-ead|
		00000040  30 2d 34 66 35 62 2d 61  36 33 62 2d 66 66 66 30  |0-4f5b-a63b-fff0|
		00000050  61 32 35 34 66 33 32 62  32 04 32 31 35 37 38 00  |a254f32b2.21578.|
		00000060  42 08 08 b6 e0 a7 bd 06  10 00 5a 20 0a 17 62 65  |B.........Z ..be|
		00000070  74 61 2e 6b 75 62 65 72  6e 65 74 65 73 2e 69 6f  |ta.kubernetes.io|
		00000080  2f 61 72 63 68 12 05 61  6d 64 36 34 5a 1e 0a 15  |/arch..amd64Z...|
		00000090  62 65 74 61 2e 6b 75 62  65 72 6e 65 74 65 73 2e  |beta.kubernetes.|
		000000a0  69 6f 2f 6f 73 12 05 6c  69 6e 75 78 5a 1b 0a 12  |io/os..linuxZ...|
		000000b0  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 61 72  |kubernetes.io/ar|
		000000c0  63 68 12 05 61 6d 64 36  34 5a 2e 0a 16 6b 75 62  |ch..amd64Z...ku [truncated 16982 chars]
	 >
	I0210 12:25:40.469069    5644 type.go:168] "Request Body" body=""
	I0210 12:25:40.469069    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:25:40.469069    5644 round_trippers.go:476] Request Headers:
	I0210 12:25:40.469069    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:25:40.469069    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:25:40.473465    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:25:40.474088    5644 round_trippers.go:584] Response Headers:
	I0210 12:25:40.474136    5644 round_trippers.go:587]     Content-Length: 3642
	I0210 12:25:40.474136    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:25:40 GMT
	I0210 12:25:40.474162    5644 round_trippers.go:587]     Audit-Id: d81e68f2-9015-4087-885a-4081184acbcd
	I0210 12:25:40.474162    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:25:40.474162    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:25:40.474162    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:25:40.474162    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:25:40.474162    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 a3 1c 0a fa 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 31 62 61 64  31 61 35 35 2d 65 61 64  |".*$1bad1a55-ead|
		00000040  30 2d 34 66 35 62 2d 61  36 33 62 2d 66 66 66 30  |0-4f5b-a63b-fff0|
		00000050  61 32 35 34 66 33 32 62  32 04 32 31 35 37 38 00  |a254f32b2.21578.|
		00000060  42 08 08 b6 e0 a7 bd 06  10 00 5a 20 0a 17 62 65  |B.........Z ..be|
		00000070  74 61 2e 6b 75 62 65 72  6e 65 74 65 73 2e 69 6f  |ta.kubernetes.io|
		00000080  2f 61 72 63 68 12 05 61  6d 64 36 34 5a 1e 0a 15  |/arch..amd64Z...|
		00000090  62 65 74 61 2e 6b 75 62  65 72 6e 65 74 65 73 2e  |beta.kubernetes.|
		000000a0  69 6f 2f 6f 73 12 05 6c  69 6e 75 78 5a 1b 0a 12  |io/os..linuxZ...|
		000000b0  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 61 72  |kubernetes.io/ar|
		000000c0  63 68 12 05 61 6d 64 36  34 5a 2e 0a 16 6b 75 62  |ch..amd64Z...ku [truncated 16982 chars]
	 >
	I0210 12:25:40.968916    5644 type.go:168] "Request Body" body=""
	I0210 12:25:40.968916    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:25:40.968916    5644 round_trippers.go:476] Request Headers:
	I0210 12:25:40.968916    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:25:40.968916    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:25:40.974360    5644 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 12:25:40.974360    5644 round_trippers.go:584] Response Headers:
	I0210 12:25:40.974360    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:25:40.974360    5644 round_trippers.go:587]     Content-Length: 3642
	I0210 12:25:40.974360    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:25:40 GMT
	I0210 12:25:40.974360    5644 round_trippers.go:587]     Audit-Id: 91948b9e-e05a-4292-ab26-ae4450c54e2b
	I0210 12:25:40.974360    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:25:40.974360    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:25:40.974360    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:25:40.974360    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 a3 1c 0a fa 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 31 62 61 64  31 61 35 35 2d 65 61 64  |".*$1bad1a55-ead|
		00000040  30 2d 34 66 35 62 2d 61  36 33 62 2d 66 66 66 30  |0-4f5b-a63b-fff0|
		00000050  61 32 35 34 66 33 32 62  32 04 32 31 35 37 38 00  |a254f32b2.21578.|
		00000060  42 08 08 b6 e0 a7 bd 06  10 00 5a 20 0a 17 62 65  |B.........Z ..be|
		00000070  74 61 2e 6b 75 62 65 72  6e 65 74 65 73 2e 69 6f  |ta.kubernetes.io|
		00000080  2f 61 72 63 68 12 05 61  6d 64 36 34 5a 1e 0a 15  |/arch..amd64Z...|
		00000090  62 65 74 61 2e 6b 75 62  65 72 6e 65 74 65 73 2e  |beta.kubernetes.|
		000000a0  69 6f 2f 6f 73 12 05 6c  69 6e 75 78 5a 1b 0a 12  |io/os..linuxZ...|
		000000b0  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 61 72  |kubernetes.io/ar|
		000000c0  63 68 12 05 61 6d 64 36  34 5a 2e 0a 16 6b 75 62  |ch..amd64Z...ku [truncated 16982 chars]
	 >
	I0210 12:25:40.974360    5644 node_ready.go:53] node "multinode-032400-m02" has status "Ready":"False"
	I0210 12:25:41.469327    5644 type.go:168] "Request Body" body=""
	I0210 12:25:41.469462    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:25:41.469462    5644 round_trippers.go:476] Request Headers:
	I0210 12:25:41.469462    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:25:41.469462    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:25:41.473521    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:25:41.473573    5644 round_trippers.go:584] Response Headers:
	I0210 12:25:41.473573    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:25:41.473606    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:25:41.473606    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:25:41.473606    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:25:41.473606    5644 round_trippers.go:587]     Content-Length: 3642
	I0210 12:25:41.473606    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:25:41 GMT
	I0210 12:25:41.473606    5644 round_trippers.go:587]     Audit-Id: 4c090d5a-7ac2-4db4-ae8b-a72c7c769ded
	I0210 12:25:41.473870    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 a3 1c 0a fa 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 31 62 61 64  31 61 35 35 2d 65 61 64  |".*$1bad1a55-ead|
		00000040  30 2d 34 66 35 62 2d 61  36 33 62 2d 66 66 66 30  |0-4f5b-a63b-fff0|
		00000050  61 32 35 34 66 33 32 62  32 04 32 31 35 37 38 00  |a254f32b2.21578.|
		00000060  42 08 08 b6 e0 a7 bd 06  10 00 5a 20 0a 17 62 65  |B.........Z ..be|
		00000070  74 61 2e 6b 75 62 65 72  6e 65 74 65 73 2e 69 6f  |ta.kubernetes.io|
		00000080  2f 61 72 63 68 12 05 61  6d 64 36 34 5a 1e 0a 15  |/arch..amd64Z...|
		00000090  62 65 74 61 2e 6b 75 62  65 72 6e 65 74 65 73 2e  |beta.kubernetes.|
		000000a0  69 6f 2f 6f 73 12 05 6c  69 6e 75 78 5a 1b 0a 12  |io/os..linuxZ...|
		000000b0  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 61 72  |kubernetes.io/ar|
		000000c0  63 68 12 05 61 6d 64 36  34 5a 2e 0a 16 6b 75 62  |ch..amd64Z...ku [truncated 16982 chars]
	 >
	I0210 12:25:41.968582    5644 type.go:168] "Request Body" body=""
	I0210 12:25:41.968582    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:25:41.968582    5644 round_trippers.go:476] Request Headers:
	I0210 12:25:41.968582    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:25:41.968582    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:25:41.973019    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:25:41.973019    5644 round_trippers.go:584] Response Headers:
	I0210 12:25:41.973019    5644 round_trippers.go:587]     Content-Length: 3642
	I0210 12:25:41.973019    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:25:41 GMT
	I0210 12:25:41.973019    5644 round_trippers.go:587]     Audit-Id: 54485fea-1bf0-4461-a108-b431ac8cf56d
	I0210 12:25:41.973019    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:25:41.973019    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:25:41.973019    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:25:41.973019    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:25:41.973019    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 a3 1c 0a fa 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 31 62 61 64  31 61 35 35 2d 65 61 64  |".*$1bad1a55-ead|
		00000040  30 2d 34 66 35 62 2d 61  36 33 62 2d 66 66 66 30  |0-4f5b-a63b-fff0|
		00000050  61 32 35 34 66 33 32 62  32 04 32 31 35 37 38 00  |a254f32b2.21578.|
		00000060  42 08 08 b6 e0 a7 bd 06  10 00 5a 20 0a 17 62 65  |B.........Z ..be|
		00000070  74 61 2e 6b 75 62 65 72  6e 65 74 65 73 2e 69 6f  |ta.kubernetes.io|
		00000080  2f 61 72 63 68 12 05 61  6d 64 36 34 5a 1e 0a 15  |/arch..amd64Z...|
		00000090  62 65 74 61 2e 6b 75 62  65 72 6e 65 74 65 73 2e  |beta.kubernetes.|
		000000a0  69 6f 2f 6f 73 12 05 6c  69 6e 75 78 5a 1b 0a 12  |io/os..linuxZ...|
		000000b0  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 61 72  |kubernetes.io/ar|
		000000c0  63 68 12 05 61 6d 64 36  34 5a 2e 0a 16 6b 75 62  |ch..amd64Z...ku [truncated 16982 chars]
	 >
	I0210 12:25:42.469185    5644 type.go:168] "Request Body" body=""
	I0210 12:25:42.469185    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:25:42.469185    5644 round_trippers.go:476] Request Headers:
	I0210 12:25:42.469185    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:25:42.469185    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:25:42.474290    5644 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 12:25:42.474372    5644 round_trippers.go:584] Response Headers:
	I0210 12:25:42.474372    5644 round_trippers.go:587]     Audit-Id: 6874f952-1555-4448-9fb7-7c8ec6229517
	I0210 12:25:42.474372    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:25:42.474447    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:25:42.474447    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:25:42.474447    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:25:42.474447    5644 round_trippers.go:587]     Content-Length: 3642
	I0210 12:25:42.474447    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:25:42 GMT
	I0210 12:25:42.474697    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 a3 1c 0a fa 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 31 62 61 64  31 61 35 35 2d 65 61 64  |".*$1bad1a55-ead|
		00000040  30 2d 34 66 35 62 2d 61  36 33 62 2d 66 66 66 30  |0-4f5b-a63b-fff0|
		00000050  61 32 35 34 66 33 32 62  32 04 32 31 35 37 38 00  |a254f32b2.21578.|
		00000060  42 08 08 b6 e0 a7 bd 06  10 00 5a 20 0a 17 62 65  |B.........Z ..be|
		00000070  74 61 2e 6b 75 62 65 72  6e 65 74 65 73 2e 69 6f  |ta.kubernetes.io|
		00000080  2f 61 72 63 68 12 05 61  6d 64 36 34 5a 1e 0a 15  |/arch..amd64Z...|
		00000090  62 65 74 61 2e 6b 75 62  65 72 6e 65 74 65 73 2e  |beta.kubernetes.|
		000000a0  69 6f 2f 6f 73 12 05 6c  69 6e 75 78 5a 1b 0a 12  |io/os..linuxZ...|
		000000b0  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 61 72  |kubernetes.io/ar|
		000000c0  63 68 12 05 61 6d 64 36  34 5a 2e 0a 16 6b 75 62  |ch..amd64Z...ku [truncated 16982 chars]
	 >
	I0210 12:25:42.968281    5644 type.go:168] "Request Body" body=""
	I0210 12:25:42.968645    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:25:42.968645    5644 round_trippers.go:476] Request Headers:
	I0210 12:25:42.968645    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:25:42.968645    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:25:42.972898    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:25:42.972963    5644 round_trippers.go:584] Response Headers:
	I0210 12:25:42.972963    5644 round_trippers.go:587]     Audit-Id: c4748307-b742-4d77-b863-2b8a72431791
	I0210 12:25:42.972963    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:25:42.972963    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:25:42.973020    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:25:42.973020    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:25:42.973020    5644 round_trippers.go:587]     Content-Length: 3520
	I0210 12:25:42.973020    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:25:42 GMT
	I0210 12:25:42.973277    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 a9 1b 0a b0 0f 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 31 62 61 64  31 61 35 35 2d 65 61 64  |".*$1bad1a55-ead|
		00000040  30 2d 34 66 35 62 2d 61  36 33 62 2d 66 66 66 30  |0-4f5b-a63b-fff0|
		00000050  61 32 35 34 66 33 32 62  32 04 32 31 36 37 38 00  |a254f32b2.21678.|
		00000060  42 08 08 b6 e0 a7 bd 06  10 00 5a 20 0a 17 62 65  |B.........Z ..be|
		00000070  74 61 2e 6b 75 62 65 72  6e 65 74 65 73 2e 69 6f  |ta.kubernetes.io|
		00000080  2f 61 72 63 68 12 05 61  6d 64 36 34 5a 1e 0a 15  |/arch..amd64Z...|
		00000090  62 65 74 61 2e 6b 75 62  65 72 6e 65 74 65 73 2e  |beta.kubernetes.|
		000000a0  69 6f 2f 6f 73 12 05 6c  69 6e 75 78 5a 1b 0a 12  |io/os..linuxZ...|
		000000b0  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 61 72  |kubernetes.io/ar|
		000000c0  63 68 12 05 61 6d 64 36  34 5a 2e 0a 16 6b 75 62  |ch..amd64Z...ku [truncated 16356 chars]
	 >
	I0210 12:25:42.973405    5644 node_ready.go:49] node "multinode-032400-m02" has status "Ready":"True"
	I0210 12:25:42.973405    5644 node_ready.go:38] duration metric: took 15.005332s for node "multinode-032400-m02" to be "Ready" ...
	I0210 12:25:42.973405    5644 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
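
The loop above polls GET /api/v1/nodes/multinode-032400-m02 roughly every 500ms until node_ready.go sees "Ready":"True" (15.005332s in total here). A minimal approximation with client-go follows; it is a sketch assuming a configured kubernetes.Interface, not minikube's actual node_ready.go:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitNodeReady polls the node until its NodeReady condition is "True"
// or the timeout elapses, matching the ~500ms cadence in the log.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("node %q did not become Ready within %v", name, timeout)
}
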
	I0210 12:25:42.973491    5644 type.go:204] "Request Body" body=""
	I0210 12:25:42.973556    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods
	I0210 12:25:42.973556    5644 round_trippers.go:476] Request Headers:
	I0210 12:25:42.973556    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:25:42.973627    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:25:42.979691    5644 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0210 12:25:42.979691    5644 round_trippers.go:584] Response Headers:
	I0210 12:25:42.979691    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:25:42.979691    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:25:42.979691    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:25:42.979691    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:25:43 GMT
	I0210 12:25:42.979691    5644 round_trippers.go:587]     Audit-Id: fa8f912b-2df1-4f6f-92df-58bc5b39c417
	I0210 12:25:42.979691    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:25:42.981851    5644 type.go:204] "Response Body" body=<
		00000000  6b 38 73 00 0a 0d 0a 02  76 31 12 07 50 6f 64 4c  |k8s.....v1..PodL|
		00000010  69 73 74 12 ea e9 03 0a  0a 0a 00 12 04 32 31 36  |ist..........216|
		00000020  38 1a 00 12 c5 28 0a af  19 0a 18 63 6f 72 65 64  |8....(.....cored|
		00000030  6e 73 2d 36 36 38 64 36  62 66 39 62 63 2d 77 38  |ns-668d6bf9bc-w8|
		00000040  72 72 39 12 13 63 6f 72  65 64 6e 73 2d 36 36 38  |rr9..coredns-668|
		00000050  64 36 62 66 39 62 63 2d  1a 0b 6b 75 62 65 2d 73  |d6bf9bc-..kube-s|
		00000060  79 73 74 65 6d 22 00 2a  24 65 34 35 61 33 37 62  |ystem".*$e45a37b|
		00000070  66 2d 65 37 64 61 2d 34  31 32 39 2d 62 62 37 65  |f-e7da-4129-bb7e|
		00000080  2d 38 62 65 37 64 62 65  39 33 65 30 39 32 04 31  |-8be7dbe93e092.1|
		00000090  39 37 32 38 00 42 08 08  92 d4 a7 bd 06 10 00 5a  |9728.B.........Z|
		000000a0  13 0a 07 6b 38 73 2d 61  70 70 12 08 6b 75 62 65  |...k8s-app..kube|
		000000b0  2d 64 6e 73 5a 1f 0a 11  70 6f 64 2d 74 65 6d 70  |-dnsZ...pod-temp|
		000000c0  6c 61 74 65 2d 68 61 73  68 12 0a 36 36 38 64 36  |late-hash..668d [truncated 308724 chars]
	 >
	I0210 12:25:42.982555    5644 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-w8rr9" in "kube-system" namespace to be "Ready" ...
	I0210 12:25:42.982555    5644 type.go:168] "Request Body" body=""
	I0210 12:25:42.982555    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-w8rr9
	I0210 12:25:42.982555    5644 round_trippers.go:476] Request Headers:
	I0210 12:25:42.982555    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:25:42.982555    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:25:42.985730    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:25:42.985730    5644 round_trippers.go:584] Response Headers:
	I0210 12:25:42.985730    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:25:43 GMT
	I0210 12:25:42.985730    5644 round_trippers.go:587]     Audit-Id: 37517305-d14b-40b1-a6f6-9e4d707a3892
	I0210 12:25:42.985730    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:25:42.985730    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:25:42.985730    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:25:42.985730    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:25:42.985730    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  c5 28 0a af 19 0a 18 63  6f 72 65 64 6e 73 2d 36  |.(.....coredns-6|
		00000020  36 38 64 36 62 66 39 62  63 2d 77 38 72 72 39 12  |68d6bf9bc-w8rr9.|
		00000030  13 63 6f 72 65 64 6e 73  2d 36 36 38 64 36 62 66  |.coredns-668d6bf|
		00000040  39 62 63 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |9bc-..kube-syste|
		00000050  6d 22 00 2a 24 65 34 35  61 33 37 62 66 2d 65 37  |m".*$e45a37bf-e7|
		00000060  64 61 2d 34 31 32 39 2d  62 62 37 65 2d 38 62 65  |da-4129-bb7e-8be|
		00000070  37 64 62 65 39 33 65 30  39 32 04 31 39 37 32 38  |7dbe93e092.19728|
		00000080  00 42 08 08 92 d4 a7 bd  06 10 00 5a 13 0a 07 6b  |.B.........Z...k|
		00000090  38 73 2d 61 70 70 12 08  6b 75 62 65 2d 64 6e 73  |8s-app..kube-dns|
		000000a0  5a 1f 0a 11 70 6f 64 2d  74 65 6d 70 6c 61 74 65  |Z...pod-template|
		000000b0  2d 68 61 73 68 12 0a 36  36 38 64 36 62 66 39 62  |-hash..668d6bf9b|
		000000c0  63 6a 53 0a 0a 52 65 70  6c 69 63 61 53 65 74 1a  |cjS..ReplicaSet [truncated 24725 chars]
	 >
	I0210 12:25:42.986731    5644 type.go:168] "Request Body" body=""
	I0210 12:25:42.986731    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:25:42.986731    5644 round_trippers.go:476] Request Headers:
	I0210 12:25:42.986731    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:25:42.986731    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:25:42.989755    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:25:42.989800    5644 round_trippers.go:584] Response Headers:
	I0210 12:25:42.989800    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:25:42.989800    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:25:43 GMT
	I0210 12:25:42.989800    5644 round_trippers.go:587]     Audit-Id: 28e681d0-4883-4b07-8113-92d25c9082de
	I0210 12:25:42.989800    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:25:42.989800    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:25:42.989800    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:25:42.990208    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d4 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 39  33 35 38 00 42 08 08 86  |1b262.19358.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22276 chars]
	 >
	I0210 12:25:42.990384    5644 pod_ready.go:93] pod "coredns-668d6bf9bc-w8rr9" in "kube-system" namespace has status "Ready":"True"
	I0210 12:25:42.990384    5644 pod_ready.go:82] duration metric: took 7.8281ms for pod "coredns-668d6bf9bc-w8rr9" in "kube-system" namespace to be "Ready" ...
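
Each per-pod wait above resolves by reading the pod's PodReady condition (and, as the paired node GET shows, the hosting node's state). A hedged one-function sketch of the condition check itself, not minikube's pod_ready.go:

package main

import corev1 "k8s.io/api/core/v1"

// podReady reports whether the PodReady condition is "True", the signal
// the log lines above summarize as status "Ready":"True".
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}
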
	I0210 12:25:42.990425    5644 pod_ready.go:79] waiting up to 6m0s for pod "etcd-multinode-032400" in "kube-system" namespace to be "Ready" ...
	I0210 12:25:42.990499    5644 type.go:168] "Request Body" body=""
	I0210 12:25:42.990571    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-032400
	I0210 12:25:42.990603    5644 round_trippers.go:476] Request Headers:
	I0210 12:25:42.990603    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:25:42.990603    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:25:42.992352    5644 round_trippers.go:581] Response Status: 200 OK in 1 milliseconds
	I0210 12:25:42.992352    5644 round_trippers.go:584] Response Headers:
	I0210 12:25:42.992352    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:25:42.992352    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:25:43 GMT
	I0210 12:25:42.992352    5644 round_trippers.go:587]     Audit-Id: 5d3e1470-0d15-4d06-a8af-615f4c71ea0b
	I0210 12:25:42.992352    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:25:42.992352    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:25:42.992352    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:25:42.992352    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  81 2c 0a 9f 1a 0a 15 65  74 63 64 2d 6d 75 6c 74  |.,.....etcd-mult|
		00000020  69 6e 6f 64 65 2d 30 33  32 34 30 30 12 00 1a 0b  |inode-032400....|
		00000030  6b 75 62 65 2d 73 79 73  74 65 6d 22 00 2a 24 32  |kube-system".*$2|
		00000040  36 64 34 31 31 30 66 2d  39 61 33 39 2d 34 38 64  |6d4110f-9a39-48d|
		00000050  65 2d 61 34 33 33 2d 35  36 37 61 37 35 37 38 39  |e-a433-567a75789|
		00000060  62 65 30 32 04 31 38 37  30 38 00 42 08 08 e6 de  |be02.18708.B....|
		00000070  a7 bd 06 10 00 5a 11 0a  09 63 6f 6d 70 6f 6e 65  |.....Z...compone|
		00000080  6e 74 12 04 65 74 63 64  5a 15 0a 04 74 69 65 72  |nt..etcdZ...tier|
		00000090  12 0d 63 6f 6e 74 72 6f  6c 2d 70 6c 61 6e 65 62  |..control-planeb|
		000000a0  4f 0a 30 6b 75 62 65 61  64 6d 2e 6b 75 62 65 72  |O.0kubeadm.kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 65 74 63 64 2e 61 64  |netes.io/etcd.ad|
		000000c0  76 65 72 74 69 73 65 2d  63 6c 69 65 6e 74 2d 75  |vertise-client- [truncated 26933 chars]
	 >
	I0210 12:25:42.992352    5644 type.go:168] "Request Body" body=""
	I0210 12:25:42.992352    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:25:42.992352    5644 round_trippers.go:476] Request Headers:
	I0210 12:25:42.992352    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:25:42.992352    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:25:42.996129    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:25:42.996129    5644 round_trippers.go:584] Response Headers:
	I0210 12:25:42.996129    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:25:43 GMT
	I0210 12:25:42.996129    5644 round_trippers.go:587]     Audit-Id: 9840de94-cd18-4dff-bb82-8a149fd3cfe0
	I0210 12:25:42.996129    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:25:42.996211    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:25:42.996211    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:25:42.996211    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:25:42.996463    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d4 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 39  33 35 38 00 42 08 08 86  |1b262.19358.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22276 chars]
	 >
	I0210 12:25:42.996608    5644 pod_ready.go:93] pod "etcd-multinode-032400" in "kube-system" namespace has status "Ready":"True"
	I0210 12:25:42.996632    5644 pod_ready.go:82] duration metric: took 6.1741ms for pod "etcd-multinode-032400" in "kube-system" namespace to be "Ready" ...
	I0210 12:25:42.996632    5644 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-multinode-032400" in "kube-system" namespace to be "Ready" ...
	I0210 12:25:42.996744    5644 type.go:168] "Request Body" body=""
	I0210 12:25:42.996744    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-032400
	I0210 12:25:42.996817    5644 round_trippers.go:476] Request Headers:
	I0210 12:25:42.996817    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:25:42.996817    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:25:42.999005    5644 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0210 12:25:42.999005    5644 round_trippers.go:584] Response Headers:
	I0210 12:25:42.999005    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:25:42.999464    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:25:43 GMT
	I0210 12:25:42.999464    5644 round_trippers.go:587]     Audit-Id: 8d9132ed-ef43-49eb-ade8-43ae3c18157f
	I0210 12:25:42.999464    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:25:42.999464    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:25:42.999464    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:25:42.999824    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  af 35 0a af 1c 0a 1f 6b  75 62 65 2d 61 70 69 73  |.5.....kube-apis|
		00000020  65 72 76 65 72 2d 6d 75  6c 74 69 6e 6f 64 65 2d  |erver-multinode-|
		00000030  30 33 32 34 30 30 12 00  1a 0b 6b 75 62 65 2d 73  |032400....kube-s|
		00000040  79 73 74 65 6d 22 00 2a  24 39 65 36 38 38 61 61  |ystem".*$9e688aa|
		00000050  65 2d 30 39 64 61 2d 34  62 35 63 2d 62 61 34 64  |e-09da-4b5c-ba4d|
		00000060  2d 64 65 36 61 61 36 34  63 62 33 34 65 32 04 31  |-de6aa64cb34e2.1|
		00000070  38 36 36 38 00 42 08 08  e6 de a7 bd 06 10 00 5a  |8668.B.........Z|
		00000080  1b 0a 09 63 6f 6d 70 6f  6e 65 6e 74 12 0e 6b 75  |...component..ku|
		00000090  62 65 2d 61 70 69 73 65  72 76 65 72 5a 15 0a 04  |be-apiserverZ...|
		000000a0  74 69 65 72 12 0d 63 6f  6e 74 72 6f 6c 2d 70 6c  |tier..control-pl|
		000000b0  61 6e 65 62 56 0a 3f 6b  75 62 65 61 64 6d 2e 6b  |anebV.?kubeadm.k|
		000000c0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 6b 75 62  |ubernetes.io/ku [truncated 32856 chars]
	 >
	I0210 12:25:42.999959    5644 type.go:168] "Request Body" body=""
	I0210 12:25:43.000034    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:25:43.000034    5644 round_trippers.go:476] Request Headers:
	I0210 12:25:43.000052    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:25:43.000088    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:25:43.002027    5644 round_trippers.go:581] Response Status: 200 OK in 1 milliseconds
	I0210 12:25:43.002027    5644 round_trippers.go:584] Response Headers:
	I0210 12:25:43.002027    5644 round_trippers.go:587]     Audit-Id: 764e1440-a72c-442a-9b52-b86169ecb8ef
	I0210 12:25:43.002027    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:25:43.002027    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:25:43.002027    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:25:43.002027    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:25:43.002027    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:25:43 GMT
	I0210 12:25:43.002027    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d4 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 39  33 35 38 00 42 08 08 86  |1b262.19358.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22276 chars]
	 >
	I0210 12:25:43.002027    5644 pod_ready.go:93] pod "kube-apiserver-multinode-032400" in "kube-system" namespace has status "Ready":"True"
	I0210 12:25:43.002027    5644 pod_ready.go:82] duration metric: took 5.3953ms for pod "kube-apiserver-multinode-032400" in "kube-system" namespace to be "Ready" ...
	I0210 12:25:43.002027    5644 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-multinode-032400" in "kube-system" namespace to be "Ready" ...
	I0210 12:25:43.002027    5644 type.go:168] "Request Body" body=""
	I0210 12:25:43.002027    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-032400
	I0210 12:25:43.002027    5644 round_trippers.go:476] Request Headers:
	I0210 12:25:43.002027    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:25:43.002027    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:25:43.005214    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:25:43.005214    5644 round_trippers.go:584] Response Headers:
	I0210 12:25:43.005214    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:25:43.005214    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:25:43.005214    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:25:43.005214    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:25:43.005214    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:25:43 GMT
	I0210 12:25:43.005538    5644 round_trippers.go:587]     Audit-Id: 7ee4de1b-71cf-4355-8593-45069b93f763
	I0210 12:25:43.005810    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  df 31 0a 9b 1d 0a 28 6b  75 62 65 2d 63 6f 6e 74  |.1....(kube-cont|
		00000020  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 2d 6d  |roller-manager-m|
		00000030  75 6c 74 69 6e 6f 64 65  2d 30 33 32 34 30 30 12  |ultinode-032400.|
		00000040  00 1a 0b 6b 75 62 65 2d  73 79 73 74 65 6d 22 00  |...kube-system".|
		00000050  2a 24 63 33 65 35 30 35  34 30 2d 64 64 36 36 2d  |*$c3e50540-dd66-|
		00000060  34 37 61 30 2d 61 34 33  33 2d 36 32 32 64 66 35  |47a0-a433-622df5|
		00000070  39 66 62 34 34 31 32 04  31 38 38 32 38 00 42 08  |9fb4412.18828.B.|
		00000080  08 8b d4 a7 bd 06 10 00  5a 24 0a 09 63 6f 6d 70  |........Z$..comp|
		00000090  6f 6e 65 6e 74 12 17 6b  75 62 65 2d 63 6f 6e 74  |onent..kube-cont|
		000000a0  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 5a 15  |roller-managerZ.|
		000000b0  0a 04 74 69 65 72 12 0d  63 6f 6e 74 72 6f 6c 2d  |..tier..control-|
		000000c0  70 6c 61 6e 65 62 3d 0a  19 6b 75 62 65 72 6e 65  |planeb=..kubern [truncated 30565 chars]
	 >
	I0210 12:25:43.005998    5644 type.go:168] "Request Body" body=""
	I0210 12:25:43.006060    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:25:43.006060    5644 round_trippers.go:476] Request Headers:
	I0210 12:25:43.006060    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:25:43.006124    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:25:43.008437    5644 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0210 12:25:43.008526    5644 round_trippers.go:584] Response Headers:
	I0210 12:25:43.008526    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:25:43.008526    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:25:43.008526    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:25:43.008526    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:25:43 GMT
	I0210 12:25:43.008526    5644 round_trippers.go:587]     Audit-Id: d394fb82-5d3b-4969-abc1-f95d81c3f240
	I0210 12:25:43.008526    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:25:43.008720    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d4 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 39  33 35 38 00 42 08 08 86  |1b262.19358.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22276 chars]
	 >
	I0210 12:25:43.008720    5644 pod_ready.go:93] pod "kube-controller-manager-multinode-032400" in "kube-system" namespace has status "Ready":"True"
	I0210 12:25:43.008720    5644 pod_ready.go:82] duration metric: took 6.693ms for pod "kube-controller-manager-multinode-032400" in "kube-system" namespace to be "Ready" ...
	I0210 12:25:43.008720    5644 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-rrh82" in "kube-system" namespace to be "Ready" ...
	I0210 12:25:43.008720    5644 type.go:168] "Request Body" body=""
	I0210 12:25:43.168958    5644 request.go:661] Waited for 160.2359ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rrh82
	I0210 12:25:43.168958    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rrh82
	I0210 12:25:43.168958    5644 round_trippers.go:476] Request Headers:
	I0210 12:25:43.168958    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:25:43.168958    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:25:43.173132    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:25:43.173132    5644 round_trippers.go:584] Response Headers:
	I0210 12:25:43.173132    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:25:43.173132    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:25:43.173132    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:25:43.173132    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:25:43.173132    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:25:43 GMT
	I0210 12:25:43.173132    5644 round_trippers.go:587]     Audit-Id: 712c075c-8954-41d5-9aa1-918e0bd9775e
	I0210 12:25:43.173132    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  91 26 0a c1 14 0a 10 6b  75 62 65 2d 70 72 6f 78  |.&.....kube-prox|
		00000020  79 2d 72 72 68 38 32 12  0b 6b 75 62 65 2d 70 72  |y-rrh82..kube-pr|
		00000030  6f 78 79 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |oxy-..kube-syste|
		00000040  6d 22 00 2a 24 39 61 64  37 66 32 38 31 2d 66 30  |m".*$9ad7f281-f0|
		00000050  32 32 2d 34 66 33 62 2d  62 32 30 36 2d 33 39 63  |22-4f3b-b206-39c|
		00000060  65 34 32 37 31 33 63 66  39 32 04 31 38 34 34 38  |e42713cf92.18448|
		00000070  00 42 08 08 92 d4 a7 bd  06 10 00 5a 26 0a 18 63  |.B.........Z&..c|
		00000080  6f 6e 74 72 6f 6c 6c 65  72 2d 72 65 76 69 73 69  |ontroller-revisi|
		00000090  6f 6e 2d 68 61 73 68 12  0a 35 36 36 64 37 62 39  |on-hash..566d7b9|
		000000a0  66 38 35 5a 15 0a 07 6b  38 73 2d 61 70 70 12 0a  |f85Z...k8s-app..|
		000000b0  6b 75 62 65 2d 70 72 6f  78 79 5a 1c 0a 17 70 6f  |kube-proxyZ...po|
		000000c0  64 2d 74 65 6d 70 6c 61  74 65 2d 67 65 6e 65 72  |d-template-gene [truncated 23220 chars]
	 >
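
The "Waited for ... due to client-side throttling, not priority and fairness" lines come from client-go's token-bucket rate limiter, which kicks in when the pod-plus-node GET pairs above arrive faster than the configured QPS. The limiter is set on rest.Config; the values in this sketch are illustrative, not minikube's defaults:

package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// newClient raises the client-side QPS/Burst that, when exceeded, produce
// the request.go:661 throttling waits logged above.
func newClient(cfg *rest.Config) (*kubernetes.Clientset, error) {
	cfg.QPS = 50    // steady-state requests per second
	cfg.Burst = 100 // short bursts tolerated above QPS
	return kubernetes.NewForConfig(cfg)
}
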
	I0210 12:25:43.173900    5644 type.go:168] "Request Body" body=""
	I0210 12:25:43.369401    5644 request.go:661] Waited for 195.4287ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:25:43.369401    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:25:43.369401    5644 round_trippers.go:476] Request Headers:
	I0210 12:25:43.369401    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:25:43.369401    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:25:43.373555    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:25:43.373555    5644 round_trippers.go:584] Response Headers:
	I0210 12:25:43.373555    5644 round_trippers.go:587]     Audit-Id: e0f074fb-e763-4414-9b1f-3cd7688c9edc
	I0210 12:25:43.373555    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:25:43.373555    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:25:43.373555    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:25:43.373555    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:25:43.373555    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:25:43 GMT
	I0210 12:25:43.373555    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d4 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 39  33 35 38 00 42 08 08 86  |1b262.19358.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22276 chars]
	 >
	I0210 12:25:43.373555    5644 pod_ready.go:93] pod "kube-proxy-rrh82" in "kube-system" namespace has status "Ready":"True"
	I0210 12:25:43.373555    5644 pod_ready.go:82] duration metric: took 364.8304ms for pod "kube-proxy-rrh82" in "kube-system" namespace to be "Ready" ...
	I0210 12:25:43.373555    5644 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-tbtqd" in "kube-system" namespace to be "Ready" ...
	I0210 12:25:43.373555    5644 type.go:168] "Request Body" body=""
	I0210 12:25:43.568877    5644 request.go:661] Waited for 195.3203ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tbtqd
	I0210 12:25:43.569084    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tbtqd
	I0210 12:25:43.569084    5644 round_trippers.go:476] Request Headers:
	I0210 12:25:43.569084    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:25:43.569084    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:25:43.573818    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:25:43.573818    5644 round_trippers.go:584] Response Headers:
	I0210 12:25:43.573818    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:25:43.573818    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:25:43 GMT
	I0210 12:25:43.573818    5644 round_trippers.go:587]     Audit-Id: 12e60692-567a-4bb9-b87e-fc9f5e88f78f
	I0210 12:25:43.573818    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:25:43.573818    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:25:43.573818    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:25:43.573818    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  aa 26 0a c3 15 0a 10 6b  75 62 65 2d 70 72 6f 78  |.&.....kube-prox|
		00000020  79 2d 74 62 74 71 64 12  0b 6b 75 62 65 2d 70 72  |y-tbtqd..kube-pr|
		00000030  6f 78 79 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |oxy-..kube-syste|
		00000040  6d 22 00 2a 24 62 64 66  38 63 62 31 30 2d 30 35  |m".*$bdf8cb10-05|
		00000050  62 65 2d 34 36 30 62 2d  61 39 63 36 2d 62 63 35  |be-460b-a9c6-bc5|
		00000060  31 65 61 38 38 34 32 36  38 32 04 31 37 34 32 38  |1ea8842682.17428|
		00000070  00 42 08 08 e9 d7 a7 bd  06 10 00 5a 26 0a 18 63  |.B.........Z&..c|
		00000080  6f 6e 74 72 6f 6c 6c 65  72 2d 72 65 76 69 73 69  |ontroller-revisi|
		00000090  6f 6e 2d 68 61 73 68 12  0a 35 36 36 64 37 62 39  |on-hash..566d7b9|
		000000a0  66 38 35 5a 15 0a 07 6b  38 73 2d 61 70 70 12 0a  |f85Z...k8s-app..|
		000000b0  6b 75 62 65 2d 70 72 6f  78 79 5a 1c 0a 17 70 6f  |kube-proxyZ...po|
		000000c0  64 2d 74 65 6d 70 6c 61  74 65 2d 67 65 6e 65 72  |d-template-gene [truncated 23308 chars]
	 >
	I0210 12:25:43.574540    5644 type.go:168] "Request Body" body=""
	I0210 12:25:43.768931    5644 request.go:661] Waited for 194.3887ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.129.181:8443/api/v1/nodes/multinode-032400-m03
	I0210 12:25:43.768931    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400-m03
	I0210 12:25:43.768931    5644 round_trippers.go:476] Request Headers:
	I0210 12:25:43.768931    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:25:43.768931    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:25:43.774161    5644 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 12:25:43.774253    5644 round_trippers.go:584] Response Headers:
	I0210 12:25:43.774253    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:25:43.774335    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:25:43.774355    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:25:43.774355    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:25:43.774355    5644 round_trippers.go:587]     Content-Length: 3883
	I0210 12:25:43.774355    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:25:43 GMT
	I0210 12:25:43.774355    5644 round_trippers.go:587]     Audit-Id: 7b52aa9a-de2b-43f8-93a1-e7960612a5dc
	I0210 12:25:43.774617    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 94 1e 0a eb 12 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 33 12 00 1a 00  |e-032400-m03....|
		00000030  22 00 2a 24 65 33 35 38  36 61 30 65 2d 35 36 63  |".*$e3586a0e-56c|
		00000040  30 2d 34 65 34 39 2d 39  64 64 33 2d 38 33 65 35  |0-4e49-9dd3-83e5|
		00000050  32 39 63 66 65 35 63 34  32 04 31 38 35 34 38 00  |29cfe5c42.18548.|
		00000060  42 08 08 db dc a7 bd 06  10 00 5a 20 0a 17 62 65  |B.........Z ..be|
		00000070  74 61 2e 6b 75 62 65 72  6e 65 74 65 73 2e 69 6f  |ta.kubernetes.io|
		00000080  2f 61 72 63 68 12 05 61  6d 64 36 34 5a 1e 0a 15  |/arch..amd64Z...|
		00000090  62 65 74 61 2e 6b 75 62  65 72 6e 65 74 65 73 2e  |beta.kubernetes.|
		000000a0  69 6f 2f 6f 73 12 05 6c  69 6e 75 78 5a 1b 0a 12  |io/os..linuxZ...|
		000000b0  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 61 72  |kubernetes.io/ar|
		000000c0  63 68 12 05 61 6d 64 36  34 5a 2e 0a 16 6b 75 62  |ch..amd64Z...ku [truncated 18168 chars]
	 >
	I0210 12:25:43.774784    5644 pod_ready.go:98] node "multinode-032400-m03" hosting pod "kube-proxy-tbtqd" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-032400-m03" has status "Ready":"Unknown"
	I0210 12:25:43.774852    5644 pod_ready.go:82] duration metric: took 401.2929ms for pod "kube-proxy-tbtqd" in "kube-system" namespace to be "Ready" ...
	E0210 12:25:43.774852    5644 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-032400-m03" hosting pod "kube-proxy-tbtqd" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-032400-m03" has status "Ready":"Unknown"
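
The WaitExtra error above shows the gate at work: kube-proxy-tbtqd is skipped because its hosting node multinode-032400-m03 reports "Ready":"Unknown". A sketch of that node-gating check under the same client-go assumptions as before, not minikube's exact implementation:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// hostNodeReady fetches the node named in pod.Spec.NodeName and reports
// whether its NodeReady condition is "True"; a pod on a not-Ready node
// is skipped rather than waited on, as logged above.
func hostNodeReady(ctx context.Context, cs kubernetes.Interface, pod *corev1.Pod) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(ctx, pod.Spec.NodeName, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, fmt.Errorf("node %q has no Ready condition", node.Name)
}
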
	I0210 12:25:43.774921    5644 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-xltxj" in "kube-system" namespace to be "Ready" ...
	I0210 12:25:43.775058    5644 type.go:168] "Request Body" body=""
	I0210 12:25:43.968879    5644 request.go:661] Waited for 193.7779ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xltxj
	I0210 12:25:43.968879    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xltxj
	I0210 12:25:43.968879    5644 round_trippers.go:476] Request Headers:
	I0210 12:25:43.968879    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:25:43.968879    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:25:43.973860    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:25:43.974005    5644 round_trippers.go:584] Response Headers:
	I0210 12:25:43.974077    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:25:43.974077    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:25:43.974077    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:25:43.974077    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:25:43 GMT
	I0210 12:25:43.974077    5644 round_trippers.go:587]     Audit-Id: 4b5d66c8-083d-4e8c-8f15-62926090b727
	I0210 12:25:43.974077    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:25:43.974077    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  ab 25 0a c1 14 0a 10 6b  75 62 65 2d 70 72 6f 78  |.%.....kube-prox|
		00000020  79 2d 78 6c 74 78 6a 12  0b 6b 75 62 65 2d 70 72  |y-xltxj..kube-pr|
		00000030  6f 78 79 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |oxy-..kube-syste|
		00000040  6d 22 00 2a 24 39 61 35  65 35 38 62 63 2d 35 34  |m".*$9a5e58bc-54|
		00000050  62 31 2d 34 33 62 39 2d  61 38 38 39 2d 30 64 35  |b1-43b9-a889-0d5|
		00000060  30 64 34 33 35 61 66 38  33 32 04 32 31 33 38 38  |0d435af832.21388|
		00000070  00 42 08 08 d0 d5 a7 bd  06 10 00 5a 26 0a 18 63  |.B.........Z&..c|
		00000080  6f 6e 74 72 6f 6c 6c 65  72 2d 72 65 76 69 73 69  |ontroller-revisi|
		00000090  6f 6e 2d 68 61 73 68 12  0a 35 36 36 64 37 62 39  |on-hash..566d7b9|
		000000a0  66 38 35 5a 15 0a 07 6b  38 73 2d 61 70 70 12 0a  |f85Z...k8s-app..|
		000000b0  6b 75 62 65 2d 70 72 6f  78 79 5a 1c 0a 17 70 6f  |kube-proxyZ...po|
		000000c0  64 2d 74 65 6d 70 6c 61  74 65 2d 67 65 6e 65 72  |d-template-gene [truncated 22740 chars]
	 >
	I0210 12:25:43.974782    5644 type.go:168] "Request Body" body=""
	I0210 12:25:44.169599    5644 request.go:661] Waited for 194.814ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.129.181:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:25:44.169599    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:25:44.169599    5644 round_trippers.go:476] Request Headers:
	I0210 12:25:44.169599    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:25:44.169599    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:25:44.174050    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:25:44.174125    5644 round_trippers.go:584] Response Headers:
	I0210 12:25:44.174125    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:25:44.174158    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:25:44.174158    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:25:44.174158    5644 round_trippers.go:587]     Content-Length: 3520
	I0210 12:25:44.174158    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:25:44 GMT
	I0210 12:25:44.174158    5644 round_trippers.go:587]     Audit-Id: ec0dd215-6fe9-45b0-8feb-6cbb3e83bd31
	I0210 12:25:44.174158    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:25:44.174424    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 a9 1b 0a b0 0f 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 31 62 61 64  31 61 35 35 2d 65 61 64  |".*$1bad1a55-ead|
		00000040  30 2d 34 66 35 62 2d 61  36 33 62 2d 66 66 66 30  |0-4f5b-a63b-fff0|
		00000050  61 32 35 34 66 33 32 62  32 04 32 31 36 37 38 00  |a254f32b2.21678.|
		00000060  42 08 08 b6 e0 a7 bd 06  10 00 5a 20 0a 17 62 65  |B.........Z ..be|
		00000070  74 61 2e 6b 75 62 65 72  6e 65 74 65 73 2e 69 6f  |ta.kubernetes.io|
		00000080  2f 61 72 63 68 12 05 61  6d 64 36 34 5a 1e 0a 15  |/arch..amd64Z...|
		00000090  62 65 74 61 2e 6b 75 62  65 72 6e 65 74 65 73 2e  |beta.kubernetes.|
		000000a0  69 6f 2f 6f 73 12 05 6c  69 6e 75 78 5a 1b 0a 12  |io/os..linuxZ...|
		000000b0  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 61 72  |kubernetes.io/ar|
		000000c0  63 68 12 05 61 6d 64 36  34 5a 2e 0a 16 6b 75 62  |ch..amd64Z...ku [truncated 16356 chars]
	 >
	I0210 12:25:44.174632    5644 pod_ready.go:93] pod "kube-proxy-xltxj" in "kube-system" namespace has status "Ready":"True"
	I0210 12:25:44.174632    5644 pod_ready.go:82] duration metric: took 399.7062ms for pod "kube-proxy-xltxj" in "kube-system" namespace to be "Ready" ...
	I0210 12:25:44.174632    5644 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-multinode-032400" in "kube-system" namespace to be "Ready" ...
	I0210 12:25:44.174801    5644 type.go:168] "Request Body" body=""
	I0210 12:25:44.368937    5644 request.go:661] Waited for 194.1346ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-032400
	I0210 12:25:44.368937    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-032400
	I0210 12:25:44.368937    5644 round_trippers.go:476] Request Headers:
	I0210 12:25:44.368937    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:25:44.368937    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:25:44.374302    5644 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 12:25:44.374457    5644 round_trippers.go:584] Response Headers:
	I0210 12:25:44.374457    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:25:44.374457    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:25:44.374457    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:25:44 GMT
	I0210 12:25:44.374457    5644 round_trippers.go:587]     Audit-Id: b0045d61-8652-4a0d-9d67-7a5b83b426d6
	I0210 12:25:44.374457    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:25:44.374457    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:25:44.374725    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  ea 23 0a 83 18 0a 1f 6b  75 62 65 2d 73 63 68 65  |.#.....kube-sche|
		00000020  64 75 6c 65 72 2d 6d 75  6c 74 69 6e 6f 64 65 2d  |duler-multinode-|
		00000030  30 33 32 34 30 30 12 00  1a 0b 6b 75 62 65 2d 73  |032400....kube-s|
		00000040  79 73 74 65 6d 22 00 2a  24 63 66 62 63 62 66 33  |ystem".*$cfbcbf3|
		00000050  37 2d 36 35 63 65 2d 34  62 32 31 2d 39 66 32 36  |7-65ce-4b21-9f26|
		00000060  2d 31 38 64 61 66 63 36  65 34 34 38 30 32 04 31  |-18dafc6e44802.1|
		00000070  38 37 38 38 00 42 08 08  88 d4 a7 bd 06 10 00 5a  |8788.B.........Z|
		00000080  1b 0a 09 63 6f 6d 70 6f  6e 65 6e 74 12 0e 6b 75  |...component..ku|
		00000090  62 65 2d 73 63 68 65 64  75 6c 65 72 5a 15 0a 04  |be-schedulerZ...|
		000000a0  74 69 65 72 12 0d 63 6f  6e 74 72 6f 6c 2d 70 6c  |tier..control-pl|
		000000b0  61 6e 65 62 3d 0a 19 6b  75 62 65 72 6e 65 74 65  |aneb=..kubernete|
		000000c0  73 2e 69 6f 2f 63 6f 6e  66 69 67 2e 68 61 73 68  |s.io/config.has [truncated 21728 chars]
	 >
	I0210 12:25:44.374994    5644 type.go:168] "Request Body" body=""
	I0210 12:25:44.569521    5644 request.go:661] Waited for 194.5248ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:25:44.569521    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:25:44.569521    5644 round_trippers.go:476] Request Headers:
	I0210 12:25:44.569521    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:25:44.569521    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:25:44.573300    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:25:44.574228    5644 round_trippers.go:584] Response Headers:
	I0210 12:25:44.574228    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:25:44.574228    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:25:44.574228    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:25:44.574228    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:25:44 GMT
	I0210 12:25:44.574228    5644 round_trippers.go:587]     Audit-Id: 37a316d4-24c6-4f51-8f41-8096cf64635e
	I0210 12:25:44.574228    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:25:44.574495    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d4 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 39  33 35 38 00 42 08 08 86  |1b262.19358.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22276 chars]
	 >
	I0210 12:25:44.574734    5644 pod_ready.go:93] pod "kube-scheduler-multinode-032400" in "kube-system" namespace has status "Ready":"True"
	I0210 12:25:44.574734    5644 pod_ready.go:82] duration metric: took 400.0976ms for pod "kube-scheduler-multinode-032400" in "kube-system" namespace to be "Ready" ...
	I0210 12:25:44.574734    5644 pod_ready.go:39] duration metric: took 1.6013107s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0210 12:25:44.574842    5644 system_svc.go:44] waiting for kubelet service to be running ....
	I0210 12:25:44.583512    5644 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0210 12:25:44.609128    5644 system_svc.go:56] duration metric: took 34.2859ms WaitForService to wait for kubelet
	I0210 12:25:44.609218    5644 kubeadm.go:582] duration metric: took 16.877264s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0210 12:25:44.609218    5644 node_conditions.go:102] verifying NodePressure condition ...
	I0210 12:25:44.609400    5644 type.go:204] "Request Body" body=""
	I0210 12:25:44.769309    5644 request.go:661] Waited for 159.9073ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.129.181:8443/api/v1/nodes
	I0210 12:25:44.769309    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes
	I0210 12:25:44.769309    5644 round_trippers.go:476] Request Headers:
	I0210 12:25:44.769309    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:25:44.769309    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:25:44.773865    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:25:44.774474    5644 round_trippers.go:584] Response Headers:
	I0210 12:25:44.774474    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:25:44.774474    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:25:44 GMT
	I0210 12:25:44.774474    5644 round_trippers.go:587]     Audit-Id: b58c6e28-e6ee-4252-ac75-5be0122d32fb
	I0210 12:25:44.774474    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:25:44.774474    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:25:44.774474    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:25:44.774888    5644 type.go:204] "Response Body" body=<
		00000000  6b 38 73 00 0a 0e 0a 02  76 31 12 08 4e 6f 64 65  |k8s.....v1..Node|
		00000010  4c 69 73 74 12 a6 5e 0a  0a 0a 00 12 04 32 31 37  |List..^......217|
		00000020  31 1a 00 12 d4 24 0a f8  11 0a 10 6d 75 6c 74 69  |1....$.....multi|
		00000030  6e 6f 64 65 2d 30 33 32  34 30 30 12 00 1a 00 22  |node-032400...."|
		00000040  00 2a 24 61 30 38 30 31  35 65 66 2d 65 35 32 30  |.*$a08015ef-e520|
		00000050  2d 34 31 63 62 2d 61 65  61 30 2d 31 64 39 63 38  |-41cb-aea0-1d9c8|
		00000060  31 65 30 31 62 32 36 32  04 31 39 33 35 38 00 42  |1e01b262.19358.B|
		00000070  08 08 86 d4 a7 bd 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000080  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000090  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		000000a0  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000b0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000c0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/ar [truncated 58764 chars]
	 >
	I0210 12:25:44.775560    5644 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0210 12:25:44.775560    5644 node_conditions.go:123] node cpu capacity is 2
	I0210 12:25:44.775560    5644 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0210 12:25:44.775560    5644 node_conditions.go:123] node cpu capacity is 2
	I0210 12:25:44.775661    5644 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0210 12:25:44.775661    5644 node_conditions.go:123] node cpu capacity is 2
	I0210 12:25:44.775661    5644 node_conditions.go:105] duration metric: took 166.4411ms to run NodePressure ...
	I0210 12:25:44.775661    5644 start.go:241] waiting for startup goroutines ...
	I0210 12:25:44.775661    5644 start.go:255] writing updated cluster config ...
	I0210 12:25:44.779351    5644 out.go:201] 
	I0210 12:25:44.782404    5644 config.go:182] Loaded profile config "ha-335100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0210 12:25:44.795786    5644 config.go:182] Loaded profile config "multinode-032400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0210 12:25:44.795786    5644 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-032400\config.json ...
	I0210 12:25:44.800976    5644 out.go:177] * Starting "multinode-032400-m03" worker node in "multinode-032400" cluster
	I0210 12:25:44.802546    5644 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime docker
	I0210 12:25:44.802546    5644 cache.go:56] Caching tarball of preloaded images
	I0210 12:25:44.802546    5644 preload.go:172] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0210 12:25:44.803513    5644 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on docker
	I0210 12:25:44.803513    5644 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-032400\config.json ...
	I0210 12:25:44.812171    5644 start.go:360] acquireMachinesLock for multinode-032400-m03: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0210 12:25:44.812171    5644 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-032400-m03"
	I0210 12:25:44.813242    5644 start.go:96] Skipping create...Using existing machine configuration
	I0210 12:25:44.813242    5644 fix.go:54] fixHost starting: m03
	I0210 12:25:44.813346    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400-m03 ).state
	I0210 12:25:46.765529    5644 main.go:141] libmachine: [stdout =====>] : Off
	
	I0210 12:25:46.765529    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:25:46.765529    5644 fix.go:112] recreateIfNeeded on multinode-032400-m03: state=Stopped err=<nil>
	W0210 12:25:46.765529    5644 fix.go:138] unexpected machine state, will restart: <nil>
	I0210 12:25:46.768334    5644 out.go:177] * Restarting existing hyperv VM for "multinode-032400-m03" ...
	I0210 12:25:46.770478    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-032400-m03
	I0210 12:25:49.608342    5644 main.go:141] libmachine: [stdout =====>] : 
	I0210 12:25:49.608342    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:25:49.608342    5644 main.go:141] libmachine: Waiting for host to start...
	I0210 12:25:49.608342    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400-m03 ).state
	I0210 12:25:51.683388    5644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:25:51.683554    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:25:51.683616    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400-m03 ).networkadapters[0]).ipaddresses[0]
	I0210 12:25:54.029881    5644 main.go:141] libmachine: [stdout =====>] : 
	I0210 12:25:54.029881    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:25:55.030941    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400-m03 ).state
	I0210 12:25:57.017922    5644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:25:57.017922    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:25:57.017922    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400-m03 ).networkadapters[0]).ipaddresses[0]
	I0210 12:25:59.312671    5644 main.go:141] libmachine: [stdout =====>] : 
	I0210 12:25:59.313149    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:26:00.313515    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400-m03 ).state
	I0210 12:26:02.325460    5644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:26:02.325460    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:26:02.325460    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400-m03 ).networkadapters[0]).ipaddresses[0]

                                                
                                                
** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-windows-amd64.exe node list -p multinode-032400" : exit status 1
multinode_test.go:331: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-032400
multinode_test.go:331: (dbg) Non-zero exit: out/minikube-windows-amd64.exe node list -p multinode-032400: context deadline exceeded (0s)
multinode_test.go:333: failed to run node list. args "out/minikube-windows-amd64.exe node list -p multinode-032400" : context deadline exceeded
multinode_test.go:338: reported node list is not the same after restart. Before restart: multinode-032400	172.29.136.201
multinode-032400-m02	172.29.143.51
multinode-032400-m03	172.29.129.10

                                                
                                                
After restart: 
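The check that fails here is a simple set comparison: the test records the name/IP pairs printed by "minikube node list" before the restart and expects the same pairs afterward; because the post-restart "node list" invocation hit the context deadline, the "after" side is empty. A minimal Go sketch of that comparison, assuming one "<name><tab><ip>" pair per output line as shown above (illustrative only; this is not the actual helper in multinode_test.go):

	package main

	import (
		"fmt"
		"strings"
	)

	// parseNodeList turns "minikube node list"-style output (one
	// "<name>\t<ip>" pair per line) into a name->IP map.
	// Empty input yields an empty map.
	func parseNodeList(out string) map[string]string {
		nodes := map[string]string{}
		for _, line := range strings.Split(strings.TrimSpace(out), "\n") {
			if fields := strings.Fields(line); len(fields) == 2 {
				nodes[fields[0]] = fields[1]
			}
		}
		return nodes
	}

	func main() {
		before := "multinode-032400\t172.29.136.201\n" +
			"multinode-032400-m02\t172.29.143.51\n" +
			"multinode-032400-m03\t172.29.129.10\n"
		after := "" // the post-restart "node list" call returned nothing
		b, a := parseNodeList(before), parseNodeList(after)
		for name, ip := range b {
			if a[name] != ip {
				fmt.Printf("node %s (%s) missing or changed after restart\n", name, ip)
			}
		}
	}

Run against the captured output above, every one of the three nodes reports as missing, which matches the empty "After restart:" list in the assertion.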
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-032400 -n multinode-032400
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-032400 -n multinode-032400: (11.1789991s)
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-032400 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-032400 logs -n 25: (13.0639881s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                                           Args                                                           |     Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| cp      | multinode-032400 cp testdata\cp-test.txt                                                                                 | multinode-032400 | minikube5\jenkins | v1.35.0 | 10 Feb 25 12:10 UTC | 10 Feb 25 12:11 UTC |
	|         | multinode-032400-m02:/home/docker/cp-test.txt                                                                            |                  |                   |         |                     |                     |
	| ssh     | multinode-032400 ssh -n                                                                                                  | multinode-032400 | minikube5\jenkins | v1.35.0 | 10 Feb 25 12:11 UTC | 10 Feb 25 12:11 UTC |
	|         | multinode-032400-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-032400 cp multinode-032400-m02:/home/docker/cp-test.txt                                                        | multinode-032400 | minikube5\jenkins | v1.35.0 | 10 Feb 25 12:11 UTC | 10 Feb 25 12:11 UTC |
	|         | C:\Users\jenkins.minikube5\AppData\Local\Temp\TestMultiNodeserialCopyFile2256314567\001\cp-test_multinode-032400-m02.txt |                  |                   |         |                     |                     |
	| ssh     | multinode-032400 ssh -n                                                                                                  | multinode-032400 | minikube5\jenkins | v1.35.0 | 10 Feb 25 12:11 UTC | 10 Feb 25 12:11 UTC |
	|         | multinode-032400-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-032400 cp multinode-032400-m02:/home/docker/cp-test.txt                                                        | multinode-032400 | minikube5\jenkins | v1.35.0 | 10 Feb 25 12:11 UTC | 10 Feb 25 12:11 UTC |
	|         | multinode-032400:/home/docker/cp-test_multinode-032400-m02_multinode-032400.txt                                          |                  |                   |         |                     |                     |
	| ssh     | multinode-032400 ssh -n                                                                                                  | multinode-032400 | minikube5\jenkins | v1.35.0 | 10 Feb 25 12:11 UTC | 10 Feb 25 12:11 UTC |
	|         | multinode-032400-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-032400 ssh -n multinode-032400 sudo cat                                                                        | multinode-032400 | minikube5\jenkins | v1.35.0 | 10 Feb 25 12:11 UTC | 10 Feb 25 12:12 UTC |
	|         | /home/docker/cp-test_multinode-032400-m02_multinode-032400.txt                                                           |                  |                   |         |                     |                     |
	| cp      | multinode-032400 cp multinode-032400-m02:/home/docker/cp-test.txt                                                        | multinode-032400 | minikube5\jenkins | v1.35.0 | 10 Feb 25 12:12 UTC | 10 Feb 25 12:12 UTC |
	|         | multinode-032400-m03:/home/docker/cp-test_multinode-032400-m02_multinode-032400-m03.txt                                  |                  |                   |         |                     |                     |
	| ssh     | multinode-032400 ssh -n                                                                                                  | multinode-032400 | minikube5\jenkins | v1.35.0 | 10 Feb 25 12:12 UTC | 10 Feb 25 12:12 UTC |
	|         | multinode-032400-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-032400 ssh -n multinode-032400-m03 sudo cat                                                                    | multinode-032400 | minikube5\jenkins | v1.35.0 | 10 Feb 25 12:12 UTC | 10 Feb 25 12:12 UTC |
	|         | /home/docker/cp-test_multinode-032400-m02_multinode-032400-m03.txt                                                       |                  |                   |         |                     |                     |
	| cp      | multinode-032400 cp testdata\cp-test.txt                                                                                 | multinode-032400 | minikube5\jenkins | v1.35.0 | 10 Feb 25 12:12 UTC | 10 Feb 25 12:12 UTC |
	|         | multinode-032400-m03:/home/docker/cp-test.txt                                                                            |                  |                   |         |                     |                     |
	| ssh     | multinode-032400 ssh -n                                                                                                  | multinode-032400 | minikube5\jenkins | v1.35.0 | 10 Feb 25 12:12 UTC | 10 Feb 25 12:12 UTC |
	|         | multinode-032400-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-032400 cp multinode-032400-m03:/home/docker/cp-test.txt                                                        | multinode-032400 | minikube5\jenkins | v1.35.0 | 10 Feb 25 12:12 UTC | 10 Feb 25 12:13 UTC |
	|         | C:\Users\jenkins.minikube5\AppData\Local\Temp\TestMultiNodeserialCopyFile2256314567\001\cp-test_multinode-032400-m03.txt |                  |                   |         |                     |                     |
	| ssh     | multinode-032400 ssh -n                                                                                                  | multinode-032400 | minikube5\jenkins | v1.35.0 | 10 Feb 25 12:13 UTC | 10 Feb 25 12:13 UTC |
	|         | multinode-032400-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-032400 cp multinode-032400-m03:/home/docker/cp-test.txt                                                        | multinode-032400 | minikube5\jenkins | v1.35.0 | 10 Feb 25 12:13 UTC | 10 Feb 25 12:13 UTC |
	|         | multinode-032400:/home/docker/cp-test_multinode-032400-m03_multinode-032400.txt                                          |                  |                   |         |                     |                     |
	| ssh     | multinode-032400 ssh -n                                                                                                  | multinode-032400 | minikube5\jenkins | v1.35.0 | 10 Feb 25 12:13 UTC | 10 Feb 25 12:13 UTC |
	|         | multinode-032400-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-032400 ssh -n multinode-032400 sudo cat                                                                        | multinode-032400 | minikube5\jenkins | v1.35.0 | 10 Feb 25 12:13 UTC | 10 Feb 25 12:13 UTC |
	|         | /home/docker/cp-test_multinode-032400-m03_multinode-032400.txt                                                           |                  |                   |         |                     |                     |
	| cp      | multinode-032400 cp multinode-032400-m03:/home/docker/cp-test.txt                                                        | multinode-032400 | minikube5\jenkins | v1.35.0 | 10 Feb 25 12:13 UTC | 10 Feb 25 12:13 UTC |
	|         | multinode-032400-m02:/home/docker/cp-test_multinode-032400-m03_multinode-032400-m02.txt                                  |                  |                   |         |                     |                     |
	| ssh     | multinode-032400 ssh -n                                                                                                  | multinode-032400 | minikube5\jenkins | v1.35.0 | 10 Feb 25 12:13 UTC | 10 Feb 25 12:14 UTC |
	|         | multinode-032400-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-032400 ssh -n multinode-032400-m02 sudo cat                                                                    | multinode-032400 | minikube5\jenkins | v1.35.0 | 10 Feb 25 12:14 UTC | 10 Feb 25 12:14 UTC |
	|         | /home/docker/cp-test_multinode-032400-m03_multinode-032400-m02.txt                                                       |                  |                   |         |                     |                     |
	| node    | multinode-032400 node stop m03                                                                                           | multinode-032400 | minikube5\jenkins | v1.35.0 | 10 Feb 25 12:14 UTC | 10 Feb 25 12:14 UTC |
	| node    | multinode-032400 node start                                                                                              | multinode-032400 | minikube5\jenkins | v1.35.0 | 10 Feb 25 12:15 UTC | 10 Feb 25 12:17 UTC |
	|         | m03 -v=7 --alsologtostderr                                                                                               |                  |                   |         |                     |                     |
	| node    | list -p multinode-032400                                                                                                 | multinode-032400 | minikube5\jenkins | v1.35.0 | 10 Feb 25 12:18 UTC |                     |
	| stop    | -p multinode-032400                                                                                                      | multinode-032400 | minikube5\jenkins | v1.35.0 | 10 Feb 25 12:18 UTC | 10 Feb 25 12:19 UTC |
	| start   | -p multinode-032400                                                                                                      | multinode-032400 | minikube5\jenkins | v1.35.0 | 10 Feb 25 12:19 UTC |                     |
	|         | --wait=true -v=8                                                                                                         |                  |                   |         |                     |                     |
	|         | --alsologtostderr                                                                                                        |                  |                   |         |                     |                     |
	|---------|--------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/10 12:19:57
	Running on machine: minikube5
	Binary: Built with gc go1.23.4 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0210 12:19:57.578884    5644 out.go:345] Setting OutFile to fd 1764 ...
	I0210 12:19:57.631465    5644 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 12:19:57.631465    5644 out.go:358] Setting ErrFile to fd 780...
	I0210 12:19:57.631465    5644 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 12:19:57.650332    5644 out.go:352] Setting JSON to false
	I0210 12:19:57.653542    5644 start.go:129] hostinfo: {"hostname":"minikube5","uptime":191337,"bootTime":1738998660,"procs":187,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.5440 Build 19045.5440","kernelVersion":"10.0.19045.5440 Build 19045.5440","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0210 12:19:57.653542    5644 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0210 12:19:57.707113    5644 out.go:177] * [multinode-032400] minikube v1.35.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5440 Build 19045.5440
	I0210 12:19:57.720802    5644 notify.go:220] Checking for updates...
	I0210 12:19:57.763178    5644 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0210 12:19:57.777975    5644 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0210 12:19:57.807721    5644 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0210 12:19:57.821042    5644 out.go:177]   - MINIKUBE_LOCATION=20385
	I0210 12:19:57.844719    5644 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0210 12:19:57.863282    5644 config.go:182] Loaded profile config "multinode-032400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0210 12:19:57.863581    5644 driver.go:394] Setting default libvirt URI to qemu:///system
	I0210 12:20:03.010019    5644 out.go:177] * Using the hyperv driver based on existing profile
	I0210 12:20:03.063199    5644 start.go:297] selected driver: hyperv
	I0210 12:20:03.063199    5644 start.go:901] validating driver "hyperv" against &{Name:multinode-032400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:multinode-032400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.29.136.201 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.29.143.51 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.29.129.10 Port:0 KubernetesVersion:v1.32.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 12:20:03.063582    5644 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0210 12:20:03.121424    5644 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0210 12:20:03.121424    5644 cni.go:84] Creating CNI manager for ""
	I0210 12:20:03.121424    5644 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0210 12:20:03.122045    5644 start.go:340] cluster config:
	{Name:multinode-032400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:multinode-032400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.29.136.201 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.29.143.51 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.29.129.10 Port:0 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 12:20:03.122045    5644 iso.go:125] acquiring lock: {Name:mkae681ee414e9275e9685c6bbf5080b17ead976 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0210 12:20:03.280775    5644 out.go:177] * Starting "multinode-032400" primary control-plane node in "multinode-032400" cluster
	I0210 12:20:03.311514    5644 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime docker
	I0210 12:20:03.311960    5644 preload.go:146] Found local preload: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4
	I0210 12:20:03.311960    5644 cache.go:56] Caching tarball of preloaded images
	I0210 12:20:03.312450    5644 preload.go:172] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0210 12:20:03.312630    5644 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on docker
	I0210 12:20:03.312630    5644 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-032400\config.json ...
	I0210 12:20:03.314913    5644 start.go:360] acquireMachinesLock for multinode-032400: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0210 12:20:03.315112    5644 start.go:364] duration metric: took 123.6µs to acquireMachinesLock for "multinode-032400"
	I0210 12:20:03.315200    5644 start.go:96] Skipping create...Using existing machine configuration
	I0210 12:20:03.315200    5644 fix.go:54] fixHost starting: 
	I0210 12:20:03.315907    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400 ).state
	I0210 12:20:05.914777    5644 main.go:141] libmachine: [stdout =====>] : Off
	
	I0210 12:20:05.915831    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:20:05.915897    5644 fix.go:112] recreateIfNeeded on multinode-032400: state=Stopped err=<nil>
	W0210 12:20:05.915897    5644 fix.go:138] unexpected machine state, will restart: <nil>
	I0210 12:20:05.928203    5644 out.go:177] * Restarting existing hyperv VM for "multinode-032400" ...
	I0210 12:20:05.960927    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-032400
	I0210 12:20:08.807587    5644 main.go:141] libmachine: [stdout =====>] : 
	I0210 12:20:08.807587    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:20:08.807587    5644 main.go:141] libmachine: Waiting for host to start...
	I0210 12:20:08.807587    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400 ).state
	I0210 12:20:10.852616    5644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:20:10.853048    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:20:10.853232    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400 ).networkadapters[0]).ipaddresses[0]
	I0210 12:20:13.163004    5644 main.go:141] libmachine: [stdout =====>] : 
	I0210 12:20:13.163004    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:20:14.164072    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400 ).state
	I0210 12:20:16.139165    5644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:20:16.139565    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:20:16.139565    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400 ).networkadapters[0]).ipaddresses[0]
	I0210 12:20:18.443931    5644 main.go:141] libmachine: [stdout =====>] : 
	I0210 12:20:18.443931    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:20:19.446011    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400 ).state
	I0210 12:20:21.451344    5644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:20:21.451732    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:20:21.451732    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400 ).networkadapters[0]).ipaddresses[0]
	I0210 12:20:23.783277    5644 main.go:141] libmachine: [stdout =====>] : 
	I0210 12:20:23.783338    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:20:24.783517    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400 ).state
	I0210 12:20:26.785238    5644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:20:26.785238    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:20:26.785295    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400 ).networkadapters[0]).ipaddresses[0]
	I0210 12:20:29.062641    5644 main.go:141] libmachine: [stdout =====>] : 
	I0210 12:20:29.062719    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:20:30.063394    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400 ).state
	I0210 12:20:32.026713    5644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:20:32.026713    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:20:32.027019    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400 ).networkadapters[0]).ipaddresses[0]
	I0210 12:20:34.495276    5644 main.go:141] libmachine: [stdout =====>] : 172.29.129.181
	
	I0210 12:20:34.495276    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:20:34.497278    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400 ).state
	I0210 12:20:36.475136    5644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:20:36.475136    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:20:36.475136    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400 ).networkadapters[0]).ipaddresses[0]
	I0210 12:20:38.802589    5644 main.go:141] libmachine: [stdout =====>] : 172.29.129.181
	
	I0210 12:20:38.802589    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:20:38.803140    5644 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-032400\config.json ...
	I0210 12:20:38.804948    5644 machine.go:93] provisionDockerMachine start ...
	I0210 12:20:38.805050    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400 ).state
	I0210 12:20:40.747623    5644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:20:40.747623    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:20:40.747623    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400 ).networkadapters[0]).ipaddresses[0]
	I0210 12:20:43.047020    5644 main.go:141] libmachine: [stdout =====>] : 172.29.129.181
	
	I0210 12:20:43.047020    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:20:43.051439    5644 main.go:141] libmachine: Using SSH client type: native
	I0210 12:20:43.051439    5644 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xde6ba0] 0xde96e0 <nil>  [] 0s} 172.29.129.181 22 <nil> <nil>}
	I0210 12:20:43.052013    5644 main.go:141] libmachine: About to run SSH command:
	hostname
	I0210 12:20:43.192843    5644 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0210 12:20:43.192843    5644 buildroot.go:166] provisioning hostname "multinode-032400"
	I0210 12:20:43.192843    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400 ).state
	I0210 12:20:45.142944    5644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:20:45.142944    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:20:45.143198    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400 ).networkadapters[0]).ipaddresses[0]
	I0210 12:20:47.456601    5644 main.go:141] libmachine: [stdout =====>] : 172.29.129.181
	
	I0210 12:20:47.456601    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:20:47.460733    5644 main.go:141] libmachine: Using SSH client type: native
	I0210 12:20:47.460733    5644 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xde6ba0] 0xde96e0 <nil>  [] 0s} 172.29.129.181 22 <nil> <nil>}
	I0210 12:20:47.460733    5644 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-032400 && echo "multinode-032400" | sudo tee /etc/hostname
	I0210 12:20:47.636991    5644 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-032400
	
	I0210 12:20:47.636991    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400 ).state
	I0210 12:20:49.588695    5644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:20:49.589077    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:20:49.589152    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400 ).networkadapters[0]).ipaddresses[0]
	I0210 12:20:51.921453    5644 main.go:141] libmachine: [stdout =====>] : 172.29.129.181
	
	I0210 12:20:51.921453    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:20:51.925341    5644 main.go:141] libmachine: Using SSH client type: native
	I0210 12:20:51.925823    5644 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xde6ba0] 0xde96e0 <nil>  [] 0s} 172.29.129.181 22 <nil> <nil>}
	I0210 12:20:51.925823    5644 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-032400' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-032400/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-032400' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0210 12:20:52.083308    5644 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0210 12:20:52.083417    5644 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0210 12:20:52.083417    5644 buildroot.go:174] setting up certificates
	I0210 12:20:52.083550    5644 provision.go:84] configureAuth start
	I0210 12:20:52.083550    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400 ).state
	I0210 12:20:54.063570    5644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:20:54.064485    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:20:54.064485    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400 ).networkadapters[0]).ipaddresses[0]
	I0210 12:20:56.374737    5644 main.go:141] libmachine: [stdout =====>] : 172.29.129.181
	
	I0210 12:20:56.375309    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:20:56.375404    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400 ).state
	I0210 12:20:58.325938    5644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:20:58.325938    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:20:58.326886    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400 ).networkadapters[0]).ipaddresses[0]
	I0210 12:21:00.674152    5644 main.go:141] libmachine: [stdout =====>] : 172.29.129.181
	
	I0210 12:21:00.674867    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:21:00.674867    5644 provision.go:143] copyHostCerts
	I0210 12:21:00.675016    5644 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0210 12:21:00.675090    5644 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0210 12:21:00.675090    5644 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0210 12:21:00.675090    5644 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0210 12:21:00.676388    5644 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0210 12:21:00.676560    5644 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0210 12:21:00.676560    5644 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0210 12:21:00.676796    5644 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0210 12:21:00.677631    5644 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0210 12:21:00.677785    5644 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0210 12:21:00.677864    5644 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0210 12:21:00.678113    5644 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1675 bytes)
	I0210 12:21:00.678940    5644 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-032400 san=[127.0.0.1 172.29.129.181 localhost minikube multinode-032400]
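provision.go:117 regenerates the Docker server certificate with the SAN list shown on the line above: the loopback address, the VM's current IP, and the local names. A hedged sketch of such a SAN-bearing certificate template using Go's crypto/x509 (self-signed here for brevity; the real server.pem is signed by the ca.pem/ca-key.pem pair):

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Illustrative key; minikube signs with its RSA CA pair instead.
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-032400"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile dump below
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// The SAN set from the provision.go:117 log line.
		DNSNames:    []string{"localhost", "minikube", "multinode-032400"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.29.129.181")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
}
```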
	I0210 12:21:00.904994    5644 provision.go:177] copyRemoteCerts
	I0210 12:21:00.912869    5644 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0210 12:21:00.912869    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400 ).state
	I0210 12:21:02.845039    5644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:21:02.845039    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:21:02.845703    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400 ).networkadapters[0]).ipaddresses[0]
	I0210 12:21:05.162268    5644 main.go:141] libmachine: [stdout =====>] : 172.29.129.181
	
	I0210 12:21:05.163187    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:21:05.163781    5644 sshutil.go:53] new ssh client: &{IP:172.29.129.181 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-032400\id_rsa Username:docker}
	I0210 12:21:05.271361    5644 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.3584436s)
	I0210 12:21:05.271481    5644 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0210 12:21:05.271636    5644 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0210 12:21:05.318273    5644 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0210 12:21:05.318273    5644 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0210 12:21:05.364194    5644 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0210 12:21:05.364637    5644 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0210 12:21:05.408966    5644 provision.go:87] duration metric: took 13.3252675s to configureAuth
	I0210 12:21:05.409045    5644 buildroot.go:189] setting minikube options for container-runtime
	I0210 12:21:05.409759    5644 config.go:182] Loaded profile config "multinode-032400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0210 12:21:05.409818    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400 ).state
	I0210 12:21:07.365428    5644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:21:07.365428    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:21:07.366119    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400 ).networkadapters[0]).ipaddresses[0]
	I0210 12:21:09.714377    5644 main.go:141] libmachine: [stdout =====>] : 172.29.129.181
	
	I0210 12:21:09.714377    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:21:09.718506    5644 main.go:141] libmachine: Using SSH client type: native
	I0210 12:21:09.718893    5644 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xde6ba0] 0xde96e0 <nil>  [] 0s} 172.29.129.181 22 <nil> <nil>}
	I0210 12:21:09.718893    5644 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0210 12:21:09.854166    5644 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0210 12:21:09.854231    5644 buildroot.go:70] root file system type: tmpfs
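buildroot.go:70 probes the guest's root filesystem type; `tmpfs` means the ISO rootfs lives in RAM, which is why the docker unit file has to be rewritten on every provisioning pass rather than assumed persistent. A sketch of the same probe run locally (assumes GNU coreutils, so a Linux host):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Equivalent of: df --output=fstype / | tail -n 1
	out, err := exec.Command("df", "--output=fstype", "/").Output()
	if err != nil {
		panic(err)
	}
	fields := strings.Fields(string(out))
	fstype := fields[len(fields)-1] // last field is the fstype of /
	fmt.Println("root filesystem type:", fstype)
}
```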
	I0210 12:21:09.854404    5644 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0210 12:21:09.854467    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400 ).state
	I0210 12:21:11.808474    5644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:21:11.808474    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:21:11.809408    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400 ).networkadapters[0]).ipaddresses[0]
	I0210 12:21:14.161928    5644 main.go:141] libmachine: [stdout =====>] : 172.29.129.181
	
	I0210 12:21:14.162319    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:21:14.165955    5644 main.go:141] libmachine: Using SSH client type: native
	I0210 12:21:14.166640    5644 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xde6ba0] 0xde96e0 <nil>  [] 0s} 172.29.129.181 22 <nil> <nil>}
	I0210 12:21:14.166640    5644 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0210 12:21:14.333386    5644 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0210 12:21:14.334268    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400 ).state
	I0210 12:21:16.282642    5644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:21:16.282642    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:21:16.282741    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400 ).networkadapters[0]).ipaddresses[0]
	I0210 12:21:18.624134    5644 main.go:141] libmachine: [stdout =====>] : 172.29.129.181
	
	I0210 12:21:18.624134    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:21:18.629267    5644 main.go:141] libmachine: Using SSH client type: native
	I0210 12:21:18.629645    5644 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xde6ba0] 0xde96e0 <nil>  [] 0s} 172.29.129.181 22 <nil> <nil>}
	I0210 12:21:18.629645    5644 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0210 12:21:21.134811    5644 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0210 12:21:21.134811    5644 machine.go:96] duration metric: took 42.329393s to provisionDockerMachine
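The `diff -u old new || { mv ...; daemon-reload; enable; restart; }` command above is the update-only-if-changed idiom: the rendered unit goes to docker.service.new, and the live unit is replaced and the daemon bounced only when the two differ (or, as here, when the live file does not exist yet). A rough Go equivalent of that idiom, using the paths from the log (run as a sketch, not minikube's actual code path):

```go
package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// installIfChanged replaces the live unit and restarts the service only when
// the freshly rendered unit differs from what is already installed.
func installIfChanged(newPath, livePath, service string) error {
	want, err := os.ReadFile(newPath)
	if err != nil {
		return err
	}
	have, err := os.ReadFile(livePath) // a missing live file counts as "different"
	if err == nil && bytes.Equal(want, have) {
		return os.Remove(newPath) // nothing to do
	}
	if err := os.Rename(newPath, livePath); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"systemctl", "daemon-reload"},
		{"systemctl", "enable", service},
		{"systemctl", "restart", service},
	} {
		if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
			return fmt.Errorf("%v: %v: %s", args, err, out)
		}
	}
	return nil
}

func main() {
	err := installIfChanged(
		"/lib/systemd/system/docker.service.new",
		"/lib/systemd/system/docker.service",
		"docker")
	if err != nil {
		fmt.Println("install failed:", err)
	}
}
```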
	I0210 12:21:21.134811    5644 start.go:293] postStartSetup for "multinode-032400" (driver="hyperv")
	I0210 12:21:21.134811    5644 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0210 12:21:21.143069    5644 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0210 12:21:21.143069    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400 ).state
	I0210 12:21:23.117764    5644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:21:23.117870    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:21:23.117870    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400 ).networkadapters[0]).ipaddresses[0]
	I0210 12:21:25.439954    5644 main.go:141] libmachine: [stdout =====>] : 172.29.129.181
	
	I0210 12:21:25.440879    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:21:25.440879    5644 sshutil.go:53] new ssh client: &{IP:172.29.129.181 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-032400\id_rsa Username:docker}
	I0210 12:21:25.561375    5644 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.4182566s)
	I0210 12:21:25.569498    5644 ssh_runner.go:195] Run: cat /etc/os-release
	I0210 12:21:25.576494    5644 command_runner.go:130] > NAME=Buildroot
	I0210 12:21:25.576494    5644 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0210 12:21:25.576494    5644 command_runner.go:130] > ID=buildroot
	I0210 12:21:25.576494    5644 command_runner.go:130] > VERSION_ID=2023.02.9
	I0210 12:21:25.576494    5644 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0210 12:21:25.576494    5644 info.go:137] Remote host: Buildroot 2023.02.9
	I0210 12:21:25.576494    5644 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0210 12:21:25.577114    5644 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0210 12:21:25.577230    5644 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\117642.pem -> 117642.pem in /etc/ssl/certs
	I0210 12:21:25.577668    5644 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\117642.pem -> /etc/ssl/certs/117642.pem
	I0210 12:21:25.586169    5644 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0210 12:21:25.604342    5644 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\117642.pem --> /etc/ssl/certs/117642.pem (1708 bytes)
	I0210 12:21:25.649490    5644 start.go:296] duration metric: took 4.514593s for postStartSetup
	I0210 12:21:25.649626    5644 fix.go:56] duration metric: took 1m22.3335121s for fixHost
	I0210 12:21:25.649667    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400 ).state
	I0210 12:21:27.613655    5644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:21:27.614667    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:21:27.614822    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400 ).networkadapters[0]).ipaddresses[0]
	I0210 12:21:29.966101    5644 main.go:141] libmachine: [stdout =====>] : 172.29.129.181
	
	I0210 12:21:29.966101    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:21:29.969670    5644 main.go:141] libmachine: Using SSH client type: native
	I0210 12:21:29.970260    5644 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xde6ba0] 0xde96e0 <nil>  [] 0s} 172.29.129.181 22 <nil> <nil>}
	I0210 12:21:29.970260    5644 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0210 12:21:30.105160    5644 main.go:141] libmachine: SSH cmd err, output: <nil>: 1739190090.106974362
	
	I0210 12:21:30.105160    5644 fix.go:216] guest clock: 1739190090.106974362
	I0210 12:21:30.105160    5644 fix.go:229] Guest: 2025-02-10 12:21:30.106974362 +0000 UTC Remote: 2025-02-10 12:21:25.6496267 +0000 UTC m=+88.153616101 (delta=4.457347662s)
	I0210 12:21:30.105160    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400 ).state
	I0210 12:21:32.052629    5644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:21:32.052629    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:21:32.053609    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400 ).networkadapters[0]).ipaddresses[0]
	I0210 12:21:34.387515    5644 main.go:141] libmachine: [stdout =====>] : 172.29.129.181
	
	I0210 12:21:34.388577    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:21:34.392418    5644 main.go:141] libmachine: Using SSH client type: native
	I0210 12:21:34.393026    5644 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xde6ba0] 0xde96e0 <nil>  [] 0s} 172.29.129.181 22 <nil> <nil>}
	I0210 12:21:34.393026    5644 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1739190090
	I0210 12:21:34.548507    5644 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Feb 10 12:21:30 UTC 2025
	
	I0210 12:21:34.548507    5644 fix.go:236] clock set: Mon Feb 10 12:21:30 UTC 2025 (err=<nil>)
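fix.go reads `date +%s.%N` from the guest, compares it to the host clock, and past a tolerance pins the guest with `sudo date -s @<epoch>`; here the ~4.5s drift accumulated while the VM sat paused during fixHost. A sketch of the drift computation (guestOut stands in for the SSH output captured above):

```go
package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

func main() {
	// Sample `date +%s.%N` output from the guest, as captured in the log.
	guestOut := "1739190090.106974362"
	secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
	if err != nil {
		panic(err)
	}
	// float64 is lossy in the low nanoseconds; plenty for a drift check.
	guest := time.Unix(int64(secs), int64(math.Mod(secs, 1)*1e9))
	drift := time.Until(guest)
	if drift < 0 {
		drift = -drift
	}
	fmt.Printf("guest clock: %s, absolute drift: %s\n", guest.UTC(), drift)
	// Past the tolerance, the provisioner pins the guest clock with:
	fmt.Printf("would run: sudo date -s @%d\n", int64(secs))
}
```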
	I0210 12:21:34.548507    5644 start.go:83] releasing machines lock for "multinode-032400", held for 1m31.2322944s
	I0210 12:21:34.548507    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400 ).state
	I0210 12:21:36.486302    5644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:21:36.486565    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:21:36.486565    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400 ).networkadapters[0]).ipaddresses[0]
	I0210 12:21:38.812615    5644 main.go:141] libmachine: [stdout =====>] : 172.29.129.181
	
	I0210 12:21:38.812615    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:21:38.816072    5644 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0210 12:21:38.816215    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400 ).state
	I0210 12:21:38.824299    5644 ssh_runner.go:195] Run: cat /version.json
	I0210 12:21:38.824299    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400 ).state
	I0210 12:21:40.776276    5644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:21:40.776276    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:21:40.776276    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400 ).networkadapters[0]).ipaddresses[0]
	I0210 12:21:40.776276    5644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:21:40.776276    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:21:40.776276    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400 ).networkadapters[0]).ipaddresses[0]
	I0210 12:21:43.165463    5644 main.go:141] libmachine: [stdout =====>] : 172.29.129.181
	
	I0210 12:21:43.165463    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:21:43.166320    5644 sshutil.go:53] new ssh client: &{IP:172.29.129.181 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-032400\id_rsa Username:docker}
	I0210 12:21:43.185488    5644 main.go:141] libmachine: [stdout =====>] : 172.29.129.181
	
	I0210 12:21:43.185488    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:21:43.185488    5644 sshutil.go:53] new ssh client: &{IP:172.29.129.181 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-032400\id_rsa Username:docker}
	I0210 12:21:43.262831    5644 command_runner.go:130] > {"iso_version": "v1.35.0", "kicbase_version": "v0.0.45-1736763277-20236", "minikube_version": "v1.35.0", "commit": "3fb24bd87c8c8761e2515e1a9ee13835a389ed68"}
	I0210 12:21:43.262831    5644 ssh_runner.go:235] Completed: cat /version.json: (4.4384829s)
	I0210 12:21:43.270240    5644 ssh_runner.go:195] Run: systemctl --version
	I0210 12:21:43.275956    5644 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	I0210 12:21:43.275956    5644 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.4598348s)
	W0210 12:21:43.275956    5644 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
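This exit-127 failure is the source of the "Failing to connect to https://registry.k8s.io/" warning printed a few lines below: the connectivity probe is assembled from the host's curl binary name (`curl.exe` on Windows) but executes inside the Linux guest, where only `curl` exists, so the probe fails regardless of actual registry reachability. A hedged sketch of selecting the binary by the OS the command will run on rather than the host OS (names are illustrative, not a minikube API):

```go
package main

import (
	"fmt"
	"runtime"
)

// curlBinaryFor returns the curl executable name for the OS the command will
// actually run on. Keying off runtime.GOOS (the host) is exactly the mismatch
// the log shows, since the probe runs inside the Linux guest.
func curlBinaryFor(targetOS string) string {
	if targetOS == "windows" {
		return "curl.exe"
	}
	return "curl"
}

func main() {
	fmt.Println("host-based (wrong for guest probes):", curlBinaryFor(runtime.GOOS))
	fmt.Println("guest-based:", curlBinaryFor("linux"))
}
```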
	I0210 12:21:43.283755    5644 command_runner.go:130] > systemd 252 (252)
	I0210 12:21:43.283755    5644 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0210 12:21:43.293242    5644 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0210 12:21:43.301351    5644 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0210 12:21:43.301883    5644 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0210 12:21:43.310011    5644 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0210 12:21:43.337342    5644 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0210 12:21:43.337794    5644 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0210 12:21:43.337794    5644 start.go:495] detecting cgroup driver to use...
	I0210 12:21:43.338053    5644 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0210 12:21:43.371079    5644 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0210 12:21:43.379856    5644 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	W0210 12:21:43.387359    5644 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0210 12:21:43.387359    5644 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0210 12:21:43.408371    5644 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0210 12:21:43.429852    5644 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0210 12:21:43.441849    5644 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0210 12:21:43.478337    5644 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0210 12:21:43.507578    5644 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0210 12:21:43.536429    5644 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0210 12:21:43.566958    5644 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0210 12:21:43.595675    5644 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0210 12:21:43.623687    5644 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0210 12:21:43.651529    5644 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
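The run of `sed -i -r` commands above rewrites /etc/containerd/config.toml in place: pinning sandbox_image to pause:3.10, forcing `SystemdCgroup = false` to match the cgroupfs driver chosen later, migrating the v1 runtime names to io.containerd.runc.v2, and re-enabling unprivileged ports. The same indentation-preserving rewrites expressed with Go regexps, on a trimmed sample of the config:

```go
package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.9"
  SystemdCgroup = true
`
	// Mirror two of the log's sed rewrites; ${1} keeps the original indentation.
	conf = regexp.MustCompile(`(?m)^( *)sandbox_image = .*$`).
		ReplaceAllString(conf, `${1}sandbox_image = "registry.k8s.io/pause:3.10"`)
	conf = regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`).
		ReplaceAllString(conf, `${1}SystemdCgroup = false`)
	fmt.Print(conf)
}
```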
	I0210 12:21:43.677590    5644 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0210 12:21:43.695433    5644 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0210 12:21:43.695510    5644 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0210 12:21:43.703726    5644 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0210 12:21:43.732726    5644 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0210 12:21:43.762380    5644 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 12:21:43.946917    5644 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0210 12:21:43.976787    5644 start.go:495] detecting cgroup driver to use...
	I0210 12:21:43.986197    5644 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0210 12:21:44.012344    5644 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0210 12:21:44.012344    5644 command_runner.go:130] > [Unit]
	I0210 12:21:44.012344    5644 command_runner.go:130] > Description=Docker Application Container Engine
	I0210 12:21:44.012344    5644 command_runner.go:130] > Documentation=https://docs.docker.com
	I0210 12:21:44.012344    5644 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0210 12:21:44.012344    5644 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0210 12:21:44.012344    5644 command_runner.go:130] > StartLimitBurst=3
	I0210 12:21:44.012344    5644 command_runner.go:130] > StartLimitIntervalSec=60
	I0210 12:21:44.012344    5644 command_runner.go:130] > [Service]
	I0210 12:21:44.012344    5644 command_runner.go:130] > Type=notify
	I0210 12:21:44.012344    5644 command_runner.go:130] > Restart=on-failure
	I0210 12:21:44.012344    5644 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0210 12:21:44.012883    5644 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0210 12:21:44.012883    5644 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0210 12:21:44.012883    5644 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0210 12:21:44.012883    5644 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0210 12:21:44.012883    5644 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0210 12:21:44.012883    5644 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0210 12:21:44.012996    5644 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0210 12:21:44.012996    5644 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0210 12:21:44.012996    5644 command_runner.go:130] > ExecStart=
	I0210 12:21:44.012996    5644 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0210 12:21:44.013084    5644 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0210 12:21:44.013084    5644 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0210 12:21:44.013084    5644 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0210 12:21:44.013084    5644 command_runner.go:130] > LimitNOFILE=infinity
	I0210 12:21:44.013084    5644 command_runner.go:130] > LimitNPROC=infinity
	I0210 12:21:44.013084    5644 command_runner.go:130] > LimitCORE=infinity
	I0210 12:21:44.013084    5644 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0210 12:21:44.013084    5644 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0210 12:21:44.013084    5644 command_runner.go:130] > TasksMax=infinity
	I0210 12:21:44.013084    5644 command_runner.go:130] > TimeoutStartSec=0
	I0210 12:21:44.013084    5644 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0210 12:21:44.013084    5644 command_runner.go:130] > Delegate=yes
	I0210 12:21:44.013084    5644 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0210 12:21:44.013084    5644 command_runner.go:130] > KillMode=process
	I0210 12:21:44.013084    5644 command_runner.go:130] > [Install]
	I0210 12:21:44.013084    5644 command_runner.go:130] > WantedBy=multi-user.target
	I0210 12:21:44.022094    5644 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0210 12:21:44.053114    5644 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0210 12:21:44.090425    5644 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0210 12:21:44.121358    5644 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0210 12:21:44.152819    5644 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0210 12:21:44.210949    5644 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0210 12:21:44.234437    5644 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0210 12:21:44.266558    5644 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0210 12:21:44.277138    5644 ssh_runner.go:195] Run: which cri-dockerd
	I0210 12:21:44.282708    5644 command_runner.go:130] > /usr/bin/cri-dockerd
	I0210 12:21:44.292452    5644 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0210 12:21:44.311196    5644 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0210 12:21:44.350600    5644 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0210 12:21:44.544376    5644 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0210 12:21:44.749724    5644 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0210 12:21:44.749724    5644 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0210 12:21:44.790452    5644 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 12:21:44.984206    5644 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0210 12:21:47.653313    5644 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.6690775s)
	I0210 12:21:47.662643    5644 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0210 12:21:47.693320    5644 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0210 12:21:47.724728    5644 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0210 12:21:47.920192    5644 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0210 12:21:48.097241    5644 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 12:21:48.282606    5644 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0210 12:21:48.320811    5644 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0210 12:21:48.353054    5644 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 12:21:48.546204    5644 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0210 12:21:48.652185    5644 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0210 12:21:48.662453    5644 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0210 12:21:48.671127    5644 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0210 12:21:48.671173    5644 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0210 12:21:48.671205    5644 command_runner.go:130] > Device: 0,22	Inode: 850         Links: 1
	I0210 12:21:48.671205    5644 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0210 12:21:48.671205    5644 command_runner.go:130] > Access: 2025-02-10 12:21:48.585360210 +0000
	I0210 12:21:48.671205    5644 command_runner.go:130] > Modify: 2025-02-10 12:21:48.585360210 +0000
	I0210 12:21:48.671205    5644 command_runner.go:130] > Change: 2025-02-10 12:21:48.588360354 +0000
	I0210 12:21:48.671264    5644 command_runner.go:130] >  Birth: -
	I0210 12:21:48.671298    5644 start.go:563] Will wait 60s for crictl version
	I0210 12:21:48.678779    5644 ssh_runner.go:195] Run: which crictl
	I0210 12:21:48.685382    5644 command_runner.go:130] > /usr/bin/crictl
	I0210 12:21:48.695251    5644 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0210 12:21:48.751805    5644 command_runner.go:130] > Version:  0.1.0
	I0210 12:21:48.751805    5644 command_runner.go:130] > RuntimeName:  docker
	I0210 12:21:48.751805    5644 command_runner.go:130] > RuntimeVersion:  27.4.0
	I0210 12:21:48.751805    5644 command_runner.go:130] > RuntimeApiVersion:  v1
	I0210 12:21:48.751896    5644 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.4.0
	RuntimeApiVersion:  v1
	I0210 12:21:48.758474    5644 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0210 12:21:48.791714    5644 command_runner.go:130] > 27.4.0
	I0210 12:21:48.802060    5644 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0210 12:21:48.836905    5644 command_runner.go:130] > 27.4.0
	I0210 12:21:48.838600    5644 out.go:235] * Preparing Kubernetes v1.32.1 on Docker 27.4.0 ...
	I0210 12:21:48.839975    5644 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0210 12:21:48.843691    5644 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0210 12:21:48.843691    5644 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0210 12:21:48.843691    5644 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0210 12:21:48.843691    5644 ip.go:211] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:8b:81:3d Flags:up|broadcast|multicast|running}
	I0210 12:21:48.846104    5644 ip.go:214] interface addr: fe80::b840:883f:e0df:bdfd/64
	I0210 12:21:48.846104    5644 ip.go:214] interface addr: 172.29.128.1/20
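ip.go walks the host's network adapters for the first one whose name starts with "vEthernet (Default Switch)" and takes its IPv4 address (172.29.128.1/20 here) as the address the guest uses to reach the host. A runnable sketch of the same enumeration with the standard library:

```go
package main

import (
	"fmt"
	"net"
	"strings"
)

func main() {
	const prefix = "vEthernet (Default Switch)" // Hyper-V default switch, per the log
	ifaces, err := net.Interfaces()
	if err != nil {
		panic(err)
	}
	for _, ifc := range ifaces {
		if !strings.HasPrefix(ifc.Name, prefix) {
			fmt.Printf("%q does not match prefix %q\n", ifc.Name, prefix)
			continue
		}
		addrs, err := ifc.Addrs()
		if err != nil {
			continue
		}
		for _, a := range addrs {
			// Keep only IPv4 addresses, as the log does for host.minikube.internal.
			if ipnet, ok := a.(*net.IPNet); ok && ipnet.IP.To4() != nil {
				fmt.Printf("found interface addr %s on %q\n", ipnet, ifc.Name)
			}
		}
	}
}
```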
	I0210 12:21:48.853658    5644 ssh_runner.go:195] Run: grep 172.29.128.1	host.minikube.internal$ /etc/hosts
	I0210 12:21:48.860206    5644 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.29.128.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0210 12:21:48.881782    5644 kubeadm.go:883] updating cluster {Name:multinode-032400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:multinode-032400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.29.129.181 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.29.143.51 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.29.129.10 Port:0 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0210 12:21:48.882095    5644 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime docker
	I0210 12:21:48.889611    5644 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0210 12:21:48.913218    5644 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.32.1
	I0210 12:21:48.914239    5644 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.32.1
	I0210 12:21:48.914239    5644 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.32.1
	I0210 12:21:48.914239    5644 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.32.1
	I0210 12:21:48.914239    5644 command_runner.go:130] > kindest/kindnetd:v20241212-9f82dd49
	I0210 12:21:48.914239    5644 command_runner.go:130] > registry.k8s.io/etcd:3.5.16-0
	I0210 12:21:48.914239    5644 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.3
	I0210 12:21:48.914239    5644 command_runner.go:130] > registry.k8s.io/pause:3.10
	I0210 12:21:48.914239    5644 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0210 12:21:48.914239    5644 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0210 12:21:48.914239    5644 docker.go:689] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.32.1
	registry.k8s.io/kube-scheduler:v1.32.1
	registry.k8s.io/kube-controller-manager:v1.32.1
	registry.k8s.io/kube-proxy:v1.32.1
	kindest/kindnetd:v20241212-9f82dd49
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0210 12:21:48.914239    5644 docker.go:619] Images already preloaded, skipping extraction
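docker.go lists the guest's images to decide whether the preloaded image tarball needs extracting; since every required v1.32.1 image is already present, extraction is skipped (the listing repeats just below as a second verification pass). A toy sketch of that containment check (image lists abridged from the log):

```go
package main

import "fmt"

func main() {
	// A slice of the expected image set for Kubernetes v1.32.1 on docker.
	required := []string{
		"registry.k8s.io/kube-apiserver:v1.32.1",
		"registry.k8s.io/etcd:3.5.16-0",
		"registry.k8s.io/pause:3.10",
	}
	// Stand-in for `docker images --format {{.Repository}}:{{.Tag}}` output.
	present := map[string]bool{
		"registry.k8s.io/kube-apiserver:v1.32.1": true,
		"registry.k8s.io/etcd:3.5.16-0":          true,
		"registry.k8s.io/pause:3.10":             true,
		"gcr.io/k8s-minikube/busybox:1.28":       true,
	}
	missing := 0
	for _, img := range required {
		if !present[img] {
			fmt.Println("missing:", img)
			missing++
		}
	}
	if missing == 0 {
		fmt.Println("images are preloaded, skipping loading")
	}
}
```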
	I0210 12:21:48.921891    5644 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0210 12:21:48.947204    5644 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.32.1
	I0210 12:21:48.947204    5644 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.32.1
	I0210 12:21:48.947293    5644 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.32.1
	I0210 12:21:48.947293    5644 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.32.1
	I0210 12:21:48.947293    5644 command_runner.go:130] > kindest/kindnetd:v20241212-9f82dd49
	I0210 12:21:48.947293    5644 command_runner.go:130] > registry.k8s.io/etcd:3.5.16-0
	I0210 12:21:48.947293    5644 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.3
	I0210 12:21:48.947293    5644 command_runner.go:130] > registry.k8s.io/pause:3.10
	I0210 12:21:48.947293    5644 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0210 12:21:48.947293    5644 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0210 12:21:48.947293    5644 docker.go:689] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.32.1
	registry.k8s.io/kube-controller-manager:v1.32.1
	registry.k8s.io/kube-scheduler:v1.32.1
	registry.k8s.io/kube-proxy:v1.32.1
	kindest/kindnetd:v20241212-9f82dd49
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0210 12:21:48.947434    5644 cache_images.go:84] Images are preloaded, skipping loading
	I0210 12:21:48.947469    5644 kubeadm.go:934] updating node { 172.29.129.181 8443 v1.32.1 docker true true} ...
	I0210 12:21:48.947678    5644 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-032400 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.29.129.181
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:multinode-032400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0210 12:21:48.956603    5644 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0210 12:21:49.019097    5644 command_runner.go:130] > cgroupfs
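With docker selected as the runtime, the cgroup driver is read straight from the daemon; the `cgroupfs` answer here is what propagates into both the kubelet configuration below and the earlier containerd rewrite. The probe is a one-liner against the docker CLI (requires a reachable daemon):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Equivalent of: docker info --format {{.CgroupDriver}}
	out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println("cgroup driver:", strings.TrimSpace(string(out)))
}
```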
	I0210 12:21:49.021088    5644 cni.go:84] Creating CNI manager for ""
	I0210 12:21:49.021189    5644 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0210 12:21:49.021189    5644 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0210 12:21:49.021189    5644 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.29.129.181 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-032400 NodeName:multinode-032400 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.29.129.181"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.29.129.181 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0210 12:21:49.021471    5644 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.29.129.181
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-032400"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "172.29.129.181"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.29.129.181"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0210 12:21:49.030818    5644 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0210 12:21:49.059023    5644 command_runner.go:130] > kubeadm
	I0210 12:21:49.059112    5644 command_runner.go:130] > kubectl
	I0210 12:21:49.059112    5644 command_runner.go:130] > kubelet
	I0210 12:21:49.059213    5644 binaries.go:44] Found k8s binaries, skipping transfer
	I0210 12:21:49.066897    5644 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0210 12:21:49.084845    5644 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0210 12:21:49.115566    5644 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0210 12:21:49.144925    5644 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2300 bytes)
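The kubeadm config dumped above is the options struct from kubeadm.go:189 rendered through a template into four stacked YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration), then shipped to the node as kubeadm.yaml.new. A toy sketch of that templating step for just the API endpoint stanza (struct and field names are illustrative, not minikube's actual template):

```go
package main

import (
	"os"
	"text/template"
)

// opts mirrors a slice of the kubeadm options shown in the log.
type opts struct {
	AdvertiseAddress string
	APIServerPort    int
	NodeName         string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  name: "{{.NodeName}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	_ = t.Execute(os.Stdout, opts{
		AdvertiseAddress: "172.29.129.181",
		APIServerPort:    8443,
		NodeName:         "multinode-032400",
	})
}
```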
	I0210 12:21:49.185214    5644 ssh_runner.go:195] Run: grep 172.29.129.181	control-plane.minikube.internal$ /etc/hosts
	I0210 12:21:49.191138    5644 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.29.129.181	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
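Both host.minikube.internal (earlier) and control-plane.minikube.internal (here) are pinned with the same idiom: drop any prior line for the name with `grep -v`, append the fresh tab-separated mapping, and copy the temp file back over /etc/hosts. A Go sketch of that rewrite applied to an in-memory hosts file:

```go
package main

import (
	"fmt"
	"strings"
)

// pinHost removes any existing entry ending in "\t<name>" and appends
// "ip\tname", mirroring the shell pipeline from the log.
func pinHost(hosts, ip, name string) string {
	var out []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			out = append(out, line)
		}
	}
	out = append(out, ip+"\t"+name)
	return strings.Join(out, "\n") + "\n"
}

func main() {
	hosts := "127.0.0.1\tlocalhost\n172.29.128.1\thost.minikube.internal\n"
	fmt.Print(pinHost(hosts, "172.29.129.181", "control-plane.minikube.internal"))
}
```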
	I0210 12:21:49.220877    5644 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 12:21:49.414971    5644 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0210 12:21:49.442504    5644 certs.go:68] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-032400 for IP: 172.29.129.181
	I0210 12:21:49.442504    5644 certs.go:194] generating shared ca certs ...
	I0210 12:21:49.442504    5644 certs.go:226] acquiring lock for ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 12:21:49.444000    5644 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0210 12:21:49.444390    5644 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0210 12:21:49.444514    5644 certs.go:256] generating profile certs ...
	I0210 12:21:49.445114    5644 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-032400\client.key
	I0210 12:21:49.445222    5644 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-032400\apiserver.key.031eff2d
	I0210 12:21:49.445222    5644 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-032400\apiserver.crt.031eff2d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.29.129.181]
	I0210 12:21:49.625501    5644 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-032400\apiserver.crt.031eff2d ...
	I0210 12:21:49.625501    5644 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-032400\apiserver.crt.031eff2d: {Name:mkdf52c332ce3be44472e32ef1425e0bace63214 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 12:21:49.627403    5644 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-032400\apiserver.key.031eff2d ...
	I0210 12:21:49.627403    5644 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-032400\apiserver.key.031eff2d: {Name:mk37c561ceb16c113cacfa4d153c64399d5339b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 12:21:49.628394    5644 certs.go:381] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-032400\apiserver.crt.031eff2d -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-032400\apiserver.crt
	I0210 12:21:49.644387    5644 certs.go:385] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-032400\apiserver.key.031eff2d -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-032400\apiserver.key
	I0210 12:21:49.644841    5644 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-032400\proxy-client.key
	I0210 12:21:49.644841    5644 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0210 12:21:49.644841    5644 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0210 12:21:49.645725    5644 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0210 12:21:49.645725    5644 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0210 12:21:49.645860    5644 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-032400\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0210 12:21:49.646019    5644 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-032400\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0210 12:21:49.646243    5644 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-032400\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0210 12:21:49.646890    5644 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-032400\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0210 12:21:49.647069    5644 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\11764.pem (1338 bytes)
	W0210 12:21:49.647495    5644 certs.go:480] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\11764_empty.pem, impossibly tiny 0 bytes
	I0210 12:21:49.647495    5644 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0210 12:21:49.647904    5644 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0210 12:21:49.647904    5644 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0210 12:21:49.647904    5644 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0210 12:21:49.647904    5644 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\117642.pem (1708 bytes)
	I0210 12:21:49.647904    5644 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\117642.pem -> /usr/share/ca-certificates/117642.pem
	I0210 12:21:49.647904    5644 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0210 12:21:49.647904    5644 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\11764.pem -> /usr/share/ca-certificates/11764.pem
	I0210 12:21:49.649214    5644 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0210 12:21:49.700052    5644 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0210 12:21:49.745099    5644 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0210 12:21:49.789120    5644 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0210 12:21:49.837845    5644 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-032400\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0210 12:21:49.883491    5644 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-032400\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0210 12:21:49.930844    5644 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-032400\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0210 12:21:49.976563    5644 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-032400\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0210 12:21:50.022268    5644 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\117642.pem --> /usr/share/ca-certificates/117642.pem (1708 bytes)
	I0210 12:21:50.071783    5644 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0210 12:21:50.116006    5644 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\11764.pem --> /usr/share/ca-certificates/11764.pem (1338 bytes)
	I0210 12:21:50.158805    5644 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0210 12:21:50.203279    5644 ssh_runner.go:195] Run: openssl version
	I0210 12:21:50.212022    5644 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0210 12:21:50.220408    5644 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/117642.pem && ln -fs /usr/share/ca-certificates/117642.pem /etc/ssl/certs/117642.pem"
	I0210 12:21:50.248617    5644 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/117642.pem
	I0210 12:21:50.254174    5644 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Feb 10 10:40 /usr/share/ca-certificates/117642.pem
	I0210 12:21:50.254174    5644 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb 10 10:40 /usr/share/ca-certificates/117642.pem
	I0210 12:21:50.262376    5644 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/117642.pem
	I0210 12:21:50.270644    5644 command_runner.go:130] > 3ec20f2e
	I0210 12:21:50.279601    5644 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/117642.pem /etc/ssl/certs/3ec20f2e.0"
	I0210 12:21:50.305282    5644 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0210 12:21:50.332409    5644 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0210 12:21:50.339591    5644 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Feb 10 10:24 /usr/share/ca-certificates/minikubeCA.pem
	I0210 12:21:50.339633    5644 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb 10 10:24 /usr/share/ca-certificates/minikubeCA.pem
	I0210 12:21:50.347869    5644 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0210 12:21:50.356371    5644 command_runner.go:130] > b5213941
	I0210 12:21:50.364087    5644 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0210 12:21:50.391358    5644 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11764.pem && ln -fs /usr/share/ca-certificates/11764.pem /etc/ssl/certs/11764.pem"
	I0210 12:21:50.419761    5644 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11764.pem
	I0210 12:21:50.426923    5644 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Feb 10 10:40 /usr/share/ca-certificates/11764.pem
	I0210 12:21:50.426923    5644 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb 10 10:40 /usr/share/ca-certificates/11764.pem
	I0210 12:21:50.435416    5644 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11764.pem
	I0210 12:21:50.444020    5644 command_runner.go:130] > 51391683
	I0210 12:21:50.453867    5644 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11764.pem /etc/ssl/certs/51391683.0"
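The three `ln -fs` runs above install each CA under the hash-named filename (`<subject-hash>.0`) that OpenSSL uses for certificate lookup in /etc/ssl/certs. A minimal Go sketch of the same install step, using the 11764.pem paths from the log; shelling out to the openssl CLI here is an assumption for illustration, not how minikube's certs.go does it:
	package main
	
	import (
		"log"
		"os"
		"os/exec"
		"strings"
	)
	
	func main() {
		cert := "/usr/share/ca-certificates/11764.pem"
		// `openssl x509 -hash -noout` prints the subject-name hash (e.g. 51391683)
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
		if err != nil {
			log.Fatal(err)
		}
		link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
		// equivalent of `ln -fs`: drop any stale link, then point <hash>.0 at the cert
		os.Remove(link)
		if err := os.Symlink(cert, link); err != nil {
			log.Fatal(err)
		}
	}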
	I0210 12:21:50.480683    5644 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0210 12:21:50.488096    5644 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0210 12:21:50.488096    5644 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0210 12:21:50.488096    5644 command_runner.go:130] > Device: 8,1	Inode: 531041      Links: 1
	I0210 12:21:50.488096    5644 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0210 12:21:50.488096    5644 command_runner.go:130] > Access: 2025-02-10 11:58:50.702339952 +0000
	I0210 12:21:50.488096    5644 command_runner.go:130] > Modify: 2025-02-10 11:58:50.702339952 +0000
	I0210 12:21:50.488096    5644 command_runner.go:130] > Change: 2025-02-10 11:58:50.702339952 +0000
	I0210 12:21:50.488096    5644 command_runner.go:130] >  Birth: 2025-02-10 11:58:50.702339952 +0000
	I0210 12:21:50.496570    5644 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0210 12:21:50.507079    5644 command_runner.go:130] > Certificate will not expire
	I0210 12:21:50.514991    5644 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0210 12:21:50.525926    5644 command_runner.go:130] > Certificate will not expire
	I0210 12:21:50.533676    5644 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0210 12:21:50.543630    5644 command_runner.go:130] > Certificate will not expire
	I0210 12:21:50.550804    5644 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0210 12:21:50.560642    5644 command_runner.go:130] > Certificate will not expire
	I0210 12:21:50.568582    5644 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0210 12:21:50.577753    5644 command_runner.go:130] > Certificate will not expire
	I0210 12:21:50.585647    5644 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0210 12:21:50.594838    5644 command_runner.go:130] > Certificate will not expire
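Each `-checkend 86400` run above asks whether the certificate expires within the next 24 hours (86400 seconds). A minimal sketch of the equivalent check in Go's crypto/x509, using one of the cert paths from the log; reading only the first PEM block is an assumption:
	package main
	
	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
		"time"
	)
	
	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			log.Fatal("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		// like `openssl x509 -checkend 86400`: fail if NotAfter falls inside the next 24h
		if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
			fmt.Println("Certificate will expire")
		} else {
			fmt.Println("Certificate will not expire")
		}
	}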
	I0210 12:21:50.594838    5644 kubeadm.go:392] StartCluster: {Name:multinode-032400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:multinode-032400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.29.129.181 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.29.143.51 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.29.129.10 Port:0 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 12:21:50.601478    5644 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0210 12:21:50.639879    5644 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0210 12:21:50.658306    5644 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0210 12:21:50.658306    5644 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0210 12:21:50.658306    5644 command_runner.go:130] > /var/lib/minikube/etcd:
	I0210 12:21:50.658306    5644 command_runner.go:130] > member
	I0210 12:21:50.658306    5644 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0210 12:21:50.658306    5644 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0210 12:21:50.666853    5644 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0210 12:21:50.684872    5644 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0210 12:21:50.684872    5644 kubeconfig.go:47] verify endpoint returned: get endpoint: "multinode-032400" does not appear in C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0210 12:21:50.686528    5644 kubeconfig.go:62] C:\Users\jenkins.minikube5\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "multinode-032400" cluster setting kubeconfig missing "multinode-032400" context setting]
	I0210 12:21:50.687399    5644 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\kubeconfig: {Name:mkb19224ea40e2aed3ce8c31a956f5aee129caa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 12:21:50.705569    5644 loader.go:402] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0210 12:21:50.706175    5644 kapi.go:59] client config for multinode-032400: &rest.Config{Host:"https://172.29.129.181:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-032400/client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-032400/client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2a2d300), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0210 12:21:50.707333    5644 cert_rotation.go:140] Starting client certificate rotation controller
	I0210 12:21:50.707447    5644 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0210 12:21:50.707550    5644 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0210 12:21:50.707550    5644 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0210 12:21:50.707550    5644 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0210 12:21:50.716230    5644 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0210 12:21:50.734963    5644 command_runner.go:130] > --- /var/tmp/minikube/kubeadm.yaml
	I0210 12:21:50.734963    5644 command_runner.go:130] > +++ /var/tmp/minikube/kubeadm.yaml.new
	I0210 12:21:50.734963    5644 command_runner.go:130] > @@ -1,7 +1,7 @@
	I0210 12:21:50.734963    5644 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta4
	I0210 12:21:50.734963    5644 command_runner.go:130] >  kind: InitConfiguration
	I0210 12:21:50.734963    5644 command_runner.go:130] >  localAPIEndpoint:
	I0210 12:21:50.734963    5644 command_runner.go:130] > -  advertiseAddress: 172.29.136.201
	I0210 12:21:50.734963    5644 command_runner.go:130] > +  advertiseAddress: 172.29.129.181
	I0210 12:21:50.734963    5644 command_runner.go:130] >    bindPort: 8443
	I0210 12:21:50.734963    5644 command_runner.go:130] >  bootstrapTokens:
	I0210 12:21:50.734963    5644 command_runner.go:130] >    - groups:
	I0210 12:21:50.734963    5644 command_runner.go:130] > @@ -15,13 +15,13 @@
	I0210 12:21:50.734963    5644 command_runner.go:130] >    name: "multinode-032400"
	I0210 12:21:50.734963    5644 command_runner.go:130] >    kubeletExtraArgs:
	I0210 12:21:50.734963    5644 command_runner.go:130] >      - name: "node-ip"
	I0210 12:21:50.734963    5644 command_runner.go:130] > -      value: "172.29.136.201"
	I0210 12:21:50.734963    5644 command_runner.go:130] > +      value: "172.29.129.181"
	I0210 12:21:50.734963    5644 command_runner.go:130] >    taints: []
	I0210 12:21:50.734963    5644 command_runner.go:130] >  ---
	I0210 12:21:50.734963    5644 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta4
	I0210 12:21:50.734963    5644 command_runner.go:130] >  kind: ClusterConfiguration
	I0210 12:21:50.734963    5644 command_runner.go:130] >  apiServer:
	I0210 12:21:50.734963    5644 command_runner.go:130] > -  certSANs: ["127.0.0.1", "localhost", "172.29.136.201"]
	I0210 12:21:50.734963    5644 command_runner.go:130] > +  certSANs: ["127.0.0.1", "localhost", "172.29.129.181"]
	I0210 12:21:50.734963    5644 command_runner.go:130] >    extraArgs:
	I0210 12:21:50.734963    5644 command_runner.go:130] >      - name: "enable-admission-plugins"
	I0210 12:21:50.734963    5644 command_runner.go:130] >        value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	I0210 12:21:50.734963    5644 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -1,7 +1,7 @@
	 apiVersion: kubeadm.k8s.io/v1beta4
	 kind: InitConfiguration
	 localAPIEndpoint:
	-  advertiseAddress: 172.29.136.201
	+  advertiseAddress: 172.29.129.181
	   bindPort: 8443
	 bootstrapTokens:
	   - groups:
	@@ -15,13 +15,13 @@
	   name: "multinode-032400"
	   kubeletExtraArgs:
	     - name: "node-ip"
	-      value: "172.29.136.201"
	+      value: "172.29.129.181"
	   taints: []
	 ---
	 apiVersion: kubeadm.k8s.io/v1beta4
	 kind: ClusterConfiguration
	 apiServer:
	-  certSANs: ["127.0.0.1", "localhost", "172.29.136.201"]
	+  certSANs: ["127.0.0.1", "localhost", "172.29.129.181"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	       value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	
	-- /stdout --
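The reconfigure decision above hinges on diff's exit status: `diff -u` exits non-zero when the deployed kubeadm.yaml and the freshly rendered kubeadm.yaml.new differ (here, the advertise address and certSANs changed with the node's new IP). A minimal sketch of that drift check; running the commands locally rather than through minikube's ssh_runner is an assumption:
	package main
	
	import (
		"log"
		"os/exec"
	)
	
	func main() {
		cmd := exec.Command("sudo", "diff", "-u",
			"/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
		out, err := cmd.CombinedOutput()
		if err != nil {
			// non-zero exit: the configs differ, so adopt the new file and reconfigure
			log.Printf("detected kubeadm config drift (will reconfigure):\n%s", out)
			if err := exec.Command("sudo", "cp",
				"/var/tmp/minikube/kubeadm.yaml.new", "/var/tmp/minikube/kubeadm.yaml").Run(); err != nil {
				log.Fatal(err)
			}
		}
	}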
	I0210 12:21:50.734963    5644 kubeadm.go:1160] stopping kube-system containers ...
	I0210 12:21:50.742932    5644 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0210 12:21:50.772146    5644 command_runner.go:130] > c5b854dbb912
	I0210 12:21:50.772146    5644 command_runner.go:130] > 182c8395f5e1
	I0210 12:21:50.772146    5644 command_runner.go:130] > 794995bca6b5
	I0210 12:21:50.772146    5644 command_runner.go:130] > 4ccc0a4e7b5c
	I0210 12:21:50.772146    5644 command_runner.go:130] > 4439940fa5f4
	I0210 12:21:50.772146    5644 command_runner.go:130] > 148309413de8
	I0210 12:21:50.772146    5644 command_runner.go:130] > 26d9e119a02c
	I0210 12:21:50.772146    5644 command_runner.go:130] > a70f430921ec
	I0210 12:21:50.772146    5644 command_runner.go:130] > 9f1c4e9b3353
	I0210 12:21:50.772146    5644 command_runner.go:130] > adf520f9b9d7
	I0210 12:21:50.772146    5644 command_runner.go:130] > 3ae31c3c37c9
	I0210 12:21:50.772146    5644 command_runner.go:130] > 9408ce83d7d3
	I0210 12:21:50.772146    5644 command_runner.go:130] > 8c55184f16cc
	I0210 12:21:50.772146    5644 command_runner.go:130] > d33433fbce48
	I0210 12:21:50.772146    5644 command_runner.go:130] > b2de8e426f22
	I0210 12:21:50.772146    5644 command_runner.go:130] > ee16b295f58d
	I0210 12:21:50.772146    5644 docker.go:483] Stopping containers: [c5b854dbb912 182c8395f5e1 794995bca6b5 4ccc0a4e7b5c 4439940fa5f4 148309413de8 26d9e119a02c a70f430921ec 9f1c4e9b3353 adf520f9b9d7 3ae31c3c37c9 9408ce83d7d3 8c55184f16cc d33433fbce48 b2de8e426f22 ee16b295f58d]
	I0210 12:21:50.778640    5644 ssh_runner.go:195] Run: docker stop c5b854dbb912 182c8395f5e1 794995bca6b5 4ccc0a4e7b5c 4439940fa5f4 148309413de8 26d9e119a02c a70f430921ec 9f1c4e9b3353 adf520f9b9d7 3ae31c3c37c9 9408ce83d7d3 8c55184f16cc d33433fbce48 b2de8e426f22 ee16b295f58d
	I0210 12:21:50.808074    5644 command_runner.go:130] > c5b854dbb912
	I0210 12:21:50.808074    5644 command_runner.go:130] > 182c8395f5e1
	I0210 12:21:50.808074    5644 command_runner.go:130] > 794995bca6b5
	I0210 12:21:50.808074    5644 command_runner.go:130] > 4ccc0a4e7b5c
	I0210 12:21:50.808074    5644 command_runner.go:130] > 4439940fa5f4
	I0210 12:21:50.808074    5644 command_runner.go:130] > 148309413de8
	I0210 12:21:50.808074    5644 command_runner.go:130] > 26d9e119a02c
	I0210 12:21:50.808074    5644 command_runner.go:130] > a70f430921ec
	I0210 12:21:50.808074    5644 command_runner.go:130] > 9f1c4e9b3353
	I0210 12:21:50.808074    5644 command_runner.go:130] > adf520f9b9d7
	I0210 12:21:50.808074    5644 command_runner.go:130] > 3ae31c3c37c9
	I0210 12:21:50.808074    5644 command_runner.go:130] > 9408ce83d7d3
	I0210 12:21:50.808074    5644 command_runner.go:130] > 8c55184f16cc
	I0210 12:21:50.808074    5644 command_runner.go:130] > d33433fbce48
	I0210 12:21:50.808074    5644 command_runner.go:130] > b2de8e426f22
	I0210 12:21:50.808074    5644 command_runner.go:130] > ee16b295f58d
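The stop step above first collects the kube-system container IDs with `docker ps -a --filter`, then passes all sixteen to a single `docker stop`. A minimal local sketch of that pattern (assumed to run where the docker CLI is available):
	package main
	
	import (
		"log"
		"os/exec"
		"strings"
	)
	
	func main() {
		// list all kube-system containers, running or not, one ID per line
		out, err := exec.Command("docker", "ps", "-a",
			"--filter=name=k8s_.*_(kube-system)_", "--format={{.ID}}").Output()
		if err != nil {
			log.Fatal(err)
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			return // nothing to stop
		}
		// stop every matched container in one invocation, as the log does
		if err := exec.Command("docker", append([]string{"stop"}, ids...)...).Run(); err != nil {
			log.Fatal(err)
		}
	}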
	I0210 12:21:50.817248    5644 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0210 12:21:50.853993    5644 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0210 12:21:50.872453    5644 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0210 12:21:50.872453    5644 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0210 12:21:50.872453    5644 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0210 12:21:50.872453    5644 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0210 12:21:50.872453    5644 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0210 12:21:50.872453    5644 kubeadm.go:157] found existing configuration files:
	
	I0210 12:21:50.880551    5644 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0210 12:21:50.896516    5644 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0210 12:21:50.896516    5644 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0210 12:21:50.904651    5644 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0210 12:21:50.929191    5644 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0210 12:21:50.945835    5644 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0210 12:21:50.945835    5644 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0210 12:21:50.954114    5644 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0210 12:21:50.978574    5644 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0210 12:21:50.994849    5644 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0210 12:21:50.994946    5644 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0210 12:21:51.002633    5644 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0210 12:21:51.028334    5644 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0210 12:21:51.044747    5644 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0210 12:21:51.044747    5644 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0210 12:21:51.052847    5644 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0210 12:21:51.079578    5644 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0210 12:21:51.095962    5644 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0210 12:21:51.309867    5644 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0210 12:21:51.309867    5644 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0210 12:21:51.309867    5644 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0210 12:21:51.309867    5644 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0210 12:21:51.309867    5644 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0210 12:21:51.309867    5644 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0210 12:21:51.309867    5644 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0210 12:21:51.309867    5644 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0210 12:21:51.309867    5644 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0210 12:21:51.309867    5644 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0210 12:21:51.309867    5644 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0210 12:21:51.309867    5644 command_runner.go:130] > [certs] Using the existing "sa" key
	I0210 12:21:51.309867    5644 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0210 12:21:52.401550    5644 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0210 12:21:52.401676    5644 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0210 12:21:52.401676    5644 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0210 12:21:52.401676    5644 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0210 12:21:52.401676    5644 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0210 12:21:52.401676    5644 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0210 12:21:52.401737    5644 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.0918579s)
	I0210 12:21:52.401788    5644 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0210 12:21:52.702444    5644 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0210 12:21:52.702444    5644 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0210 12:21:52.702444    5644 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0210 12:21:52.702444    5644 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0210 12:21:52.802951    5644 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0210 12:21:52.802951    5644 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0210 12:21:52.803032    5644 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0210 12:21:52.803032    5644 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0210 12:21:52.803070    5644 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0210 12:21:52.911975    5644 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0210 12:21:52.911975    5644 api_server.go:52] waiting for apiserver process to appear ...
	I0210 12:21:52.921405    5644 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 12:21:53.420837    5644 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 12:21:53.919851    5644 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 12:21:54.421858    5644 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 12:21:54.921858    5644 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 12:21:54.944506    5644 command_runner.go:130] > 2008
	I0210 12:21:54.944576    5644 api_server.go:72] duration metric: took 2.032508s to wait for apiserver process to appear ...
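The five pgrep runs above are a poll at roughly 500ms intervals until a kube-apiserver process exists (pgrep exits 0 on a match and prints the PID, here 2008). A minimal sketch of the loop; invoking pgrep directly instead of over SSH is an assumption:
	package main
	
	import (
		"os/exec"
		"time"
	)
	
	func main() {
		// pgrep exits non-zero until a process matches the pattern from the log
		for exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() != nil {
			time.Sleep(500 * time.Millisecond)
		}
	}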
	I0210 12:21:54.944576    5644 api_server.go:88] waiting for apiserver healthz status ...
	I0210 12:21:54.944640    5644 api_server.go:253] Checking apiserver healthz at https://172.29.129.181:8443/healthz ...
	I0210 12:21:58.084515    5644 api_server.go:279] https://172.29.129.181:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0210 12:21:58.084515    5644 api_server.go:103] status: https://172.29.129.181:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0210 12:21:58.084680    5644 api_server.go:253] Checking apiserver healthz at https://172.29.129.181:8443/healthz ...
	I0210 12:21:58.114604    5644 api_server.go:279] https://172.29.129.181:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0210 12:21:58.114604    5644 api_server.go:103] status: https://172.29.129.181:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0210 12:21:58.444742    5644 api_server.go:253] Checking apiserver healthz at https://172.29.129.181:8443/healthz ...
	I0210 12:21:58.453115    5644 api_server.go:279] https://172.29.129.181:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0210 12:21:58.453115    5644 api_server.go:103] status: https://172.29.129.181:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0210 12:21:58.945835    5644 api_server.go:253] Checking apiserver healthz at https://172.29.129.181:8443/healthz ...
	I0210 12:21:58.956371    5644 api_server.go:279] https://172.29.129.181:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0210 12:21:58.956442    5644 api_server.go:103] status: https://172.29.129.181:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0210 12:21:59.444818    5644 api_server.go:253] Checking apiserver healthz at https://172.29.129.181:8443/healthz ...
	I0210 12:21:59.457330    5644 api_server.go:279] https://172.29.129.181:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0210 12:21:59.457330    5644 api_server.go:103] status: https://172.29.129.181:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0210 12:21:59.945197    5644 api_server.go:253] Checking apiserver healthz at https://172.29.129.181:8443/healthz ...
	I0210 12:21:59.954969    5644 api_server.go:279] https://172.29.129.181:8443/healthz returned 200:
	ok
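The healthz wait above tolerates the transient 403s (anonymous user before RBAC bootstrap) and 500s (poststarthook/rbac/bootstrap-roles still failing) and keeps polling until /healthz returns 200 "ok". A minimal sketch of such a poll; skipping TLS verification is an assumption for brevity, where minikube itself trusts the cluster CA:
	package main
	
	import (
		"crypto/tls"
		"io"
		"log"
		"net/http"
		"time"
	)
	
	func main() {
		client := &http.Client{
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		for {
			resp, err := client.Get("https://172.29.129.181:8443/healthz")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					break // apiserver reports "ok"
				}
				log.Printf("healthz returned %d:\n%s", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
	}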
	I0210 12:21:59.955161    5644 discovery_client.go:658] "Request Body" body=""
	I0210 12:21:59.955201    5644 round_trippers.go:470] GET https://172.29.129.181:8443/version
	I0210 12:21:59.955272    5644 round_trippers.go:476] Request Headers:
	I0210 12:21:59.955300    5644 round_trippers.go:480]     Accept: application/json, */*
	I0210 12:21:59.955300    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:21:59.966409    5644 round_trippers.go:581] Response Status: 200 OK in 11 milliseconds
	I0210 12:21:59.966409    5644 round_trippers.go:584] Response Headers:
	I0210 12:21:59.966409    5644 round_trippers.go:587]     Content-Length: 263
	I0210 12:21:59.966409    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:21:59 GMT
	I0210 12:21:59.966481    5644 round_trippers.go:587]     Audit-Id: 5c48a883-3089-4412-89ce-073752a34ebe
	I0210 12:21:59.966481    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:21:59.966481    5644 round_trippers.go:587]     Content-Type: application/json
	I0210 12:21:59.966481    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:21:59.966481    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:21:59.966537    5644 discovery_client.go:658] "Response Body" body=<
		{
		  "major": "1",
		  "minor": "32",
		  "gitVersion": "v1.32.1",
		  "gitCommit": "e9c9be4007d1664e68796af02b8978640d2c1b26",
		  "gitTreeState": "clean",
		  "buildDate": "2025-01-15T14:31:55Z",
		  "goVersion": "go1.23.4",
		  "compiler": "gc",
		  "platform": "linux/amd64"
		}
	 >
	I0210 12:21:59.966537    5644 api_server.go:141] control plane version: v1.32.1
	I0210 12:21:59.966537    5644 api_server.go:131] duration metric: took 5.0219059s to wait for apiserver health ...
	I0210 12:21:59.966537    5644 cni.go:84] Creating CNI manager for ""
	I0210 12:21:59.966537    5644 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0210 12:21:59.969769    5644 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0210 12:21:59.979853    5644 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0210 12:21:59.987643    5644 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0210 12:21:59.987643    5644 command_runner.go:130] >   Size: 3103192   	Blocks: 6064       IO Block: 4096   regular file
	I0210 12:21:59.987643    5644 command_runner.go:130] > Device: 0,17	Inode: 3500        Links: 1
	I0210 12:21:59.987643    5644 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0210 12:21:59.987643    5644 command_runner.go:130] > Access: 2025-02-10 12:20:34.686796900 +0000
	I0210 12:21:59.987643    5644 command_runner.go:130] > Modify: 2025-01-14 09:03:58.000000000 +0000
	I0210 12:21:59.987643    5644 command_runner.go:130] > Change: 2025-02-10 12:20:23.050000000 +0000
	I0210 12:21:59.987643    5644 command_runner.go:130] >  Birth: -
	I0210 12:21:59.987643    5644 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.1/kubectl ...
	I0210 12:21:59.987643    5644 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0210 12:22:00.034391    5644 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0210 12:22:01.017263    5644 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0210 12:22:01.017325    5644 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0210 12:22:01.017325    5644 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0210 12:22:01.017392    5644 command_runner.go:130] > daemonset.apps/kindnet configured
	I0210 12:22:01.017442    5644 system_pods.go:43] waiting for kube-system pods to appear ...
	I0210 12:22:01.017797    5644 type.go:204] "Request Body" body=""
	I0210 12:22:01.017903    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods
	I0210 12:22:01.017940    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:01.017940    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:01.017940    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:01.024267    5644 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0210 12:22:01.024622    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:01.024622    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:01 GMT
	I0210 12:22:01.024622    5644 round_trippers.go:587]     Audit-Id: 7133dc72-e0e9-491b-9795-b0fef7fb64f7
	I0210 12:22:01.024622    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:01.024622    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:01.024622    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:01.024622    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:01.027230    5644 type.go:204] "Response Body" body=<
		00000000  6b 38 73 00 0a 0d 0a 02  76 31 12 07 50 6f 64 4c  |k8s.....v1..PodL|
		00000010  69 73 74 12 b6 f4 03 0a  0a 0a 00 12 04 31 38 34  |ist..........184|
		00000020  30 1a 00 12 84 29 0a 99  19 0a 18 63 6f 72 65 64  |0....).....cored|
		00000030  6e 73 2d 36 36 38 64 36  62 66 39 62 63 2d 77 38  |ns-668d6bf9bc-w8|
		00000040  72 72 39 12 13 63 6f 72  65 64 6e 73 2d 36 36 38  |rr9..coredns-668|
		00000050  64 36 62 66 39 62 63 2d  1a 0b 6b 75 62 65 2d 73  |d6bf9bc-..kube-s|
		00000060  79 73 74 65 6d 22 00 2a  24 65 34 35 61 33 37 62  |ystem".*$e45a37b|
		00000070  66 2d 65 37 64 61 2d 34  31 32 39 2d 62 62 37 65  |f-e7da-4129-bb7e|
		00000080  2d 38 62 65 37 64 62 65  39 33 65 30 39 32 04 31  |-8be7dbe93e092.1|
		00000090  38 32 30 38 00 42 08 08  92 d4 a7 bd 06 10 00 5a  |8208.B.........Z|
		000000a0  13 0a 07 6b 38 73 2d 61  70 70 12 08 6b 75 62 65  |...k8s-app..kube|
		000000b0  2d 64 6e 73 5a 1f 0a 11  70 6f 64 2d 74 65 6d 70  |-dnsZ...pod-temp|
		000000c0  6c 61 74 65 2d 68 61 73  68 12 0a 36 36 38 64 36  |late-hash..668d [truncated 315435 chars]
	 >
	I0210 12:22:01.028108    5644 system_pods.go:59] 12 kube-system pods found
	I0210 12:22:01.028175    5644 system_pods.go:61] "coredns-668d6bf9bc-w8rr9" [e45a37bf-e7da-4129-bb7e-8be7dbe93e09] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0210 12:22:01.028175    5644 system_pods.go:61] "etcd-multinode-032400" [26d4110f-9a39-48de-a433-567a75789be0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0210 12:22:01.028236    5644 system_pods.go:61] "kindnet-c2mb8" [09de881b-fbc4-4a8f-b8d7-c46dd3f010ad] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0210 12:22:01.028236    5644 system_pods.go:61] "kindnet-jcmlf" [2b9d8f00-2dd6-42d2-a26d-7ddda6acb204] Running
	I0210 12:22:01.028236    5644 system_pods.go:61] "kindnet-tv6gk" [f85e1e17-24a8-4e55-bd17-95f9ce89e3ea] Running
	I0210 12:22:01.028236    5644 system_pods.go:61] "kube-apiserver-multinode-032400" [9e688aae-09da-4b5c-ba4d-de6aa64cb34e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0210 12:22:01.028236    5644 system_pods.go:61] "kube-controller-manager-multinode-032400" [c3e50540-dd66-47a0-a433-622df59fb441] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0210 12:22:01.028288    5644 system_pods.go:61] "kube-proxy-rrh82" [9ad7f281-f022-4f3b-b206-39ce42713cf9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0210 12:22:01.028288    5644 system_pods.go:61] "kube-proxy-tbtqd" [bdf8cb10-05be-460b-a9c6-bc51ea884268] Running
	I0210 12:22:01.028288    5644 system_pods.go:61] "kube-proxy-xltxj" [9a5e58bc-54b1-43b9-a889-0d50d435af83] Running
	I0210 12:22:01.028288    5644 system_pods.go:61] "kube-scheduler-multinode-032400" [cfbcbf37-65ce-4b21-9f26-18dafc6e4480] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0210 12:22:01.028288    5644 system_pods.go:61] "storage-provisioner" [c5a7f602-d41a-4d9c-8fb3-5cf1bb41aca0] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0210 12:22:01.028288    5644 system_pods.go:74] duration metric: took 10.8464ms to wait for pod list to return data ...
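The pod wait above is a plain list of kube-system pods; the "Running / Ready:ContainersNotReady" states come from each pod's status conditions. A minimal client-go sketch of the same query (the kubeconfig path and the use of client-go here are assumptions for illustration):
	package main
	
	import (
		"context"
		"fmt"
		"log"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%d kube-system pods found\n", len(pods.Items))
		for _, p := range pods.Items {
			for _, c := range p.Status.Conditions {
				// a pod that is Running but not yet Ready shows up here
				if c.Type == corev1.PodReady && c.Status != corev1.ConditionTrue {
					fmt.Printf("%q not ready: %s\n", p.Name, c.Message)
				}
			}
		}
	}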
	I0210 12:22:01.028354    5644 node_conditions.go:102] verifying NodePressure condition ...
	I0210 12:22:01.028463    5644 type.go:204] "Request Body" body=""
	I0210 12:22:01.028486    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes
	I0210 12:22:01.028486    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:01.028486    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:01.028486    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:01.036769    5644 round_trippers.go:581] Response Status: 200 OK in 8 milliseconds
	I0210 12:22:01.036769    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:01.036769    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:01.036769    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:01.037729    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:01.037729    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:01 GMT
	I0210 12:22:01.037729    5644 round_trippers.go:587]     Audit-Id: ece831f4-f081-4eef-9546-3a08239bba6c
	I0210 12:22:01.037729    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:01.037729    5644 type.go:204] "Response Body" body=<
		00000000  6b 38 73 00 0a 0e 0a 02  76 31 12 08 4e 6f 64 65  |k8s.....v1..Node|
		00000010  4c 69 73 74 12 ef 5e 0a  0a 0a 00 12 04 31 38 34  |List..^......184|
		00000020  30 1a 00 12 d5 25 0a f8  11 0a 10 6d 75 6c 74 69  |0....%.....multi|
		00000030  6e 6f 64 65 2d 30 33 32  34 30 30 12 00 1a 00 22  |node-032400...."|
		00000040  00 2a 24 61 30 38 30 31  35 65 66 2d 65 35 32 30  |.*$a08015ef-e520|
		00000050  2d 34 31 63 62 2d 61 65  61 30 2d 31 64 39 63 38  |-41cb-aea0-1d9c8|
		00000060  31 65 30 31 62 32 36 32  04 31 37 36 32 38 00 42  |1e01b262.17628.B|
		00000070  08 08 86 d4 a7 bd 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000080  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000090  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		000000a0  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000b0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000c0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/ar [truncated 59089 chars]
	 >
	I0210 12:22:01.037729    5644 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0210 12:22:01.038747    5644 node_conditions.go:123] node cpu capacity is 2
	I0210 12:22:01.038808    5644 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0210 12:22:01.038808    5644 node_conditions.go:123] node cpu capacity is 2
	I0210 12:22:01.038865    5644 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0210 12:22:01.038865    5644 node_conditions.go:123] node cpu capacity is 2
	I0210 12:22:01.038865    5644 node_conditions.go:105] duration metric: took 10.5108ms to run NodePressure ...
	I0210 12:22:01.038988    5644 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0210 12:22:01.389598    5644 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0210 12:22:01.617942    5644 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0210 12:22:01.620378    5644 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0210 12:22:01.620540    5644 type.go:204] "Request Body" body=""
	I0210 12:22:01.620626    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I0210 12:22:01.620626    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:01.620693    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:01.620693    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:01.624944    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:22:01.625845    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:01.625845    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:01 GMT
	I0210 12:22:01.625845    5644 round_trippers.go:587]     Audit-Id: 22603f78-b146-4e0b-a6d4-76c5eaf1493b
	I0210 12:22:01.625845    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:01.625845    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:01.625845    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:01.625845    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:01.627232    5644 type.go:204] "Response Body" body=<
		00000000  6b 38 73 00 0a 0d 0a 02  76 31 12 07 50 6f 64 4c  |k8s.....v1..PodL|
		00000010  69 73 74 12 8f bd 01 0a  0a 0a 00 12 04 31 38 35  |ist..........185|
		00000020  35 1a 00 12 ad 2d 0a d9  1a 0a 15 65 74 63 64 2d  |5....-.....etcd-|
		00000030  6d 75 6c 74 69 6e 6f 64  65 2d 30 33 32 34 30 30  |multinode-032400|
		00000040  12 00 1a 0b 6b 75 62 65  2d 73 79 73 74 65 6d 22  |....kube-system"|
		00000050  00 2a 24 32 36 64 34 31  31 30 66 2d 39 61 33 39  |.*$26d4110f-9a39|
		00000060  2d 34 38 64 65 2d 61 34  33 33 2d 35 36 37 61 37  |-48de-a433-567a7|
		00000070  35 37 38 39 62 65 30 32  04 31 38 31 32 38 00 42  |5789be02.18128.B|
		00000080  08 08 e6 de a7 bd 06 10  00 5a 11 0a 09 63 6f 6d  |.........Z...com|
		00000090  70 6f 6e 65 6e 74 12 04  65 74 63 64 5a 15 0a 04  |ponent..etcdZ...|
		000000a0  74 69 65 72 12 0d 63 6f  6e 74 72 6f 6c 2d 70 6c  |tier..control-pl|
		000000b0  61 6e 65 62 4f 0a 30 6b  75 62 65 61 64 6d 2e 6b  |anebO.0kubeadm.k|
		000000c0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 65 74 63  |ubernetes.io/et [truncated 118655 chars]
	 >
	I0210 12:22:01.627944    5644 kubeadm.go:739] kubelet initialised
	I0210 12:22:01.627989    5644 kubeadm.go:740] duration metric: took 7.6112ms waiting for restarted kubelet to initialise ...
	I0210 12:22:01.628044    5644 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0210 12:22:01.628201    5644 type.go:204] "Request Body" body=""
	I0210 12:22:01.628312    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods
	I0210 12:22:01.628368    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:01.628368    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:01.628434    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:01.633500    5644 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 12:22:01.633500    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:01.633500    5644 round_trippers.go:587]     Audit-Id: 6761eb08-f31d-401e-9c48-495dbcaa8f15
	I0210 12:22:01.633500    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:01.633571    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:01.633571    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:01.633571    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:01.633571    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:01 GMT
	I0210 12:22:01.636591    5644 type.go:204] "Response Body" body=<
		00000000  6b 38 73 00 0a 0d 0a 02  76 31 12 07 50 6f 64 4c  |k8s.....v1..PodL|
		00000010  69 73 74 12 9b f0 03 0a  0a 0a 00 12 04 31 38 35  |ist..........185|
		00000020  35 1a 00 12 84 29 0a 99  19 0a 18 63 6f 72 65 64  |5....).....cored|
		00000030  6e 73 2d 36 36 38 64 36  62 66 39 62 63 2d 77 38  |ns-668d6bf9bc-w8|
		00000040  72 72 39 12 13 63 6f 72  65 64 6e 73 2d 36 36 38  |rr9..coredns-668|
		00000050  64 36 62 66 39 62 63 2d  1a 0b 6b 75 62 65 2d 73  |d6bf9bc-..kube-s|
		00000060  79 73 74 65 6d 22 00 2a  24 65 34 35 61 33 37 62  |ystem".*$e45a37b|
		00000070  66 2d 65 37 64 61 2d 34  31 32 39 2d 62 62 37 65  |f-e7da-4129-bb7e|
		00000080  2d 38 62 65 37 64 62 65  39 33 65 30 39 32 04 31  |-8be7dbe93e092.1|
		00000090  38 32 30 38 00 42 08 08  92 d4 a7 bd 06 10 00 5a  |8208.B.........Z|
		000000a0  13 0a 07 6b 38 73 2d 61  70 70 12 08 6b 75 62 65  |...k8s-app..kube|
		000000b0  2d 64 6e 73 5a 1f 0a 11  70 6f 64 2d 74 65 6d 70  |-dnsZ...pod-temp|
		000000c0  6c 61 74 65 2d 68 61 73  68 12 0a 36 36 38 64 36  |late-hash..668d [truncated 312754 chars]
	 >
	I0210 12:22:01.637286    5644 pod_ready.go:79] waiting up to 4m0s for pod "coredns-668d6bf9bc-w8rr9" in "kube-system" namespace to be "Ready" ...
	I0210 12:22:01.637341    5644 type.go:168] "Request Body" body=""
	I0210 12:22:01.637425    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-w8rr9
	I0210 12:22:01.637492    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:01.637492    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:01.637492    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:01.652412    5644 round_trippers.go:581] Response Status: 200 OK in 14 milliseconds
	I0210 12:22:01.652668    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:01.652668    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:01.652668    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:01.652668    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:01.652668    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:01.652668    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:01 GMT
	I0210 12:22:01.652668    5644 round_trippers.go:587]     Audit-Id: e43f9258-de17-4748-938c-8ffd3f3efad2
	I0210 12:22:01.653109    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  84 29 0a 99 19 0a 18 63  6f 72 65 64 6e 73 2d 36  |.).....coredns-6|
		00000020  36 38 64 36 62 66 39 62  63 2d 77 38 72 72 39 12  |68d6bf9bc-w8rr9.|
		00000030  13 63 6f 72 65 64 6e 73  2d 36 36 38 64 36 62 66  |.coredns-668d6bf|
		00000040  39 62 63 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |9bc-..kube-syste|
		00000050  6d 22 00 2a 24 65 34 35  61 33 37 62 66 2d 65 37  |m".*$e45a37bf-e7|
		00000060  64 61 2d 34 31 32 39 2d  62 62 37 65 2d 38 62 65  |da-4129-bb7e-8be|
		00000070  37 64 62 65 39 33 65 30  39 32 04 31 38 32 30 38  |7dbe93e092.18208|
		00000080  00 42 08 08 92 d4 a7 bd  06 10 00 5a 13 0a 07 6b  |.B.........Z...k|
		00000090  38 73 2d 61 70 70 12 08  6b 75 62 65 2d 64 6e 73  |8s-app..kube-dns|
		000000a0  5a 1f 0a 11 70 6f 64 2d  74 65 6d 70 6c 61 74 65  |Z...pod-template|
		000000b0  2d 68 61 73 68 12 0a 36  36 38 64 36 62 66 39 62  |-hash..668d6bf9b|
		000000c0  63 6a 53 0a 0a 52 65 70  6c 69 63 61 53 65 74 1a  |cjS..ReplicaSet [truncated 25040 chars]
	 >
	I0210 12:22:01.653348    5644 type.go:168] "Request Body" body=""
	I0210 12:22:01.653455    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:01.653455    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:01.653455    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:01.653535    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:01.656691    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:22:01.656691    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:01.656903    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:01.656903    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:01.656903    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:01.656903    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:01.656903    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:01 GMT
	I0210 12:22:01.656903    5644 round_trippers.go:587]     Audit-Id: 9b6271b6-10db-4ece-9ab5-5f2cb391cf62
	I0210 12:22:01.657228    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 9b 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  34 38 38 00 42 08 08 86  |1b262.18488.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23294 chars]
	 >
	I0210 12:22:01.657380    5644 pod_ready.go:98] node "multinode-032400" hosting pod "coredns-668d6bf9bc-w8rr9" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-032400" has status "Ready":"False"
	I0210 12:22:01.657463    5644 pod_ready.go:82] duration metric: took 20.1221ms for pod "coredns-668d6bf9bc-w8rr9" in "kube-system" namespace to be "Ready" ...
	E0210 12:22:01.657463    5644 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-032400" hosting pod "coredns-668d6bf9bc-w8rr9" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-032400" has status "Ready":"False"
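Each wait cycle above follows the same pattern: GET the pod, GET the node hosting it, then skip the pod (the pod_ready.go:98 "skipping!" message) whenever that node's Ready condition is not True. A sketch of the node-side check, assuming client-go types — the helper name is illustrative, not minikube's actual function:

    package readiness

    import (
        v1 "k8s.io/api/core/v1"
    )

    // nodeIsReady reports whether a node's Ready condition is True. A pod hosted
    // on a node failing this check is skipped rather than counted as Ready,
    // matching the log above.
    func nodeIsReady(node *v1.Node) bool {
        for _, cond := range node.Status.Conditions {
            if cond.Type == v1.NodeReady {
                return cond.Status == v1.ConditionTrue
            }
        }
        // No Ready condition recorded yet: treat as not ready.
        return false
    }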
	I0210 12:22:01.657463    5644 pod_ready.go:79] waiting up to 4m0s for pod "etcd-multinode-032400" in "kube-system" namespace to be "Ready" ...
	I0210 12:22:01.657612    5644 type.go:168] "Request Body" body=""
	I0210 12:22:01.657636    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-032400
	I0210 12:22:01.657692    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:01.657692    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:01.657736    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:01.659797    5644 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0210 12:22:01.659797    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:01.659797    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:01.659797    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:01.659797    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:01 GMT
	I0210 12:22:01.659797    5644 round_trippers.go:587]     Audit-Id: 5e3fa176-fac7-4476-9419-576207977b28
	I0210 12:22:01.659797    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:01.660107    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:01.660458    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  ad 2d 0a d9 1a 0a 15 65  74 63 64 2d 6d 75 6c 74  |.-.....etcd-mult|
		00000020  69 6e 6f 64 65 2d 30 33  32 34 30 30 12 00 1a 0b  |inode-032400....|
		00000030  6b 75 62 65 2d 73 79 73  74 65 6d 22 00 2a 24 32  |kube-system".*$2|
		00000040  36 64 34 31 31 30 66 2d  39 61 33 39 2d 34 38 64  |6d4110f-9a39-48d|
		00000050  65 2d 61 34 33 33 2d 35  36 37 61 37 35 37 38 39  |e-a433-567a75789|
		00000060  62 65 30 32 04 31 38 31  32 38 00 42 08 08 e6 de  |be02.18128.B....|
		00000070  a7 bd 06 10 00 5a 11 0a  09 63 6f 6d 70 6f 6e 65  |.....Z...compone|
		00000080  6e 74 12 04 65 74 63 64  5a 15 0a 04 74 69 65 72  |nt..etcdZ...tier|
		00000090  12 0d 63 6f 6e 74 72 6f  6c 2d 70 6c 61 6e 65 62  |..control-planeb|
		000000a0  4f 0a 30 6b 75 62 65 61  64 6d 2e 6b 75 62 65 72  |O.0kubeadm.kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 65 74 63 64 2e 61 64  |netes.io/etcd.ad|
		000000c0  76 65 72 74 69 73 65 2d  63 6c 69 65 6e 74 2d 75  |vertise-client- [truncated 27798 chars]
	 >
	I0210 12:22:01.660713    5644 type.go:168] "Request Body" body=""
	I0210 12:22:01.660796    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:01.660830    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:01.660853    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:01.660853    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:01.670795    5644 round_trippers.go:581] Response Status: 200 OK in 9 milliseconds
	I0210 12:22:01.670795    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:01.670795    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:01.670795    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:01 GMT
	I0210 12:22:01.670795    5644 round_trippers.go:587]     Audit-Id: 009b6a4e-d9c8-49dc-b5a1-a959b5ef507a
	I0210 12:22:01.670795    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:01.670795    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:01.670795    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:01.671805    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 9b 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  34 38 38 00 42 08 08 86  |1b262.18488.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23294 chars]
	 >
	I0210 12:22:01.671805    5644 pod_ready.go:98] node "multinode-032400" hosting pod "etcd-multinode-032400" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-032400" has status "Ready":"False"
	I0210 12:22:01.671805    5644 pod_ready.go:82] duration metric: took 14.3418ms for pod "etcd-multinode-032400" in "kube-system" namespace to be "Ready" ...
	E0210 12:22:01.671805    5644 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-032400" hosting pod "etcd-multinode-032400" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-032400" has status "Ready":"False"
	I0210 12:22:01.671805    5644 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-multinode-032400" in "kube-system" namespace to be "Ready" ...
	I0210 12:22:01.671805    5644 type.go:168] "Request Body" body=""
	I0210 12:22:01.671805    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-032400
	I0210 12:22:01.671805    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:01.671805    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:01.671805    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:01.675741    5644 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0210 12:22:01.675767    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:01.675834    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:01 GMT
	I0210 12:22:01.675834    5644 round_trippers.go:587]     Audit-Id: d69a5666-1be6-4da0-b8d4-2b6601449db8
	I0210 12:22:01.675834    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:01.675834    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:01.675834    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:01.675834    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:01.676377    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  ef 36 0a e9 1c 0a 1f 6b  75 62 65 2d 61 70 69 73  |.6.....kube-apis|
		00000020  65 72 76 65 72 2d 6d 75  6c 74 69 6e 6f 64 65 2d  |erver-multinode-|
		00000030  30 33 32 34 30 30 12 00  1a 0b 6b 75 62 65 2d 73  |032400....kube-s|
		00000040  79 73 74 65 6d 22 00 2a  24 39 65 36 38 38 61 61  |ystem".*$9e688aa|
		00000050  65 2d 30 39 64 61 2d 34  62 35 63 2d 62 61 34 64  |e-09da-4b5c-ba4d|
		00000060  2d 64 65 36 61 61 36 34  63 62 33 34 65 32 04 31  |-de6aa64cb34e2.1|
		00000070  38 31 31 38 00 42 08 08  e6 de a7 bd 06 10 00 5a  |8118.B.........Z|
		00000080  1b 0a 09 63 6f 6d 70 6f  6e 65 6e 74 12 0e 6b 75  |...component..ku|
		00000090  62 65 2d 61 70 69 73 65  72 76 65 72 5a 15 0a 04  |be-apiserverZ...|
		000000a0  74 69 65 72 12 0d 63 6f  6e 74 72 6f 6c 2d 70 6c  |tier..control-pl|
		000000b0  61 6e 65 62 56 0a 3f 6b  75 62 65 61 64 6d 2e 6b  |anebV.?kubeadm.k|
		000000c0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 6b 75 62  |ubernetes.io/ku [truncated 33804 chars]
	 >
	I0210 12:22:01.676594    5644 type.go:168] "Request Body" body=""
	I0210 12:22:01.676653    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:01.676653    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:01.676653    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:01.676727    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:01.682786    5644 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0210 12:22:01.682786    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:01.682786    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:01.683357    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:01.683357    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:01.683357    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:01.683357    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:01 GMT
	I0210 12:22:01.683357    5644 round_trippers.go:587]     Audit-Id: 85f72193-cbb6-4afe-86d7-b962187756a3
	I0210 12:22:01.683660    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 9b 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  34 38 38 00 42 08 08 86  |1b262.18488.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23294 chars]
	 >
	I0210 12:22:01.683821    5644 pod_ready.go:98] node "multinode-032400" hosting pod "kube-apiserver-multinode-032400" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-032400" has status "Ready":"False"
	I0210 12:22:01.683821    5644 pod_ready.go:82] duration metric: took 12.0158ms for pod "kube-apiserver-multinode-032400" in "kube-system" namespace to be "Ready" ...
	E0210 12:22:01.683891    5644 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-032400" hosting pod "kube-apiserver-multinode-032400" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-032400" has status "Ready":"False"
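Every request in this transcript sends Accept: application/vnd.kubernetes.protobuf,application/json, and the apiserver answers in protobuf — which is why the response bodies are logged as hex dumps rather than JSON. That negotiation is a standard client-go setting; a sketch of configuring it explicitly (the two content-type strings are exactly what the logged Accept header carries, the function name is illustrative):

    package apiclient

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func newProtobufClient() (*kubernetes.Clientset, error) {
        config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            return nil, err
        }
        // Prefer protobuf, fall back to JSON -- this yields the Accept header
        // seen on every request in the log.
        config.AcceptContentTypes = "application/vnd.kubernetes.protobuf,application/json"
        config.ContentType = "application/vnd.kubernetes.protobuf"
        return kubernetes.NewForConfig(config)
    }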
	I0210 12:22:01.683891    5644 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-multinode-032400" in "kube-system" namespace to be "Ready" ...
	I0210 12:22:01.683951    5644 type.go:168] "Request Body" body=""
	I0210 12:22:01.684014    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-032400
	I0210 12:22:01.684014    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:01.684014    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:01.684014    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:01.687813    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:22:01.687813    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:01.687813    5644 round_trippers.go:587]     Audit-Id: d05685b2-8ffe-48d1-9037-dae32ff2a9a1
	I0210 12:22:01.687900    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:01.687900    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:01.687900    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:01.687900    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:01.687900    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:01 GMT
	I0210 12:22:01.688679    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  b1 33 0a d5 1d 0a 28 6b  75 62 65 2d 63 6f 6e 74  |.3....(kube-cont|
		00000020  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 2d 6d  |roller-manager-m|
		00000030  75 6c 74 69 6e 6f 64 65  2d 30 33 32 34 30 30 12  |ultinode-032400.|
		00000040  00 1a 0b 6b 75 62 65 2d  73 79 73 74 65 6d 22 00  |...kube-system".|
		00000050  2a 24 63 33 65 35 30 35  34 30 2d 64 64 36 36 2d  |*$c3e50540-dd66-|
		00000060  34 37 61 30 2d 61 34 33  33 2d 36 32 32 64 66 35  |47a0-a433-622df5|
		00000070  39 66 62 34 34 31 32 04  31 38 31 30 38 00 42 08  |9fb4412.18108.B.|
		00000080  08 8b d4 a7 bd 06 10 00  5a 24 0a 09 63 6f 6d 70  |........Z$..comp|
		00000090  6f 6e 65 6e 74 12 17 6b  75 62 65 2d 63 6f 6e 74  |onent..kube-cont|
		000000a0  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 5a 15  |roller-managerZ.|
		000000b0  0a 04 74 69 65 72 12 0d  63 6f 6e 74 72 6f 6c 2d  |..tier..control-|
		000000c0  70 6c 61 6e 65 62 3d 0a  19 6b 75 62 65 72 6e 65  |planeb=..kubern [truncated 31594 chars]
	 >
	I0210 12:22:01.688891    5644 type.go:168] "Request Body" body=""
	I0210 12:22:01.688973    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:01.689091    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:01.689091    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:01.689091    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:01.691800    5644 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0210 12:22:01.691800    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:01.691800    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:01.691800    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:01.691800    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:01.691800    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:01.691800    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:01 GMT
	I0210 12:22:01.691800    5644 round_trippers.go:587]     Audit-Id: a139c2a2-e3b7-4bb1-95de-6c711636e46e
	I0210 12:22:01.691800    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 9b 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  34 38 38 00 42 08 08 86  |1b262.18488.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23294 chars]
	 >
	I0210 12:22:01.691800    5644 pod_ready.go:98] node "multinode-032400" hosting pod "kube-controller-manager-multinode-032400" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-032400" has status "Ready":"False"
	I0210 12:22:01.691800    5644 pod_ready.go:82] duration metric: took 7.9091ms for pod "kube-controller-manager-multinode-032400" in "kube-system" namespace to be "Ready" ...
	E0210 12:22:01.691800    5644 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-032400" hosting pod "kube-controller-manager-multinode-032400" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-032400" has status "Ready":"False"
	I0210 12:22:01.691800    5644 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-rrh82" in "kube-system" namespace to be "Ready" ...
	I0210 12:22:01.691800    5644 type.go:168] "Request Body" body=""
	I0210 12:22:01.822378    5644 request.go:661] Waited for 130.5768ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rrh82
	I0210 12:22:01.822378    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rrh82
	I0210 12:22:01.822378    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:01.822378    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:01.822378    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:01.826767    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:22:01.826854    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:01.826854    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:01.826854    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:01 GMT
	I0210 12:22:01.826854    5644 round_trippers.go:587]     Audit-Id: d972e39d-57f5-40ea-97e3-bd8d24011f3f
	I0210 12:22:01.826854    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:01.826854    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:01.826854    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:01.827328    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  91 26 0a c1 14 0a 10 6b  75 62 65 2d 70 72 6f 78  |.&.....kube-prox|
		00000020  79 2d 72 72 68 38 32 12  0b 6b 75 62 65 2d 70 72  |y-rrh82..kube-pr|
		00000030  6f 78 79 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |oxy-..kube-syste|
		00000040  6d 22 00 2a 24 39 61 64  37 66 32 38 31 2d 66 30  |m".*$9ad7f281-f0|
		00000050  32 32 2d 34 66 33 62 2d  62 32 30 36 2d 33 39 63  |22-4f3b-b206-39c|
		00000060  65 34 32 37 31 33 63 66  39 32 04 31 38 34 34 38  |e42713cf92.18448|
		00000070  00 42 08 08 92 d4 a7 bd  06 10 00 5a 26 0a 18 63  |.B.........Z&..c|
		00000080  6f 6e 74 72 6f 6c 6c 65  72 2d 72 65 76 69 73 69  |ontroller-revisi|
		00000090  6f 6e 2d 68 61 73 68 12  0a 35 36 36 64 37 62 39  |on-hash..566d7b9|
		000000a0  66 38 35 5a 15 0a 07 6b  38 73 2d 61 70 70 12 0a  |f85Z...k8s-app..|
		000000b0  6b 75 62 65 2d 70 72 6f  78 79 5a 1c 0a 17 70 6f  |kube-proxyZ...po|
		000000c0  64 2d 74 65 6d 70 6c 61  74 65 2d 67 65 6e 65 72  |d-template-gene [truncated 23220 chars]
	 >
	I0210 12:22:01.827506    5644 type.go:168] "Request Body" body=""
	I0210 12:22:02.021410    5644 request.go:661] Waited for 193.9028ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:02.021890    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:02.021935    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:02.022008    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:02.022008    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:02.024792    5644 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0210 12:22:02.025684    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:02.025684    5644 round_trippers.go:587]     Audit-Id: 5547391c-e589-4649-b88c-fe1cd1ade140
	I0210 12:22:02.025684    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:02.025684    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:02.025684    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:02.025684    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:02.025684    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:02 GMT
	I0210 12:22:02.025988    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 9b 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  34 38 38 00 42 08 08 86  |1b262.18488.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23294 chars]
	 >
	I0210 12:22:02.026123    5644 pod_ready.go:98] node "multinode-032400" hosting pod "kube-proxy-rrh82" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-032400" has status "Ready":"False"
	I0210 12:22:02.026189    5644 pod_ready.go:82] duration metric: took 334.3852ms for pod "kube-proxy-rrh82" in "kube-system" namespace to be "Ready" ...
	E0210 12:22:02.026189    5644 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-032400" hosting pod "kube-proxy-rrh82" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-032400" has status "Ready":"False"
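The request.go:661 lines above ("Waited for ... due to client-side throttling, not priority and fairness") come from client-go's token-bucket rate limiter, not from the server: with the library defaults of QPS=5 and Burst=10, this burst of pod and node GETs exhausts the burst budget and subsequent requests queue for roughly one 200ms slot each, matching the 130–195ms waits logged here. The limiter is configurable on the rest.Config; a sketch with illustrative values:

    package apiclient

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func newUnthrottledClient() (*kubernetes.Clientset, error) {
        config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            return nil, err
        }
        // client-go defaults are QPS=5, Burst=10; the waits logged above are the
        // limiter pacing requests to that budget. Raising both reduces queueing.
        config.QPS = 50
        config.Burst = 100
        return kubernetes.NewForConfig(config)
    }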
	I0210 12:22:02.026189    5644 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-tbtqd" in "kube-system" namespace to be "Ready" ...
	I0210 12:22:02.026301    5644 type.go:168] "Request Body" body=""
	I0210 12:22:02.220942    5644 request.go:661] Waited for 194.639ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tbtqd
	I0210 12:22:02.221216    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tbtqd
	I0210 12:22:02.221216    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:02.221216    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:02.221216    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:02.224751    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:22:02.224751    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:02.224751    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:02 GMT
	I0210 12:22:02.224751    5644 round_trippers.go:587]     Audit-Id: a430131e-7290-46a3-8378-1874e4ed1dd4
	I0210 12:22:02.224751    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:02.224751    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:02.224751    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:02.224751    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:02.225476    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  aa 26 0a c3 15 0a 10 6b  75 62 65 2d 70 72 6f 78  |.&.....kube-prox|
		00000020  79 2d 74 62 74 71 64 12  0b 6b 75 62 65 2d 70 72  |y-tbtqd..kube-pr|
		00000030  6f 78 79 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |oxy-..kube-syste|
		00000040  6d 22 00 2a 24 62 64 66  38 63 62 31 30 2d 30 35  |m".*$bdf8cb10-05|
		00000050  62 65 2d 34 36 30 62 2d  61 39 63 36 2d 62 63 35  |be-460b-a9c6-bc5|
		00000060  31 65 61 38 38 34 32 36  38 32 04 31 37 34 32 38  |1ea8842682.17428|
		00000070  00 42 08 08 e9 d7 a7 bd  06 10 00 5a 26 0a 18 63  |.B.........Z&..c|
		00000080  6f 6e 74 72 6f 6c 6c 65  72 2d 72 65 76 69 73 69  |ontroller-revisi|
		00000090  6f 6e 2d 68 61 73 68 12  0a 35 36 36 64 37 62 39  |on-hash..566d7b9|
		000000a0  66 38 35 5a 15 0a 07 6b  38 73 2d 61 70 70 12 0a  |f85Z...k8s-app..|
		000000b0  6b 75 62 65 2d 70 72 6f  78 79 5a 1c 0a 17 70 6f  |kube-proxyZ...po|
		000000c0  64 2d 74 65 6d 70 6c 61  74 65 2d 67 65 6e 65 72  |d-template-gene [truncated 23308 chars]
	 >
	I0210 12:22:02.225681    5644 type.go:168] "Request Body" body=""
	I0210 12:22:02.421572    5644 request.go:661] Waited for 195.8889ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.129.181:8443/api/v1/nodes/multinode-032400-m03
	I0210 12:22:02.422063    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400-m03
	I0210 12:22:02.422063    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:02.422063    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:02.422063    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:02.426120    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:22:02.426120    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:02.426120    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:02.426120    5644 round_trippers.go:587]     Content-Length: 3883
	I0210 12:22:02.426120    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:02 GMT
	I0210 12:22:02.426120    5644 round_trippers.go:587]     Audit-Id: 63310cb4-e21f-4bf0-b4fa-a4436afd2f79
	I0210 12:22:02.426297    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:02.426297    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:02.426297    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:02.426571    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 94 1e 0a eb 12 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 33 12 00 1a 00  |e-032400-m03....|
		00000030  22 00 2a 24 65 33 35 38  36 61 30 65 2d 35 36 63  |".*$e3586a0e-56c|
		00000040  30 2d 34 65 34 39 2d 39  64 64 33 2d 38 33 65 35  |0-4e49-9dd3-83e5|
		00000050  32 39 63 66 65 35 63 34  32 04 31 38 35 34 38 00  |29cfe5c42.18548.|
		00000060  42 08 08 db dc a7 bd 06  10 00 5a 20 0a 17 62 65  |B.........Z ..be|
		00000070  74 61 2e 6b 75 62 65 72  6e 65 74 65 73 2e 69 6f  |ta.kubernetes.io|
		00000080  2f 61 72 63 68 12 05 61  6d 64 36 34 5a 1e 0a 15  |/arch..amd64Z...|
		00000090  62 65 74 61 2e 6b 75 62  65 72 6e 65 74 65 73 2e  |beta.kubernetes.|
		000000a0  69 6f 2f 6f 73 12 05 6c  69 6e 75 78 5a 1b 0a 12  |io/os..linuxZ...|
		000000b0  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 61 72  |kubernetes.io/ar|
		000000c0  63 68 12 05 61 6d 64 36  34 5a 2e 0a 16 6b 75 62  |ch..amd64Z...ku [truncated 18168 chars]
	 >
	I0210 12:22:02.426865    5644 pod_ready.go:98] node "multinode-032400-m03" hosting pod "kube-proxy-tbtqd" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-032400-m03" has status "Ready":"Unknown"
	I0210 12:22:02.426865    5644 pod_ready.go:82] duration metric: took 400.6718ms for pod "kube-proxy-tbtqd" in "kube-system" namespace to be "Ready" ...
	E0210 12:22:02.426865    5644 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-032400-m03" hosting pod "kube-proxy-tbtqd" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-032400-m03" has status "Ready":"Unknown"
	I0210 12:22:02.426865    5644 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-xltxj" in "kube-system" namespace to be "Ready" ...
	I0210 12:22:02.426965    5644 type.go:168] "Request Body" body=""
	I0210 12:22:02.621158    5644 request.go:661] Waited for 194.0835ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xltxj
	I0210 12:22:02.621158    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xltxj
	I0210 12:22:02.621158    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:02.621158    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:02.621158    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:02.625783    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:22:02.625839    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:02.625839    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:02 GMT
	I0210 12:22:02.625839    5644 round_trippers.go:587]     Audit-Id: ce403f58-8e8a-4d9b-ab86-b591bcbaefc2
	I0210 12:22:02.625839    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:02.625909    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:02.625909    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:02.625909    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:02.626481    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  a5 25 0a bf 14 0a 10 6b  75 62 65 2d 70 72 6f 78  |.%.....kube-prox|
		00000020  79 2d 78 6c 74 78 6a 12  0b 6b 75 62 65 2d 70 72  |y-xltxj..kube-pr|
		00000030  6f 78 79 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |oxy-..kube-syste|
		00000040  6d 22 00 2a 24 39 61 35  65 35 38 62 63 2d 35 34  |m".*$9a5e58bc-54|
		00000050  62 31 2d 34 33 62 39 2d  61 38 38 39 2d 30 64 35  |b1-43b9-a889-0d5|
		00000060  30 64 34 33 35 61 66 38  33 32 03 36 33 35 38 00  |0d435af832.6358.|
		00000070  42 08 08 d0 d5 a7 bd 06  10 00 5a 26 0a 18 63 6f  |B.........Z&..co|
		00000080  6e 74 72 6f 6c 6c 65 72  2d 72 65 76 69 73 69 6f  |ntroller-revisio|
		00000090  6e 2d 68 61 73 68 12 0a  35 36 36 64 37 62 39 66  |n-hash..566d7b9f|
		000000a0  38 35 5a 15 0a 07 6b 38  73 2d 61 70 70 12 0a 6b  |85Z...k8s-app..k|
		000000b0  75 62 65 2d 70 72 6f 78  79 5a 1c 0a 17 70 6f 64  |ube-proxyZ...pod|
		000000c0  2d 74 65 6d 70 6c 61 74  65 2d 67 65 6e 65 72 61  |-template-gener [truncated 22671 chars]
	 >
	I0210 12:22:02.626591    5644 type.go:168] "Request Body" body=""
	I0210 12:22:02.820926    5644 request.go:661] Waited for 194.3329ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.129.181:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:22:02.821244    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:22:02.821244    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:02.821244    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:02.821244    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:02.827886    5644 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0210 12:22:02.827886    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:02.827886    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:02.827886    5644 round_trippers.go:587]     Content-Length: 3464
	I0210 12:22:02.827886    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:02 GMT
	I0210 12:22:02.827886    5644 round_trippers.go:587]     Audit-Id: 926d335f-6a41-4edc-a75f-e935e0330864
	I0210 12:22:02.827886    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:02.827886    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:02.827886    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:02.827886    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 f1 1a 0a b0 0f 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 62 30 35 36  31 63 32 32 2d 64 62 66  |".*$b0561c22-dbf|
		00000040  32 2d 34 32 61 30 2d 62  64 66 33 2d 34 65 30 61  |2-42a0-bdf3-4e0a|
		00000050  62 37 61 39 61 66 30 65  32 04 31 36 38 32 38 00  |b7a9af0e2.16828.|
		00000060  42 08 08 d0 d5 a7 bd 06  10 00 5a 20 0a 17 62 65  |B.........Z ..be|
		00000070  74 61 2e 6b 75 62 65 72  6e 65 74 65 73 2e 69 6f  |ta.kubernetes.io|
		00000080  2f 61 72 63 68 12 05 61  6d 64 36 34 5a 1e 0a 15  |/arch..amd64Z...|
		00000090  62 65 74 61 2e 6b 75 62  65 72 6e 65 74 65 73 2e  |beta.kubernetes.|
		000000a0  69 6f 2f 6f 73 12 05 6c  69 6e 75 78 5a 1b 0a 12  |io/os..linuxZ...|
		000000b0  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 61 72  |kubernetes.io/ar|
		000000c0  63 68 12 05 61 6d 64 36  34 5a 2e 0a 16 6b 75 62  |ch..amd64Z...ku [truncated 16111 chars]
	 >
	I0210 12:22:02.827886    5644 pod_ready.go:93] pod "kube-proxy-xltxj" in "kube-system" namespace has status "Ready":"True"
	I0210 12:22:02.827886    5644 pod_ready.go:82] duration metric: took 400.9163ms for pod "kube-proxy-xltxj" in "kube-system" namespace to be "Ready" ...
	I0210 12:22:02.827886    5644 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-multinode-032400" in "kube-system" namespace to be "Ready" ...
	I0210 12:22:02.827886    5644 type.go:168] "Request Body" body=""
	I0210 12:22:03.022004    5644 request.go:661] Waited for 194.1156ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-032400
	I0210 12:22:03.022004    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-032400
	I0210 12:22:03.022004    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:03.022004    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:03.022004    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:03.027621    5644 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 12:22:03.027692    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:03.027692    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:03.027692    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:03.027692    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:03 GMT
	I0210 12:22:03.027692    5644 round_trippers.go:587]     Audit-Id: c65ecd29-6e7c-4893-8023-1a64bae0b0dc
	I0210 12:22:03.027692    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:03.027692    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:03.029437    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  aa 25 0a bd 18 0a 1f 6b  75 62 65 2d 73 63 68 65  |.%.....kube-sche|
		00000020  64 75 6c 65 72 2d 6d 75  6c 74 69 6e 6f 64 65 2d  |duler-multinode-|
		00000030  30 33 32 34 30 30 12 00  1a 0b 6b 75 62 65 2d 73  |032400....kube-s|
		00000040  79 73 74 65 6d 22 00 2a  24 63 66 62 63 62 66 33  |ystem".*$cfbcbf3|
		00000050  37 2d 36 35 63 65 2d 34  62 32 31 2d 39 66 32 36  |7-65ce-4b21-9f26|
		00000060  2d 31 38 64 61 66 63 36  65 34 34 38 30 32 04 31  |-18dafc6e44802.1|
		00000070  38 30 37 38 00 42 08 08  88 d4 a7 bd 06 10 00 5a  |8078.B.........Z|
		00000080  1b 0a 09 63 6f 6d 70 6f  6e 65 6e 74 12 0e 6b 75  |...component..ku|
		00000090  62 65 2d 73 63 68 65 64  75 6c 65 72 5a 15 0a 04  |be-schedulerZ...|
		000000a0  74 69 65 72 12 0d 63 6f  6e 74 72 6f 6c 2d 70 6c  |tier..control-pl|
		000000b0  61 6e 65 62 3d 0a 19 6b  75 62 65 72 6e 65 74 65  |aneb=..kubernete|
		000000c0  73 2e 69 6f 2f 63 6f 6e  66 69 67 2e 68 61 73 68  |s.io/config.has [truncated 22676 chars]
	 >
	I0210 12:22:03.029690    5644 type.go:168] "Request Body" body=""
	I0210 12:22:03.220667    5644 request.go:661] Waited for 190.9086ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:03.220667    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:03.220667    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:03.221275    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:03.221275    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:03.225458    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:22:03.225458    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:03.225458    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:03 GMT
	I0210 12:22:03.225458    5644 round_trippers.go:587]     Audit-Id: 81ebe6dc-aded-491a-af93-9c0264613f58
	I0210 12:22:03.225458    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:03.225458    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:03.225458    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:03.225458    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:03.225832    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 9b 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  34 38 38 00 42 08 08 86  |1b262.18488.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23294 chars]
	 >
	I0210 12:22:03.226059    5644 pod_ready.go:98] node "multinode-032400" hosting pod "kube-scheduler-multinode-032400" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-032400" has status "Ready":"False"
	I0210 12:22:03.226089    5644 pod_ready.go:82] duration metric: took 398.1983ms for pod "kube-scheduler-multinode-032400" in "kube-system" namespace to be "Ready" ...
	E0210 12:22:03.226089    5644 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-032400" hosting pod "kube-scheduler-multinode-032400" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-032400" has status "Ready":"False"
	I0210 12:22:03.226089    5644 pod_ready.go:39] duration metric: took 1.5979721s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0210 12:22:03.226089    5644 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0210 12:22:03.242649    5644 command_runner.go:130] > -16
	I0210 12:22:03.243171    5644 ops.go:34] apiserver oom_adj: -16
	I0210 12:22:03.243171    5644 kubeadm.go:597] duration metric: took 12.5847251s to restartPrimaryControlPlane
	I0210 12:22:03.243171    5644 kubeadm.go:394] duration metric: took 12.6481931s to StartCluster
	I0210 12:22:03.243171    5644 settings.go:142] acquiring lock: {Name:mk66ab2e0bae08b477c4ed9caa26e688e6ce3248 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 12:22:03.243426    5644 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0210 12:22:03.245169    5644 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\kubeconfig: {Name:mkb19224ea40e2aed3ce8c31a956f5aee129caa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
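The settings.go and lock.go lines above show the kubeconfig update guarded by a named file lock with a 500ms retry delay and 1m timeout before the file is rewritten. A sketch of the rewrite itself using client-go's clientcmd package — the lock handling is omitted, the function name is illustrative, and the cluster name and server are taken from this run:

    package kubeconf

    import (
        "k8s.io/client-go/tools/clientcmd"
    )

    // updateServer repoints a named cluster entry and writes the file back,
    // the kind of edit the "Updating kubeconfig" step performs.
    func updateServer(path, cluster, server string) error {
        cfg, err := clientcmd.LoadFromFile(path)
        if err != nil {
            return err
        }
        if c, ok := cfg.Clusters[cluster]; ok {
            c.Server = server // e.g. https://172.29.129.181:8443
        }
        return clientcmd.WriteToFile(*cfg, path)
    }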
	I0210 12:22:03.246385    5644 start.go:235] Will wait 6m0s for node &{Name: IP:172.29.129.181 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0210 12:22:03.246385    5644 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0210 12:22:03.251667    5644 out.go:177] * Verifying Kubernetes components...
	I0210 12:22:03.246916    5644 config.go:182] Loaded profile config "multinode-032400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0210 12:22:03.253672    5644 out.go:177] * Enabled addons: 
	I0210 12:22:03.259671    5644 addons.go:514] duration metric: took 13.3865ms for enable addons: enabled=[]
	I0210 12:22:03.264322    5644 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 12:22:03.510937    5644 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0210 12:22:03.536618    5644 node_ready.go:35] waiting up to 6m0s for node "multinode-032400" to be "Ready" ...
	I0210 12:22:03.536767    5644 type.go:168] "Request Body" body=""
	I0210 12:22:03.536903    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:03.536903    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:03.536903    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:03.536903    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:03.540238    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:22:03.540238    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:03.540238    5644 round_trippers.go:587]     Audit-Id: fa6ded42-37b6-42de-ae27-4373706be825
	I0210 12:22:03.540238    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:03.540238    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:03.540238    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:03.540932    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:03.540932    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:03 GMT
	I0210 12:22:03.541202    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 9b 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  34 38 38 00 42 08 08 86  |1b262.18488.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23294 chars]
	 >
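
The body is dumped as hex because the client negotiated application/vnd.kubernetes.protobuf (the Accept header above; client-go sends it when the rest.Config ContentType/AcceptContentTypes fields name that media type). The payload opens with the 4-byte magic 6b 38 73 00 ("k8s\0") followed by a protobuf-encoded runtime.Unknown envelope, which is why "v1..Node" and the node name are legible in the ASCII column. Given a complete (untruncated) capture, apimachinery's protobuf serializer can decode it; a sketch, with the file name as a placeholder:

package main

import (
	"fmt"
	"os"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/runtime/serializer/protobuf"
	"k8s.io/client-go/kubernetes/scheme"
)

func main() {
	// node.pb: raw response bytes, starting with the 6b 38 73 00 magic.
	raw, err := os.ReadFile("node.pb")
	if err != nil {
		panic(err)
	}
	// The client-go scheme serves as both ObjectCreater and ObjectTyper.
	s := protobuf.NewSerializer(scheme.Scheme, scheme.Scheme)
	obj, _, err := s.Decode(raw, nil, nil)
	if err != nil {
		panic(err)
	}
	node := obj.(*v1.Node)
	fmt.Println(node.Name, node.Status.Conditions)
}
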
	I0210 12:22:04.037321    5644 type.go:168] "Request Body" body=""
	I0210 12:22:04.037321    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:04.037321    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:04.037321    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:04.037321    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:04.041742    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:22:04.041816    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:04.041816    5644 round_trippers.go:587]     Audit-Id: abd2b64a-44fa-409b-a3c8-f28c2104a97d
	I0210 12:22:04.041816    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:04.041816    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:04.041816    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:04.041816    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:04.041894    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:04 GMT
	I0210 12:22:04.042592    5644 type.go:168] "Response Body" body=<
		(hex dump omitted: byte-for-byte identical to the 12:22:03.541 response body above)
	 >
	I0210 12:22:04.537270    5644 type.go:168] "Request Body" body=""
	I0210 12:22:04.537270    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:04.537722    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:04.537722    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:04.537722    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:04.542227    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:22:04.542296    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:04.542296    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:04.542296    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:04.542296    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:04.542296    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:04.542296    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:04 GMT
	I0210 12:22:04.542296    5644 round_trippers.go:587]     Audit-Id: 51fbcb0c-18b6-4737-bb65-534b8a59ee1b
	I0210 12:22:04.542763    5644 type.go:168] "Response Body" body=<
		(hex dump omitted: byte-for-byte identical to the 12:22:03.541 response body above)
	 >
	I0210 12:22:05.037720    5644 type.go:168] "Request Body" body=""
	I0210 12:22:05.037883    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:05.037883    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:05.037883    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:05.037883    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:05.045193    5644 round_trippers.go:581] Response Status: 200 OK in 7 milliseconds
	I0210 12:22:05.045193    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:05.045193    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:05.045193    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:05.045193    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:05.045193    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:05.045193    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:05 GMT
	I0210 12:22:05.045193    5644 round_trippers.go:587]     Audit-Id: 4fc3f950-2374-4f26-8b21-b2272364078f
	I0210 12:22:05.045464    5644 type.go:168] "Response Body" body=<
		(hex dump omitted: byte-for-byte identical to the 12:22:03.541 response body above)
	 >
	I0210 12:22:05.537462    5644 type.go:168] "Request Body" body=""
	I0210 12:22:05.537462    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:05.537462    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:05.537462    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:05.537462    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:05.541169    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:22:05.541169    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:05.541169    5644 round_trippers.go:587]     Audit-Id: 6b39847c-fb75-45ee-a86e-0fa0b9716c77
	I0210 12:22:05.541169    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:05.541169    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:05.541169    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:05.541169    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:05.541169    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:05 GMT
	I0210 12:22:05.541606    5644 type.go:168] "Response Body" body=<
		(hex dump omitted: byte-for-byte identical to the 12:22:03.541 response body above)
	 >
	I0210 12:22:05.541815    5644 node_ready.go:53] node "multinode-032400" has status "Ready":"False"
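
node_ready.go:53 is printing the same Ready condition that nodeReady extracts in the sketch above. To push one of these logged bodies through the protobuf decoder, the hex lines first have to be folded back into bytes; a small parser for the log's "offset  hex  |ascii|" layout (note each dump here is truncated after 0xd0 bytes, so only the leading header fields of the Node are recoverable):

package main

import (
	"bufio"
	"encoding/hex"
	"fmt"
	"os"
	"strings"
)

func main() {
	// Reads hex-dump lines on stdin and writes the reassembled bytes
	// to node.pb for the decoder sketched earlier.
	var buf []byte
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		// Expect an 8-char offset, a space, hex columns, then |ascii|.
		if len(line) < 10 || line[8] != ' ' {
			continue
		}
		hexPart := line[9:]
		if i := strings.Index(hexPart, "|"); i >= 0 {
			hexPart = hexPart[:i]
		}
		b, err := hex.DecodeString(strings.ReplaceAll(hexPart, " ", ""))
		if err != nil {
			continue // not a dump line; skip
		}
		buf = append(buf, b...)
	}
	fmt.Printf("reconstructed %d bytes\n", len(buf))
	if err := os.WriteFile("node.pb", buf, 0o644); err != nil {
		panic(err)
	}
}
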
	I0210 12:22:06.037529    5644 type.go:168] "Request Body" body=""
	I0210 12:22:06.037529    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:06.037529    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:06.037529    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:06.037529    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:06.041672    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:22:06.041672    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:06.041672    5644 round_trippers.go:587]     Audit-Id: bf5b2110-06ee-4550-9bc9-134874981b51
	I0210 12:22:06.041672    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:06.041672    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:06.041672    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:06.041672    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:06.041904    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:06 GMT
	I0210 12:22:06.042173    5644 type.go:168] "Response Body" body=<
		(hex dump omitted: byte-for-byte identical to the 12:22:03.541 response body above)
	 >
	I0210 12:22:06.537134    5644 type.go:168] "Request Body" body=""
	I0210 12:22:06.537643    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:06.537643    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:06.537643    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:06.537643    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:06.545049    5644 round_trippers.go:581] Response Status: 200 OK in 7 milliseconds
	I0210 12:22:06.545254    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:06.545254    5644 round_trippers.go:587]     Audit-Id: be88c43b-2bfb-4a92-b0e4-774c4b8ed8c2
	I0210 12:22:06.545310    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:06.545310    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:06.545310    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:06.545310    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:06.545310    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:06 GMT
	I0210 12:22:06.545716    5644 type.go:168] "Response Body" body=<
		(hex dump omitted: byte-for-byte identical to the 12:22:03.541 response body above)
	 >
	I0210 12:22:07.037385    5644 type.go:168] "Request Body" body=""
	I0210 12:22:07.037594    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:07.037594    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:07.037594    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:07.037594    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:07.040885    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:22:07.041302    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:07.041302    5644 round_trippers.go:587]     Audit-Id: 64e4567d-030a-4ff4-b8bc-0886ef4c407a
	I0210 12:22:07.041302    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:07.041302    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:07.041302    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:07.041302    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:07.041302    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:07 GMT
	I0210 12:22:07.041629    5644 type.go:168] "Response Body" body=<
		(hex dump omitted: byte-for-byte identical to the 12:22:03.541 response body above)
	 >
	I0210 12:22:07.537694    5644 type.go:168] "Request Body" body=""
	I0210 12:22:07.538179    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:07.538259    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:07.538259    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:07.538259    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:07.546279    5644 round_trippers.go:581] Response Status: 200 OK in 7 milliseconds
	I0210 12:22:07.546279    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:07.546279    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:07.546279    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:07.546279    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:07.546279    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:07 GMT
	I0210 12:22:07.546279    5644 round_trippers.go:587]     Audit-Id: a28bc9dd-395b-4895-b3c6-dbd8a334a7c7
	I0210 12:22:07.546279    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:07.546279    5644 type.go:168] "Response Body" body=<
		(hex dump omitted: byte-for-byte identical to the 12:22:03.541 response body above)
	 >
	I0210 12:22:07.546279    5644 node_ready.go:53] node "multinode-032400" has status "Ready":"False"
	I0210 12:22:08.037341    5644 type.go:168] "Request Body" body=""
	I0210 12:22:08.037909    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:08.037909    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:08.037909    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:08.037909    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:08.041690    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:22:08.041690    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:08.041690    5644 round_trippers.go:587]     Audit-Id: 77899642-a8d7-4018-b102-91907afd4444
	I0210 12:22:08.041763    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:08.041763    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:08.041763    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:08.041763    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:08.041763    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:08 GMT
	I0210 12:22:08.042601    5644 type.go:168] "Response Body" body=<
		(hex dump omitted: byte-for-byte identical to the 12:22:03.541 response body above)
	 >
	I0210 12:22:08.537088    5644 type.go:168] "Request Body" body=""
	I0210 12:22:08.537088    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:08.537088    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:08.537427    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:08.537427    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:08.541504    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:22:08.541504    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:08.541504    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:08.541504    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:08.541504    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:08.541504    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:08.541504    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:08 GMT
	I0210 12:22:08.541504    5644 round_trippers.go:587]     Audit-Id: d299b160-cef4-4c83-9753-1eeb230ab6de
	I0210 12:22:08.541504    5644 type.go:168] "Response Body" body=<
		(hex dump omitted: byte-for-byte identical to the 12:22:03.541 response body above)
	 >
	I0210 12:22:09.037125    5644 type.go:168] "Request Body" body=""
	I0210 12:22:09.037125    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:09.037125    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:09.037125    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:09.037125    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:09.043980    5644 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0210 12:22:09.043980    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:09.043980    5644 round_trippers.go:587]     Audit-Id: 9341f127-e561-4b1d-99d8-e651eee068e5
	I0210 12:22:09.043980    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:09.043980    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:09.043980    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:09.043980    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:09.043980    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:09 GMT
	I0210 12:22:09.044537    5644 type.go:168] "Response Body" body=<
		(hex dump omitted: byte-for-byte identical to the 12:22:03.541 response body above)
	 >
	I0210 12:22:09.537949    5644 type.go:168] "Request Body" body=""
	I0210 12:22:09.538067    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:09.538067    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:09.538067    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:09.538067    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:09.541508    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:22:09.541508    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:09.541508    5644 round_trippers.go:587]     Audit-Id: 5d30e869-44a7-4d4b-8720-2aa4e6554a09
	I0210 12:22:09.541508    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:09.542088    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:09.542088    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:09.542088    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:09.542088    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:09 GMT
	I0210 12:22:09.542353    5644 type.go:168] "Response Body" body=<
		(hex dump omitted: byte-for-byte identical to the 12:22:03.541 response body above)
	 >
	I0210 12:22:10.037165    5644 type.go:168] "Request Body" body=""
	I0210 12:22:10.037165    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:10.037165    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:10.037165    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:10.037165    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:10.041588    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:22:10.041588    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:10.041588    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:10.041588    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:10.041711    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:10 GMT
	I0210 12:22:10.041711    5644 round_trippers.go:587]     Audit-Id: e3cf349f-f9e8-4a72-846d-095f4465c548
	I0210 12:22:10.041711    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:10.041711    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:10.042059    5644 type.go:168] "Response Body" body=<
		(hex dump omitted: byte-for-byte identical to the 12:22:03.541 response body above)
	 >
	I0210 12:22:10.042378    5644 node_ready.go:53] node "multinode-032400" has status "Ready":"False"
	I0210 12:22:10.538442    5644 type.go:168] "Request Body" body=""
	I0210 12:22:10.539157    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:10.539157    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:10.539157    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:10.539157    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:10.543238    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:22:10.543238    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:10.543238    5644 round_trippers.go:587]     Audit-Id: 2ac4f8f2-9074-439b-8c13-d954dbd918f2
	I0210 12:22:10.543238    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:10.543238    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:10.543238    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:10.543238    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:10.543238    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:10 GMT
	I0210 12:22:10.543566    5644 type.go:168] "Response Body" body=<
		(hex dump omitted: byte-for-byte identical to the 12:22:03.541 response body above)
	 >
	I0210 12:22:11.037528    5644 type.go:168] "Request Body" body=""
	I0210 12:22:11.037528    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:11.037528    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:11.037528    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:11.037528    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:11.041768    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:22:11.042059    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:11.042059    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:11.042059    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:11.042059    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:11.042059    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:11 GMT
	I0210 12:22:11.042059    5644 round_trippers.go:587]     Audit-Id: e192e15b-0849-427f-abd9-4a2c39c4cc42
	I0210 12:22:11.042059    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:11.042606    5644 type.go:168] "Response Body" body=<
		(hex dump omitted: byte-for-byte identical to the 12:22:03.541 response body above)
	 >
	I0210 12:22:11.536828    5644 type.go:168] "Request Body" body=""
	I0210 12:22:11.537436    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:11.537436    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:11.537531    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:11.537580    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:11.541245    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:22:11.541245    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:11.541245    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:11.541245    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:11.541245    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:11.541245    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:11.541245    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:11 GMT
	I0210 12:22:11.541245    5644 round_trippers.go:587]     Audit-Id: e54b2da1-4c31-4151-80be-1cf0b5d0a915
	I0210 12:22:11.541960    5644 type.go:168] "Response Body" body=<
		(hex dump omitted: byte-for-byte identical to the 12:22:03.541 response body above)
	 >
	I0210 12:22:12.037307    5644 type.go:168] "Request Body" body=""
	I0210 12:22:12.037307    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:12.037307    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:12.037307    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:12.037307    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:12.041927    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:22:12.041927    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:12.041927    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:12.041927    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:12.041927    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:12.042035    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:12 GMT
	I0210 12:22:12.042035    5644 round_trippers.go:587]     Audit-Id: 84e5b282-5da0-4f1b-a4f7-e6df27b7e40c
	I0210 12:22:12.042035    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:12.042387    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
	I0210 12:22:12.042600    5644 node_ready.go:53] node "multinode-032400" has status "Ready":"False"
	I0210 12:22:12.537691    5644 type.go:168] "Request Body" body=""
	I0210 12:22:12.537691    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:12.537691    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:12.537691    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:12.537691    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:12.541522    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:22:12.541522    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:12.541522    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:12.541522    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:12.541522    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:12.541522    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:12 GMT
	I0210 12:22:12.541522    5644 round_trippers.go:587]     Audit-Id: b857c99c-04e9-4801-bce6-27bfc535ac84
	I0210 12:22:12.541623    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:12.541970    5644 type.go:168] "Response Body" body=<
		(hex dump omitted: byte-for-byte identical to the 12:22:12.042 response body above)
	 >
	I0210 12:22:13.037028    5644 type.go:168] "Request Body" body=""
	I0210 12:22:13.037028    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:13.037028    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:13.037028    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:13.037028    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:13.040595    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:22:13.040595    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:13.040595    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:13.040595    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:13.040595    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:13.040595    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:13.040595    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:13 GMT
	I0210 12:22:13.040595    5644 round_trippers.go:587]     Audit-Id: d2adab42-9e63-4d5a-ae30-64f73b3d8ae9
	I0210 12:22:13.040815    5644 type.go:168] "Response Body" body=<
		(hex dump omitted: byte-for-byte identical to the 12:22:12.042 response body above)
	 >
	I0210 12:22:13.537795    5644 type.go:168] "Request Body" body=""
	I0210 12:22:13.537795    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:13.537795    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:13.537795    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:13.537795    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:13.542094    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:22:13.542185    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:13.542221    5644 round_trippers.go:587]     Audit-Id: c3b97da1-db73-4f77-bb86-4bdad48f1504
	I0210 12:22:13.542221    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:13.542246    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:13.542246    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:13.542268    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:13.542268    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:13 GMT
	I0210 12:22:13.542415    5644 type.go:168] "Response Body" body=<
		(hex dump omitted: byte-for-byte identical to the 12:22:12.042 response body above)
	 >
	I0210 12:22:14.037395    5644 type.go:168] "Request Body" body=""
	I0210 12:22:14.037395    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:14.037395    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:14.037395    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:14.037395    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:14.042533    5644 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 12:22:14.042533    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:14.042533    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:14.042533    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:14.042533    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:14.042533    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:14 GMT
	I0210 12:22:14.042643    5644 round_trippers.go:587]     Audit-Id: 6d6d4848-78ca-4808-a6e0-933668f77058
	I0210 12:22:14.042643    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:14.043257    5644 type.go:168] "Response Body" body=<
		(hex dump omitted: byte-for-byte identical to the 12:22:12.042 response body above)
	 >
	I0210 12:22:14.043413    5644 node_ready.go:53] node "multinode-032400" has status "Ready":"False"
	I0210 12:22:14.538012    5644 type.go:168] "Request Body" body=""
	I0210 12:22:14.538163    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:14.538163    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:14.538239    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:14.538239    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:14.542340    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:22:14.542340    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:14.542340    5644 round_trippers.go:587]     Audit-Id: 7c200f36-8738-4dd9-8c5d-25f5c1a95819
	I0210 12:22:14.542433    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:14.542433    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:14.542433    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:14.542433    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:14.542433    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:14 GMT
	I0210 12:22:14.542758    5644 type.go:168] "Response Body" body=<
		(hex dump omitted: byte-for-byte identical to the 12:22:12.042 response body above)
	 >
	I0210 12:22:15.038288    5644 type.go:168] "Request Body" body=""
	I0210 12:22:15.038438    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:15.038438    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:15.038438    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:15.038438    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:15.042063    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:22:15.042164    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:15.042164    5644 round_trippers.go:587]     Audit-Id: 2750ee7f-c29d-4daf-b6eb-31ed6d57d32b
	I0210 12:22:15.042164    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:15.042231    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:15.042231    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:15.042231    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:15.042231    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:15 GMT
	I0210 12:22:15.042505    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
	I0210 12:22:15.537427    5644 type.go:168] "Request Body" body=""
	I0210 12:22:15.537427    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:15.537427    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:15.537427    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:15.537427    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:15.541771    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:22:15.541771    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:15.541771    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:15.541771    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:15.541771    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:15.541771    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:15 GMT
	I0210 12:22:15.541771    5644 round_trippers.go:587]     Audit-Id: 92ca9d5c-3a1a-49c4-8fd7-a540874a2a53
	I0210 12:22:15.541771    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:15.542188    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
	I0210 12:22:16.037861    5644 type.go:168] "Request Body" body=""
	I0210 12:22:16.037861    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:16.037861    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:16.037861    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:16.037861    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:16.042323    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:22:16.042323    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:16.042323    5644 round_trippers.go:587]     Audit-Id: 818e3d1b-fbcd-4f8f-ba31-c78b37fa4bde
	I0210 12:22:16.042323    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:16.042323    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:16.042323    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:16.042323    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:16.042323    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:16 GMT
	I0210 12:22:16.042852    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
	I0210 12:22:16.537060    5644 type.go:168] "Request Body" body=""
	I0210 12:22:16.537060    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:16.537060    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:16.537060    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:16.537060    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:16.541283    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:22:16.541391    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:16.541391    5644 round_trippers.go:587]     Audit-Id: f8d5376f-326e-41af-addf-93a857cc2b02
	I0210 12:22:16.541391    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:16.541391    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:16.541391    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:16.541391    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:16.541391    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:16 GMT
	I0210 12:22:16.541711    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
	I0210 12:22:16.541880    5644 node_ready.go:53] node "multinode-032400" has status "Ready":"False"
	I0210 12:22:17.037349    5644 type.go:168] "Request Body" body=""
	I0210 12:22:17.037349    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:17.037349    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:17.037349    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:17.037349    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:17.041240    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:22:17.041240    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:17.041298    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:17.041298    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:17 GMT
	I0210 12:22:17.041298    5644 round_trippers.go:587]     Audit-Id: aacc6765-a652-46c9-b844-a154ec168641
	I0210 12:22:17.041298    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:17.041298    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:17.041298    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:17.041581    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
	I0210 12:22:17.537350    5644 type.go:168] "Request Body" body=""
	I0210 12:22:17.537350    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:17.537350    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:17.537350    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:17.537350    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:17.545459    5644 round_trippers.go:581] Response Status: 200 OK in 8 milliseconds
	I0210 12:22:17.545514    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:17.545646    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:17.545710    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:17.545710    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:17 GMT
	I0210 12:22:17.545710    5644 round_trippers.go:587]     Audit-Id: 8529d8ad-c44c-4dfc-ae39-926238206648
	I0210 12:22:17.545710    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:17.545710    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:17.545971    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
	I0210 12:22:18.037400    5644 type.go:168] "Request Body" body=""
	I0210 12:22:18.037400    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:18.037400    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:18.037400    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:18.037400    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:18.041455    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:22:18.041455    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:18.041528    5644 round_trippers.go:587]     Audit-Id: 964fa4d0-8da8-4139-8dec-f0e683b27fa6
	I0210 12:22:18.041528    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:18.041528    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:18.041528    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:18.041528    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:18.041528    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:18 GMT
	I0210 12:22:18.041931    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
	I0210 12:22:18.537832    5644 type.go:168] "Request Body" body=""
	I0210 12:22:18.537956    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:18.538024    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:18.538024    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:18.538024    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:18.541347    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:22:18.541821    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:18.541821    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:18.541821    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:18.541821    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:18.541881    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:18.541881    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:18 GMT
	I0210 12:22:18.541881    5644 round_trippers.go:587]     Audit-Id: c1db960f-b93c-4927-8c65-367d732effde
	I0210 12:22:18.542088    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
	I0210 12:22:18.542088    5644 node_ready.go:53] node "multinode-032400" has status "Ready":"False"
	I0210 12:22:19.037607    5644 type.go:168] "Request Body" body=""
	I0210 12:22:19.037607    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:19.037607    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:19.037607    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:19.037607    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:19.042265    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:22:19.042343    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:19.042343    5644 round_trippers.go:587]     Audit-Id: 22d1da5b-b131-486a-a44b-1503026eeeea
	I0210 12:22:19.042343    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:19.042343    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:19.042343    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:19.042343    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:19.042531    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:19 GMT
	I0210 12:22:19.043634    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
	I0210 12:22:19.536964    5644 type.go:168] "Request Body" body=""
	I0210 12:22:19.536964    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:19.536964    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:19.536964    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:19.536964    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:19.542503    5644 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 12:22:19.542503    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:19.542503    5644 round_trippers.go:587]     Audit-Id: 6d7fc6de-9144-4d89-9585-18698077d2be
	I0210 12:22:19.542503    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:19.542503    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:19.542503    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:19.542503    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:19.542503    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:19 GMT
	I0210 12:22:19.542503    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
	I0210 12:22:20.037821    5644 type.go:168] "Request Body" body=""
	I0210 12:22:20.037821    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:20.037821    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:20.037821    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:20.037821    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:20.042114    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:22:20.042114    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:20.042466    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:20 GMT
	I0210 12:22:20.042466    5644 round_trippers.go:587]     Audit-Id: 756ac361-dd50-436f-8ef8-da2f281dfaff
	I0210 12:22:20.042466    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:20.042466    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:20.042466    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:20.042466    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:20.043161    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
	I0210 12:22:20.537618    5644 type.go:168] "Request Body" body=""
	I0210 12:22:20.537618    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:20.537618    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:20.537618    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:20.537618    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:20.541721    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:22:20.541781    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:20.541781    5644 round_trippers.go:587]     Audit-Id: 90aa441a-726e-43c6-b49f-d4c2b93778b5
	I0210 12:22:20.541836    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:20.541836    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:20.541836    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:20.541836    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:20.541836    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:20 GMT
	I0210 12:22:20.542164    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
	I0210 12:22:20.542378    5644 node_ready.go:53] node "multinode-032400" has status "Ready":"False"
	I0210 12:22:21.037240    5644 type.go:168] "Request Body" body=""
	I0210 12:22:21.037240    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:21.037240    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:21.037240    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:21.037240    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:21.047865    5644 round_trippers.go:581] Response Status: 200 OK in 10 milliseconds
	I0210 12:22:21.047865    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:21.047970    5644 round_trippers.go:587]     Audit-Id: abb55c04-108e-4c4c-b34b-4d93f07def94
	I0210 12:22:21.047970    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:21.047970    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:21.047970    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:21.047970    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:21.047970    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:21 GMT
	I0210 12:22:21.048278    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
	I0210 12:22:21.537900    5644 type.go:168] "Request Body" body=""
	I0210 12:22:21.537900    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:21.537900    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:21.537900    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:21.538345    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:21.542100    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:22:21.542100    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:21.542100    5644 round_trippers.go:587]     Audit-Id: 86da1d43-daf3-4847-91ba-950425c84756
	I0210 12:22:21.542100    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:21.542100    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:21.542100    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:21.542100    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:21.542100    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:21 GMT
	I0210 12:22:21.542100    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
	I0210 12:22:22.037061    5644 type.go:168] "Request Body" body=""
	I0210 12:22:22.037061    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:22.037061    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:22.037061    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:22.037061    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:22.041707    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:22:22.041707    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:22.041707    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:22 GMT
	I0210 12:22:22.041707    5644 round_trippers.go:587]     Audit-Id: 360010ee-a35e-4818-8297-785b723e51ca
	I0210 12:22:22.041707    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:22.041813    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:22.041813    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:22.041813    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:22.042064    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
	I0210 12:22:22.537000    5644 type.go:168] "Request Body" body=""
	I0210 12:22:22.537662    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:22.537662    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:22.537662    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:22.537662    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:22.540879    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:22:22.540879    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:22.540879    5644 round_trippers.go:587]     Audit-Id: 8e412ace-a303-4296-a06f-36240bc53dfe
	I0210 12:22:22.541004    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:22.541004    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:22.541004    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:22.541004    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:22.541004    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:22 GMT
	I0210 12:22:22.541307    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
	I0210 12:22:23.037750    5644 type.go:168] "Request Body" body=""
	I0210 12:22:23.038305    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:23.038305    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:23.038305    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:23.038305    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:23.043899    5644 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 12:22:23.043899    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:23.043899    5644 round_trippers.go:587]     Audit-Id: e84ea73a-a74a-44a0-bd4e-1e8137d1e313
	I0210 12:22:23.043899    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:23.043899    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:23.043899    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:23.043899    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:23.043899    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:23 GMT
	I0210 12:22:23.044518    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
	I0210 12:22:23.044712    5644 node_ready.go:53] node "multinode-032400" has status "Ready":"False"
	I0210 12:22:23.537694    5644 type.go:168] "Request Body" body=""
	I0210 12:22:23.537983    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:23.537983    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:23.538038    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:23.538038    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:23.544944    5644 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0210 12:22:23.544944    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:23.544944    5644 round_trippers.go:587]     Audit-Id: 5cbdbc7d-9d94-47e0-a66f-819cb19047f2
	I0210 12:22:23.544944    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:23.544944    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:23.545486    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:23.545486    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:23.545486    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:23 GMT
	I0210 12:22:23.545679    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
	I0210 12:22:24.037541    5644 type.go:168] "Request Body" body=""
	I0210 12:22:24.037541    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:24.037541    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:24.037541    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:24.037541    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:24.042357    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:22:24.042357    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:24.042473    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:24.042473    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:24 GMT
	I0210 12:22:24.042473    5644 round_trippers.go:587]     Audit-Id: ba0a73dc-0f6f-4a23-8433-fc0f4ae3075d
	I0210 12:22:24.042473    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:24.042473    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:24.042473    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:24.042968    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
	I0210 12:22:24.537630    5644 type.go:168] "Request Body" body=""
	I0210 12:22:24.537662    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:24.537662    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:24.537662    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:24.537662    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:24.540787    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:22:24.540787    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:24.540787    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:24.540787    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:24 GMT
	I0210 12:22:24.540787    5644 round_trippers.go:587]     Audit-Id: 6b7e975c-d22c-40d3-b5ef-ff7b55e695ec
	I0210 12:22:24.540787    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:24.540787    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:24.540787    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:24.540787    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
	I0210 12:22:25.037769    5644 type.go:168] "Request Body" body=""
	I0210 12:22:25.037769    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:25.037769    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:25.038260    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:25.038260    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:25.041435    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:22:25.041516    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:25.041516    5644 round_trippers.go:587]     Audit-Id: b07d609b-b54c-4c08-9d53-63932d8aef92
	I0210 12:22:25.041516    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:25.041516    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:25.041516    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:25.041516    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:25.041636    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:25 GMT
	I0210 12:22:25.041927    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
	I0210 12:22:25.538035    5644 type.go:168] "Request Body" body=""
	I0210 12:22:25.538227    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:25.538317    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:25.538317    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:25.538317    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:25.545947    5644 round_trippers.go:581] Response Status: 200 OK in 7 milliseconds
	I0210 12:22:25.545947    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:25.545947    5644 round_trippers.go:587]     Audit-Id: 9eac72a2-04d8-48e2-b969-62d1dfd99cc2
	I0210 12:22:25.545947    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:25.545947    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:25.545947    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:25.545947    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:25.545947    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:25 GMT
	I0210 12:22:25.546517    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
	I0210 12:22:25.546517    5644 node_ready.go:53] node "multinode-032400" has status "Ready":"False"
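
The cycles above are minikube's node-readiness wait loop: roughly every 500 ms it issues a GET against /api/v1/nodes/multinode-032400 and logs a `node_ready.go:53` status line until the node's Ready condition stops reporting False. As a rough illustration only, the client-go sketch below reproduces that polling pattern; the 500 ms interval, 4-minute timeout, kubeconfig path, and helper names are assumptions for the sketch, not minikube's actual implementation.

```go
// Minimal sketch of a node-readiness wait loop in the style of the cycles
// logged above: poll the Node object on a fixed interval and stop once the
// Ready condition reports True. Interval, timeout, and kubeconfig path are
// illustrative assumptions, not values taken from minikube's source.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeIsReady reports whether the Node carries a Ready condition
// whose status is True.
func nodeIsReady(node *corev1.Node) bool {
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Build a client from the local kubeconfig (path is an assumption).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()

	// Poll every 500 ms, mirroring the ~half-second cadence of the GET
	// requests in the log, until the node is Ready or the timeout expires.
	err = wait.PollUntilContextCancel(ctx, 500*time.Millisecond, true,
		func(ctx context.Context) (bool, error) {
			node, err := clientset.CoreV1().Nodes().Get(ctx, "multinode-032400", metav1.GetOptions{})
			if err != nil {
				return false, nil // treat errors as transient: keep polling
			}
			ready := nodeIsReady(node)
			fmt.Printf("node %q Ready=%v\n", node.Name, ready)
			return ready, nil
		})
	if err != nil {
		fmt.Println("node never became Ready:", err)
	}
}
```

Each `node_ready.go:53] node "multinode-032400" has status "Ready":"False"` line in the log corresponds to one false iteration of such a condition function; the loop only exits once the apiserver returns a Node whose Ready condition is True.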
	I0210 12:22:26.037380    5644 type.go:168] "Request Body" body=""
	I0210 12:22:26.037380    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:26.037380    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:26.037380    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:26.037380    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:26.041815    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:22:26.041815    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:26.041815    5644 round_trippers.go:587]     Audit-Id: 75af28c9-307b-4bbd-bd14-a495eb34b1c8
	I0210 12:22:26.041815    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:26.041815    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:26.041815    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:26.041815    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:26.041815    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:26 GMT
	I0210 12:22:26.041815    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
	I0210 12:22:26.537304    5644 type.go:168] "Request Body" body=""
	I0210 12:22:26.537304    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:26.537304    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:26.537304    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:26.537304    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:26.542133    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:22:26.542133    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:26.542244    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:26.542244    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:26 GMT
	I0210 12:22:26.542244    5644 round_trippers.go:587]     Audit-Id: 3cad3847-4738-44d0-ba54-390dcbf6b9f4
	I0210 12:22:26.542244    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:26.542244    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:26.542244    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:26.542591    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
	I0210 12:22:27.037083    5644 type.go:168] "Request Body" body=""
	I0210 12:22:27.037083    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:27.037083    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:27.037083    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:27.037083    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:27.040620    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:22:27.040733    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:27.040733    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:27.040733    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:27 GMT
	I0210 12:22:27.040733    5644 round_trippers.go:587]     Audit-Id: dcd8b849-3281-448a-a1ec-28b19418355e
	I0210 12:22:27.040733    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:27.040733    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:27.040815    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:27.041127    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
	I0210 12:22:27.537802    5644 type.go:168] "Request Body" body=""
	I0210 12:22:27.537802    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:27.537802    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:27.537802    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:27.537802    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:27.542165    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:22:27.542238    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:27.542238    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:27.542238    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:27 GMT
	I0210 12:22:27.542302    5644 round_trippers.go:587]     Audit-Id: 58239b8e-8664-4779-9935-d3a6afa98b92
	I0210 12:22:27.542302    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:27.542302    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:27.542302    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:27.542603    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
	I0210 12:22:28.037187    5644 type.go:168] "Request Body" body=""
	I0210 12:22:28.037187    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:28.037187    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:28.037187    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:28.037187    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:28.040452    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:22:28.040452    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:28.040452    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:28 GMT
	I0210 12:22:28.040452    5644 round_trippers.go:587]     Audit-Id: c1db4641-6a62-4ac0-9357-d739c314e423
	I0210 12:22:28.040452    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:28.040452    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:28.040452    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:28.040452    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:28.041572    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
	I0210 12:22:28.041750    5644 node_ready.go:53] node "multinode-032400" has status "Ready":"False"
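	(Each request in this loop advertises Accept: application/vnd.kubernetes.protobuf,application/json and the server answers with Content-Type: application/vnd.kubernetes.protobuf, which is why the dumped bodies begin with the "k8s\0" magic bytes, 6b 38 73 00, of the Kubernetes protobuf envelope rather than JSON. In client-go that negotiation is configured on the rest.Config; a sketch, with the kubeconfig path as a placeholder:)

	package main

	import (
		"fmt"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load a kubeconfig (placeholder path; minikube writes its own).
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		// Prefer protobuf, fall back to JSON -- this yields exactly the
		// Accept header recorded in the log above.
		cfg.AcceptContentTypes = "application/vnd.kubernetes.protobuf,application/json"
		cfg.ContentType = "application/vnd.kubernetes.protobuf"

		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		fmt.Printf("client ready: %T\n", cs)
	}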
	I0210 12:22:28.537828    5644 type.go:168] "Request Body" body=""
	I0210 12:22:28.537828    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:28.537828    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:28.537828    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:28.537828    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:28.542245    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:22:28.542245    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:28.542313    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:28.542328    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:28.542328    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:28 GMT
	I0210 12:22:28.542328    5644 round_trippers.go:587]     Audit-Id: 0b135244-23b0-4b9d-91b0-a745a9a40f1a
	I0210 12:22:28.542328    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:28.542328    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:28.542660    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
	I0210 12:22:29.037264    5644 type.go:168] "Request Body" body=""
	I0210 12:22:29.037264    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:29.037264    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:29.037264    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:29.037264    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:29.041314    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:22:29.041391    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:29.041391    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:29 GMT
	I0210 12:22:29.041391    5644 round_trippers.go:587]     Audit-Id: 63985d8b-91b2-436d-ab86-733129b37320
	I0210 12:22:29.041391    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:29.041391    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:29.041391    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:29.041391    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:29.041567    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
	I0210 12:22:29.537573    5644 type.go:168] "Request Body" body=""
	I0210 12:22:29.537573    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:29.537573    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:29.537573    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:29.537573    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:29.542181    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:22:29.542255    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:29.542255    5644 round_trippers.go:587]     Audit-Id: d549fcfe-aaea-46bb-8d97-fda04801b3ee
	I0210 12:22:29.542255    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:29.542255    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:29.542255    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:29.542255    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:29.542322    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:29 GMT
	I0210 12:22:29.542549    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
	I0210 12:22:30.038037    5644 type.go:168] "Request Body" body=""
	I0210 12:22:30.038037    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:30.038037    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:30.038037    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:30.038037    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:30.042566    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:22:30.042566    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:30.042566    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:30.042566    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:30.042679    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:30 GMT
	I0210 12:22:30.042679    5644 round_trippers.go:587]     Audit-Id: df4403b3-bdc9-4a4a-938d-11696385d0ee
	I0210 12:22:30.042679    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:30.042679    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:30.042757    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
	I0210 12:22:30.042757    5644 node_ready.go:53] node "multinode-032400" has status "Ready":"False"
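	(The request/response traces themselves come from client-go's round_trippers.go debug logging, which is gated on klog verbosity -- method/URL and status timing appear around -v=6, headers around -v=7 and above, and bodies only at the highest levels, hence the truncated dumps. A sketch of enabling that tracing in a Go program; the verbosity value is an assumption chosen for illustration:)

	package main

	import (
		"flag"

		"k8s.io/klog/v2"
	)

	func main() {
		// Register klog's flags and raise verbosity so client-go's
		// round_trippers.go emits the URL/header/status traces seen above.
		klog.InitFlags(nil)
		_ = flag.Set("v", "8")
		flag.Parse()

		klog.V(6).Info("round-trip tracing enabled")
	}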
	I0210 12:22:30.537957    5644 type.go:168] "Request Body" body=""
	I0210 12:22:30.537957    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:30.537957    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:30.537957    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:30.537957    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:30.542931    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:22:30.542931    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:30.542931    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:30.542931    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:30.542931    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:30 GMT
	I0210 12:22:30.542931    5644 round_trippers.go:587]     Audit-Id: 4d896a63-5499-4e08-abb7-abbc3acd6d3a
	I0210 12:22:30.542931    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:30.542931    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:30.543473    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
	I0210 12:22:31.037644    5644 type.go:168] "Request Body" body=""
	I0210 12:22:31.037644    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:31.037644    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:31.037644    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:31.037644    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:31.042318    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:22:31.042398    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:31.042398    5644 round_trippers.go:587]     Audit-Id: 1ea817b9-8a94-4955-96d0-1a40f8cf3613
	I0210 12:22:31.042398    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:31.042398    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:31.042479    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:31.042479    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:31.042479    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:31 GMT
	I0210 12:22:31.042639    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
	I0210 12:22:31.537798    5644 type.go:168] "Request Body" body=""
	I0210 12:22:31.537798    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:31.537798    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:31.537798    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:31.537798    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:31.541166    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:22:31.542091    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:31.542091    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:31.542091    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:31 GMT
	I0210 12:22:31.542091    5644 round_trippers.go:587]     Audit-Id: 185afa78-aaeb-4f78-8bba-c5388c0e3d2d
	I0210 12:22:31.542091    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:31.542091    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:31.542091    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:31.543006    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
	I0210 12:22:32.038140    5644 type.go:168] "Request Body" body=""
	I0210 12:22:32.038245    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:32.038353    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:32.038353    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:32.038353    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:32.041808    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:22:32.041808    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:32.042808    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:32.042808    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:32 GMT
	I0210 12:22:32.042854    5644 round_trippers.go:587]     Audit-Id: 30de632f-ee6c-4b0a-adb9-e93b632052bd
	I0210 12:22:32.042854    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:32.042901    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:32.042901    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:32.043464    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
	I0210 12:22:32.043680    5644 node_ready.go:53] node "multinode-032400" has status "Ready":"False"
	I0210 12:22:32.537667    5644 type.go:168] "Request Body" body=""
	I0210 12:22:32.537774    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:32.537774    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:32.537774    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:32.537774    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:32.543980    5644 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0210 12:22:32.543980    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:32.543980    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:32.543980    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:32.543980    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:32 GMT
	I0210 12:22:32.543980    5644 round_trippers.go:587]     Audit-Id: ef39631a-b175-4a86-869e-898b7d179789
	I0210 12:22:32.543980    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:32.543980    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:32.544692    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
	I0210 12:22:33.037489    5644 type.go:168] "Request Body" body=""
	I0210 12:22:33.037489    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:33.037489    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:33.037489    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:33.037489    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:33.045317    5644 round_trippers.go:581] Response Status: 200 OK in 7 milliseconds
	I0210 12:22:33.045866    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:33.045866    5644 round_trippers.go:587]     Audit-Id: 6d4fb724-c763-48a0-a607-d4de82ea5b42
	I0210 12:22:33.045866    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:33.045866    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:33.045866    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:33.045866    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:33.045866    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:33 GMT
	I0210 12:22:33.046284    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
	I0210 12:22:33.537949    5644 type.go:168] "Request Body" body=""
	I0210 12:22:33.538184    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:33.538276    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:33.538276    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:33.538276    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:33.541564    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:22:33.542389    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:33.542389    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:33.542389    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:33.542389    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:33.542389    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:33 GMT
	I0210 12:22:33.542389    5644 round_trippers.go:587]     Audit-Id: b259efe3-76c6-422a-8075-cfb674e6bb96
	I0210 12:22:33.542389    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:33.542624    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
	I0210 12:22:34.037548    5644 type.go:168] "Request Body" body=""
	I0210 12:22:34.037966    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:34.037966    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:34.038055    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:34.038156    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:34.042708    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:22:34.042708    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:34.042708    5644 round_trippers.go:587]     Audit-Id: 1b200805-74ed-4ac0-a833-d0608c52db40
	I0210 12:22:34.042708    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:34.042708    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:34.042708    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:34.042826    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:34.042826    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:34 GMT
	I0210 12:22:34.043047    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
	I0210 12:22:34.537536    5644 type.go:168] "Request Body" body=""
	I0210 12:22:34.537536    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:34.537536    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:34.537536    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:34.537536    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:34.541398    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:22:34.541607    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:34.541607    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:34.541607    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:34.541607    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:34 GMT
	I0210 12:22:34.541607    5644 round_trippers.go:587]     Audit-Id: a98ad919-61fc-4001-b125-da23b15c7c46
	I0210 12:22:34.541607    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:34.541607    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:34.541965    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
	I0210 12:22:34.542145    5644 node_ready.go:53] node "multinode-032400" has status "Ready":"False"
	I0210 12:22:35.038020    5644 type.go:168] "Request Body" body=""
	I0210 12:22:35.038020    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:35.038020    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:35.038020    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:35.038020    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:35.042248    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:22:35.042641    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:35.042641    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:35.042641    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:35.042641    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:35.042641    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:35.042641    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:35 GMT
	I0210 12:22:35.042641    5644 round_trippers.go:587]     Audit-Id: 7e5b8f17-86f0-49d7-a8e8-72a65436117a
	I0210 12:22:35.042961    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
	I0210 12:22:35.537446    5644 type.go:168] "Request Body" body=""
	I0210 12:22:35.538027    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:35.538098    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:35.538098    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:35.538156    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:35.541834    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:22:35.541940    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:35.541940    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:35 GMT
	I0210 12:22:35.541940    5644 round_trippers.go:587]     Audit-Id: b348d649-13f8-4b62-add7-942da6439f22
	I0210 12:22:35.541940    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:35.541940    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:35.542006    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:35.542006    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:35.542235    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
	I0210 12:22:36.037730    5644 type.go:168] "Request Body" body=""
	I0210 12:22:36.039533    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:36.039533    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:36.039533    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:36.039533    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:36.043130    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:22:36.043727    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:36.043727    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:36 GMT
	I0210 12:22:36.043727    5644 round_trippers.go:587]     Audit-Id: 2f94fc9e-81e1-413f-920d-e5f53402577d
	I0210 12:22:36.043727    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:36.043727    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:36.043727    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:36.043804    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:36.044095    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
	I0210 12:22:36.538286    5644 type.go:168] "Request Body" body=""
	I0210 12:22:36.538414    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:36.538414    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:36.538476    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:36.538476    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:36.542513    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:22:36.542513    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:36.542513    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:36.542513    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:36.542513    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:36.542513    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:36.542513    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:36 GMT
	I0210 12:22:36.542513    5644 round_trippers.go:587]     Audit-Id: 00b408ee-10d7-4a08-9728-c430f3099082
	I0210 12:22:36.543547    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
	I0210 12:22:36.543748    5644 node_ready.go:53] node "multinode-032400" has status "Ready":"False"
	I0210 12:22:37.038150    5644 type.go:168] "Request Body" body=""
	I0210 12:22:37.038264    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:37.038356    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:37.038356    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:37.038394    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:37.042142    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:22:37.042142    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:37.042142    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:37.042142    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:37.042142    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:37.042142    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:37 GMT
	I0210 12:22:37.042142    5644 round_trippers.go:587]     Audit-Id: 324c1936-6098-4f03-9848-abaa30412438
	I0210 12:22:37.042233    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:37.042501    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
	I0210 12:22:37.537176    5644 type.go:168] "Request Body" body=""
	I0210 12:22:37.537176    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:37.537176    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:37.537176    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:37.537176    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:37.541297    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:22:37.541370    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:37.541370    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:37.541370    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:37 GMT
	I0210 12:22:37.541370    5644 round_trippers.go:587]     Audit-Id: a07c47e4-dd19-468a-8e26-731a12389cad
	I0210 12:22:37.541370    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:37.541370    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:37.541447    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:37.541824    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
	I0210 12:22:38.037869    5644 type.go:168] "Request Body" body=""
	I0210 12:22:38.038024    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:38.038024    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:38.038024    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:38.038085    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:38.042275    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:22:38.042275    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:38.042275    5644 round_trippers.go:587]     Audit-Id: 08637112-ed03-49a0-97e3-9d82b06e1933
	I0210 12:22:38.042275    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:38.042275    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:38.042275    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:38.042275    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:38.042275    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:38 GMT
	I0210 12:22:38.042275    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
	I0210 12:22:38.538293    5644 type.go:168] "Request Body" body=""
	I0210 12:22:38.538493    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:38.538493    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:38.538493    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:38.538493    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:38.544826    5644 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0210 12:22:38.544826    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:38.544826    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:38.544826    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:38 GMT
	I0210 12:22:38.544826    5644 round_trippers.go:587]     Audit-Id: c76b9185-9297-4dbc-94fe-165ee98ed0ce
	I0210 12:22:38.544826    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:38.544826    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:38.544826    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:38.545791    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
	I0210 12:22:38.545791    5644 node_ready.go:53] node "multinode-032400" has status "Ready":"False"
	I0210 12:22:39.038315    5644 type.go:168] "Request Body" body=""
	I0210 12:22:39.038408    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:39.038408    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:39.038408    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:39.038408    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:39.041816    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:22:39.042485    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:39.042485    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:39.042485    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:39.042485    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:39.042485    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:39 GMT
	I0210 12:22:39.042485    5644 round_trippers.go:587]     Audit-Id: f8e4d214-bf79-4dff-8b3a-d5b03952e390
	I0210 12:22:39.042485    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:39.042823    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
	I0210 12:22:39.537230    5644 type.go:168] "Request Body" body=""
	I0210 12:22:39.537230    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:39.537230    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:39.537230    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:39.537230    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:39.541650    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:22:39.541745    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:39.541745    5644 round_trippers.go:587]     Audit-Id: 8df6bf4f-e049-4efb-b620-5818af6075ab
	I0210 12:22:39.541745    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:39.541745    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:39.541745    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:39.541809    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:39.541809    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:39 GMT
	I0210 12:22:39.541809    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
	I0210 12:22:40.038037    5644 type.go:168] "Request Body" body=""
	I0210 12:22:40.038037    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:40.038037    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:40.038037    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:40.038037    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:40.042458    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:22:40.042458    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:40.042458    5644 round_trippers.go:587]     Audit-Id: e5b2aa00-1559-4026-bdc6-4d8f4793a8de
	I0210 12:22:40.042458    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:40.042539    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:40.042539    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:40.042539    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:40.042539    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:40 GMT
	I0210 12:22:40.042762    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
	I0210 12:22:40.537553    5644 type.go:168] "Request Body" body=""
	I0210 12:22:40.537553    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:40.537553    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:40.537553    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:40.537553    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:40.541762    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:22:40.541762    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:40.541762    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:40 GMT
	I0210 12:22:40.541762    5644 round_trippers.go:587]     Audit-Id: 9256173d-8092-49f4-8be6-389fce44fb1f
	I0210 12:22:40.541762    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:40.541762    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:40.541762    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:40.541762    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:40.542152    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
	I0210 12:22:41.038122    5644 type.go:168] "Request Body" body=""
	I0210 12:22:41.038122    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:41.038122    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:41.038122    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:41.038122    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:41.045361    5644 round_trippers.go:581] Response Status: 200 OK in 7 milliseconds
	I0210 12:22:41.045361    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:41.045361    5644 round_trippers.go:587]     Audit-Id: 50b6f874-b318-4012-86ba-1fa578d6b6c2
	I0210 12:22:41.045361    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:41.045361    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:41.045361    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:41.045361    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:41.045361    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:41 GMT
	I0210 12:22:41.045909    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
	I0210 12:22:41.046106    5644 node_ready.go:53] node "multinode-032400" has status "Ready":"False"
	I0210 12:22:41.537546    5644 type.go:168] "Request Body" body=""
	I0210 12:22:41.538013    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:41.538090    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:41.538090    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:41.538090    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:41.543564    5644 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 12:22:41.543564    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:41.543564    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:41 GMT
	I0210 12:22:41.543564    5644 round_trippers.go:587]     Audit-Id: 74f8823b-1aed-42e8-aa17-8052e866a7e0
	I0210 12:22:41.543564    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:41.543564    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:41.543564    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:41.543564    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:41.544232    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
	I0210 12:22:42.038352    5644 type.go:168] "Request Body" body=""
	I0210 12:22:42.038464    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:42.038464    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:42.038464    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:42.038464    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:42.041806    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:22:42.042623    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:42.042623    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:42.042623    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:42.042623    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:42.042623    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:42.042623    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:42 GMT
	I0210 12:22:42.042623    5644 round_trippers.go:587]     Audit-Id: ae0c1ec9-1df4-4a4b-84fc-c2823acc3696
	I0210 12:22:42.043018    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
	I0210 12:22:42.537193    5644 type.go:168] "Request Body" body=""
	I0210 12:22:42.537933    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:42.537971    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:42.538006    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:42.538024    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:42.541318    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:22:42.541318    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:42.541318    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:42.541318    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:42.541715    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:42.541715    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:42.541715    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:42 GMT
	I0210 12:22:42.541715    5644 round_trippers.go:587]     Audit-Id: 08f4b765-fbfd-4d80-9d64-baad6a133a42
	I0210 12:22:42.542158    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
	I0210 12:22:43.037603    5644 type.go:168] "Request Body" body=""
	I0210 12:22:43.037603    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:43.037603    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:43.037603    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:43.037603    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:43.040513    5644 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0210 12:22:43.040513    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:43.040513    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:43.040513    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:43.040513    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:43.040513    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:43 GMT
	I0210 12:22:43.040513    5644 round_trippers.go:587]     Audit-Id: 736c74f5-eed9-45f6-80d5-67e0b90e682b
	I0210 12:22:43.040513    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:43.041630    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
	I0210 12:22:43.537209    5644 type.go:168] "Request Body" body=""
	I0210 12:22:43.537209    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:43.537209    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:43.537209    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:43.537209    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:43.541820    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:22:43.541820    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:43.541930    5644 round_trippers.go:587]     Audit-Id: c80acae4-89e0-41fa-b8c8-fc585ea7400c
	I0210 12:22:43.541930    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:43.541930    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:43.541930    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:43.541930    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:43.541930    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:43 GMT
	I0210 12:22:43.542327    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
	I0210 12:22:43.542666    5644 node_ready.go:53] node "multinode-032400" has status "Ready":"False"
	I0210 12:22:44.037731    5644 type.go:168] "Request Body" body=""
	I0210 12:22:44.037918    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:44.037918    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:44.037918    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:44.037918    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:44.041613    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:22:44.041613    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:44.041705    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:44 GMT
	I0210 12:22:44.041705    5644 round_trippers.go:587]     Audit-Id: f8d589ff-a4ab-4dec-bebe-017fed68bed8
	I0210 12:22:44.041705    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:44.041705    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:44.041705    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:44.041705    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:44.042101    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
	I0210 12:22:44.538421    5644 type.go:168] "Request Body" body=""
	I0210 12:22:44.538599    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:44.538599    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:44.538599    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:44.538599    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:44.546276    5644 round_trippers.go:581] Response Status: 200 OK in 7 milliseconds
	I0210 12:22:44.546276    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:44.546276    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:44.546276    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:44.546360    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:44.546360    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:44.546360    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:44 GMT
	I0210 12:22:44.546360    5644 round_trippers.go:587]     Audit-Id: 0a84a50a-bfb8-456e-9a42-5565194a13d2
	I0210 12:22:44.546735    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
	I0210 12:22:45.037979    5644 type.go:168] "Request Body" body=""
	I0210 12:22:45.037979    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:45.037979    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:45.037979    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:45.037979    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:45.042441    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:22:45.042441    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:45.042441    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:45.042441    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:45.042523    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:45.042523    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:45.042523    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:45 GMT
	I0210 12:22:45.042523    5644 round_trippers.go:587]     Audit-Id: 2876f365-fe91-4c06-a097-143595036448
	I0210 12:22:45.042797    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
	I0210 12:22:45.537552    5644 type.go:168] "Request Body" body=""
	I0210 12:22:45.537724    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:45.537724    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:45.537724    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:45.537724    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:45.544834    5644 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0210 12:22:45.544900    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:45.544900    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:45 GMT
	I0210 12:22:45.544900    5644 round_trippers.go:587]     Audit-Id: de15bf11-9f33-4b61-8f8e-dae7f65222e1
	I0210 12:22:45.544936    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:45.544936    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:45.544936    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:45.544936    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:45.544936    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
	I0210 12:22:45.544936    5644 node_ready.go:53] node "multinode-032400" has status "Ready":"False"
	I0210 12:22:46.038592    5644 type.go:168] "Request Body" body=""
	I0210 12:22:46.038686    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:46.038686    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:46.038686    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:46.038758    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:46.045833    5644 round_trippers.go:581] Response Status: 200 OK in 7 milliseconds
	I0210 12:22:46.045891    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:46.045891    5644 round_trippers.go:587]     Audit-Id: 22b83ad1-4655-4357-bfc0-67723344666e
	I0210 12:22:46.045955    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:46.045955    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:46.045955    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:46.045955    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:46.045955    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:46 GMT
	I0210 12:22:46.046075    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
	I0210 12:22:46.538346    5644 type.go:168] "Request Body" body=""
	I0210 12:22:46.538346    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:46.538346    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:46.538346    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:46.538346    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:46.542590    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:22:46.542590    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:46.542655    5644 round_trippers.go:587]     Audit-Id: 71487f7e-6aee-4ea5-bbfa-03582cfdc264
	I0210 12:22:46.542655    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:46.542655    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:46.542655    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:46.542655    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:46.542655    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:46 GMT
	I0210 12:22:46.543316    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
	I0210 12:22:47.037290    5644 type.go:168] "Request Body" body=""
	I0210 12:22:47.037290    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:47.037290    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:47.037290    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:47.037290    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:47.042399    5644 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 12:22:47.042510    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:47.042510    5644 round_trippers.go:587]     Audit-Id: 34469633-9573-4d58-b5cf-f33bc9097855
	I0210 12:22:47.042510    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:47.042510    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:47.042510    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:47.042510    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:47.042510    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:47 GMT
	I0210 12:22:47.043245    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
	I0210 12:22:47.538311    5644 type.go:168] "Request Body" body=""
	I0210 12:22:47.538311    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:47.538311    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:47.538311    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:47.538311    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:47.544441    5644 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0210 12:22:47.544441    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:47.544441    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:47.544441    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:47 GMT
	I0210 12:22:47.544441    5644 round_trippers.go:587]     Audit-Id: bf7cb7ee-db27-455f-a890-2a4269325633
	I0210 12:22:47.544441    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:47.544441    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:47.544441    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:47.545393    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
	I0210 12:22:47.545393    5644 node_ready.go:53] node "multinode-032400" has status "Ready":"False"
	I0210 12:22:48.038059    5644 type.go:168] "Request Body" body=""
	I0210 12:22:48.038059    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:48.038059    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:48.038059    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:48.038059    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:48.042401    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:22:48.042533    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:48.042533    5644 round_trippers.go:587]     Audit-Id: 3052dc70-0769-40dd-92e4-cf3b0730091c
	I0210 12:22:48.042533    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:48.042533    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:48.042533    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:48.042533    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:48.042533    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:48 GMT
	I0210 12:22:48.042888    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
	I0210 12:22:48.538057    5644 type.go:168] "Request Body" body=""
	I0210 12:22:48.538057    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:48.538057    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:48.538057    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:48.538057    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:48.541126    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:22:48.541126    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:48.542138    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:48 GMT
	I0210 12:22:48.542138    5644 round_trippers.go:587]     Audit-Id: baab82d5-1f0c-47eb-aa1e-60ff3f340388
	I0210 12:22:48.542138    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:48.542138    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:48.542138    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:48.542190    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:48.542527    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d3 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 38  38 36 38 00 42 08 08 86  |1b262.18868.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23539 chars]
	 >
	I0210 12:22:49.037731    5644 type.go:168] "Request Body" body=""
	I0210 12:22:49.037731    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:49.037731    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:49.037731    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:49.037731    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:49.041772    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:22:49.041772    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:49.041772    5644 round_trippers.go:587]     Audit-Id: e583d5ed-8eac-4e4b-8597-f08ef5bea8fb
	I0210 12:22:49.041772    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:49.041772    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:49.041772    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:49.041772    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:49.041772    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:49 GMT
	I0210 12:22:49.041772    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 99 25 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..%.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 39  33 31 38 00 42 08 08 86  |1b262.19318.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22597 chars]
	 >
	I0210 12:22:49.041772    5644 node_ready.go:49] node "multinode-032400" has status "Ready":"True"
	I0210 12:22:49.041772    5644 node_ready.go:38] duration metric: took 45.5046482s for node "multinode-032400" to be "Ready" ...
	I0210 12:22:49.041772    5644 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
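	The 45.5s node wait above, and the 6m0s pod wait that follows, both use the pattern visible throughout this log: GET the object roughly every 500ms and test its Ready condition. A minimal polling sketch of that loop, assuming client-go, a placeholder kubeconfig path, and the node name from the log; minikube's actual node_ready.go/pod_ready.go helpers differ in detail:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// nodeIsReady reports whether the node's Ready condition is True,
	// the condition the node_ready.go:49 line above keys off.
	func nodeIsReady(n *corev1.Node) bool {
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Placeholder path; the test run takes KUBECONFIG from its environment.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()
		for {
			n, err := client.CoreV1().Nodes().Get(ctx, "multinode-032400", metav1.GetOptions{})
			if err == nil && nodeIsReady(n) {
				fmt.Println("node is Ready")
				return
			}
			select {
			case <-ctx.Done():
				panic("timed out waiting for node to become Ready")
			case <-time.After(500 * time.Millisecond): // matches the ~500ms cadence in the log
			}
		}
	}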
	I0210 12:22:49.041772    5644 type.go:204] "Request Body" body=""
	I0210 12:22:49.041772    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods
	I0210 12:22:49.041772    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:49.041772    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:49.041772    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:49.045746    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:22:49.045746    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:49.045746    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:49.045746    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:49.045746    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:49 GMT
	I0210 12:22:49.045746    5644 round_trippers.go:587]     Audit-Id: 51a428a7-6d80-4d58-a291-a6bb3efdf1bd
	I0210 12:22:49.045746    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:49.045746    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:49.047716    5644 type.go:204] "Response Body" body=<
		00000000  6b 38 73 00 0a 0d 0a 02  76 31 12 07 50 6f 64 4c  |k8s.....v1..PodL|
		00000010  69 73 74 12 9d ea 03 0a  0a 0a 00 12 04 31 39 33  |ist..........193|
		00000020  31 1a 00 12 84 29 0a 99  19 0a 18 63 6f 72 65 64  |1....).....cored|
		00000030  6e 73 2d 36 36 38 64 36  62 66 39 62 63 2d 77 38  |ns-668d6bf9bc-w8|
		00000040  72 72 39 12 13 63 6f 72  65 64 6e 73 2d 36 36 38  |rr9..coredns-668|
		00000050  64 36 62 66 39 62 63 2d  1a 0b 6b 75 62 65 2d 73  |d6bf9bc-..kube-s|
		00000060  79 73 74 65 6d 22 00 2a  24 65 34 35 61 33 37 62  |ystem".*$e45a37b|
		00000070  66 2d 65 37 64 61 2d 34  31 32 39 2d 62 62 37 65  |f-e7da-4129-bb7e|
		00000080  2d 38 62 65 37 64 62 65  39 33 65 30 39 32 04 31  |-8be7dbe93e092.1|
		00000090  38 32 30 38 00 42 08 08  92 d4 a7 bd 06 10 00 5a  |8208.B.........Z|
		000000a0  13 0a 07 6b 38 73 2d 61  70 70 12 08 6b 75 62 65  |...k8s-app..kube|
		000000b0  2d 64 6e 73 5a 1f 0a 11  70 6f 64 2d 74 65 6d 70  |-dnsZ...pod-temp|
		000000c0  6c 61 74 65 2d 68 61 73  68 12 0a 36 36 38 64 36  |late-hash..668d [truncated 308964 chars]
	 >
	I0210 12:22:49.047716    5644 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-w8rr9" in "kube-system" namespace to be "Ready" ...
	I0210 12:22:49.048716    5644 type.go:168] "Request Body" body=""
	I0210 12:22:49.048716    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-w8rr9
	I0210 12:22:49.048716    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:49.048716    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:49.048716    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:49.051878    5644 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0210 12:22:49.051878    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:49.051953    5644 round_trippers.go:587]     Audit-Id: f7c36739-a3f4-427e-95d0-313e3de1124f
	I0210 12:22:49.051953    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:49.051953    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:49.051953    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:49.051953    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:49.051953    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:49 GMT
	I0210 12:22:49.051953    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  84 29 0a 99 19 0a 18 63  6f 72 65 64 6e 73 2d 36  |.).....coredns-6|
		00000020  36 38 64 36 62 66 39 62  63 2d 77 38 72 72 39 12  |68d6bf9bc-w8rr9.|
		00000030  13 63 6f 72 65 64 6e 73  2d 36 36 38 64 36 62 66  |.coredns-668d6bf|
		00000040  39 62 63 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |9bc-..kube-syste|
		00000050  6d 22 00 2a 24 65 34 35  61 33 37 62 66 2d 65 37  |m".*$e45a37bf-e7|
		00000060  64 61 2d 34 31 32 39 2d  62 62 37 65 2d 38 62 65  |da-4129-bb7e-8be|
		00000070  37 64 62 65 39 33 65 30  39 32 04 31 38 32 30 38  |7dbe93e092.18208|
		00000080  00 42 08 08 92 d4 a7 bd  06 10 00 5a 13 0a 07 6b  |.B.........Z...k|
		00000090  38 73 2d 61 70 70 12 08  6b 75 62 65 2d 64 6e 73  |8s-app..kube-dns|
		000000a0  5a 1f 0a 11 70 6f 64 2d  74 65 6d 70 6c 61 74 65  |Z...pod-template|
		000000b0  2d 68 61 73 68 12 0a 36  36 38 64 36 62 66 39 62  |-hash..668d6bf9b|
		000000c0  63 6a 53 0a 0a 52 65 70  6c 69 63 61 53 65 74 1a  |cjS..ReplicaSet [truncated 25040 chars]
	 >
	I0210 12:22:49.052662    5644 type.go:168] "Request Body" body=""
	I0210 12:22:49.052882    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:49.052882    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:49.052882    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:49.052882    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:49.055662    5644 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0210 12:22:49.055662    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:49.055662    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:49.055662    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:49 GMT
	I0210 12:22:49.055662    5644 round_trippers.go:587]     Audit-Id: 0eaab311-c4e2-4518-ad27-0efa5f1e56e2
	I0210 12:22:49.055662    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:49.055662    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:49.055662    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:49.055662    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 99 25 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..%.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 39  33 31 38 00 42 08 08 86  |1b262.19318.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22597 chars]
	 >
	I0210 12:22:49.547818    5644 type.go:168] "Request Body" body=""
	I0210 12:22:49.547818    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-w8rr9
	I0210 12:22:49.547818    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:49.547818    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:49.547818    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:49.551088    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:22:49.551949    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:49.551949    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:49.551949    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:49.551949    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:49.551949    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:49 GMT
	I0210 12:22:49.551949    5644 round_trippers.go:587]     Audit-Id: cd180aa0-48b2-4d74-9fdc-ac8e27a3130b
	I0210 12:22:49.551949    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:49.552441    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  84 29 0a 99 19 0a 18 63  6f 72 65 64 6e 73 2d 36  |.).....coredns-6|
		00000020  36 38 64 36 62 66 39 62  63 2d 77 38 72 72 39 12  |68d6bf9bc-w8rr9.|
		00000030  13 63 6f 72 65 64 6e 73  2d 36 36 38 64 36 62 66  |.coredns-668d6bf|
		00000040  39 62 63 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |9bc-..kube-syste|
		00000050  6d 22 00 2a 24 65 34 35  61 33 37 62 66 2d 65 37  |m".*$e45a37bf-e7|
		00000060  64 61 2d 34 31 32 39 2d  62 62 37 65 2d 38 62 65  |da-4129-bb7e-8be|
		00000070  37 64 62 65 39 33 65 30  39 32 04 31 38 32 30 38  |7dbe93e092.18208|
		00000080  00 42 08 08 92 d4 a7 bd  06 10 00 5a 13 0a 07 6b  |.B.........Z...k|
		00000090  38 73 2d 61 70 70 12 08  6b 75 62 65 2d 64 6e 73  |8s-app..kube-dns|
		000000a0  5a 1f 0a 11 70 6f 64 2d  74 65 6d 70 6c 61 74 65  |Z...pod-template|
		000000b0  2d 68 61 73 68 12 0a 36  36 38 64 36 62 66 39 62  |-hash..668d6bf9b|
		000000c0  63 6a 53 0a 0a 52 65 70  6c 69 63 61 53 65 74 1a  |cjS..ReplicaSet [truncated 25040 chars]
	 >
	I0210 12:22:49.552810    5644 type.go:168] "Request Body" body=""
	I0210 12:22:49.552810    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:49.552901    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:49.552950    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:49.553029    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:49.555778    5644 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0210 12:22:49.555868    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:49.555868    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:49.555868    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:49.555868    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:49 GMT
	I0210 12:22:49.555868    5644 round_trippers.go:587]     Audit-Id: e434e79c-7cfd-4bb2-9a40-f56334afd1a2
	I0210 12:22:49.555868    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:49.555868    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:49.556326    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 99 25 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..%.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 39  33 31 38 00 42 08 08 86  |1b262.19318.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22597 chars]
	 >
	I0210 12:22:50.048728    5644 type.go:168] "Request Body" body=""
	I0210 12:22:50.048728    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-w8rr9
	I0210 12:22:50.048728    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:50.048728    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:50.048728    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:50.053262    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:22:50.053327    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:50.053327    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:50.053327    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:50 GMT
	I0210 12:22:50.053327    5644 round_trippers.go:587]     Audit-Id: 50236ee3-2b65-4549-bb8b-bf221251506d
	I0210 12:22:50.053327    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:50.053327    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:50.053327    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:50.054104    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  84 29 0a 99 19 0a 18 63  6f 72 65 64 6e 73 2d 36  |.).....coredns-6|
		00000020  36 38 64 36 62 66 39 62  63 2d 77 38 72 72 39 12  |68d6bf9bc-w8rr9.|
		00000030  13 63 6f 72 65 64 6e 73  2d 36 36 38 64 36 62 66  |.coredns-668d6bf|
		00000040  39 62 63 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |9bc-..kube-syste|
		00000050  6d 22 00 2a 24 65 34 35  61 33 37 62 66 2d 65 37  |m".*$e45a37bf-e7|
		00000060  64 61 2d 34 31 32 39 2d  62 62 37 65 2d 38 62 65  |da-4129-bb7e-8be|
		00000070  37 64 62 65 39 33 65 30  39 32 04 31 38 32 30 38  |7dbe93e092.18208|
		00000080  00 42 08 08 92 d4 a7 bd  06 10 00 5a 13 0a 07 6b  |.B.........Z...k|
		00000090  38 73 2d 61 70 70 12 08  6b 75 62 65 2d 64 6e 73  |8s-app..kube-dns|
		000000a0  5a 1f 0a 11 70 6f 64 2d  74 65 6d 70 6c 61 74 65  |Z...pod-template|
		000000b0  2d 68 61 73 68 12 0a 36  36 38 64 36 62 66 39 62  |-hash..668d6bf9b|
		000000c0  63 6a 53 0a 0a 52 65 70  6c 69 63 61 53 65 74 1a  |cjS..ReplicaSet [truncated 25040 chars]
	 >
	I0210 12:22:50.054401    5644 type.go:168] "Request Body" body=""
	I0210 12:22:50.054401    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:50.054401    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:50.054401    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:50.054401    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:50.056950    5644 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0210 12:22:50.057868    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:50.057868    5644 round_trippers.go:587]     Audit-Id: 325b7450-d66e-40ef-8966-98729cb385ef
	I0210 12:22:50.057868    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:50.057868    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:50.057868    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:50.057868    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:50.057868    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:50 GMT
	I0210 12:22:50.058132    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 99 25 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..%.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 39  33 31 38 00 42 08 08 86  |1b262.19318.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22597 chars]
	 >
	I0210 12:22:50.548469    5644 type.go:168] "Request Body" body=""
	I0210 12:22:50.548469    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-w8rr9
	I0210 12:22:50.548469    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:50.548469    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:50.548469    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:50.551804    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:22:50.552484    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:50.552484    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:50 GMT
	I0210 12:22:50.552484    5644 round_trippers.go:587]     Audit-Id: a6f47eff-209d-4cde-9d78-2ec52d3b93f7
	I0210 12:22:50.552484    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:50.552484    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:50.552484    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:50.552484    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:50.552824    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  84 29 0a 99 19 0a 18 63  6f 72 65 64 6e 73 2d 36  |.).....coredns-6|
		00000020  36 38 64 36 62 66 39 62  63 2d 77 38 72 72 39 12  |68d6bf9bc-w8rr9.|
		00000030  13 63 6f 72 65 64 6e 73  2d 36 36 38 64 36 62 66  |.coredns-668d6bf|
		00000040  39 62 63 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |9bc-..kube-syste|
		00000050  6d 22 00 2a 24 65 34 35  61 33 37 62 66 2d 65 37  |m".*$e45a37bf-e7|
		00000060  64 61 2d 34 31 32 39 2d  62 62 37 65 2d 38 62 65  |da-4129-bb7e-8be|
		00000070  37 64 62 65 39 33 65 30  39 32 04 31 38 32 30 38  |7dbe93e092.18208|
		00000080  00 42 08 08 92 d4 a7 bd  06 10 00 5a 13 0a 07 6b  |.B.........Z...k|
		00000090  38 73 2d 61 70 70 12 08  6b 75 62 65 2d 64 6e 73  |8s-app..kube-dns|
		000000a0  5a 1f 0a 11 70 6f 64 2d  74 65 6d 70 6c 61 74 65  |Z...pod-template|
		000000b0  2d 68 61 73 68 12 0a 36  36 38 64 36 62 66 39 62  |-hash..668d6bf9b|
		000000c0  63 6a 53 0a 0a 52 65 70  6c 69 63 61 53 65 74 1a  |cjS..ReplicaSet [truncated 25040 chars]
	 >
	I0210 12:22:50.553074    5644 type.go:168] "Request Body" body=""
	I0210 12:22:50.553189    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:50.553189    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:50.553237    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:50.553237    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:50.556155    5644 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0210 12:22:50.556155    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:50.556155    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:50.556155    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:50.556259    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:50.556259    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:50 GMT
	I0210 12:22:50.556259    5644 round_trippers.go:587]     Audit-Id: 3ca49763-c220-4134-8a23-03acbdc88f2b
	I0210 12:22:50.556259    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:50.556519    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 99 25 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..%.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 39  33 31 38 00 42 08 08 86  |1b262.19318.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22597 chars]
	 >
	I0210 12:22:51.047943    5644 type.go:168] "Request Body" body=""
	I0210 12:22:51.047943    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-w8rr9
	I0210 12:22:51.047943    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:51.047943    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:51.047943    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:51.052856    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:22:51.052856    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:51.052856    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:51.052856    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:51 GMT
	I0210 12:22:51.052856    5644 round_trippers.go:587]     Audit-Id: ec3e6855-d536-4882-b2f2-b41cefd1df45
	I0210 12:22:51.052856    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:51.052856    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:51.052856    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:51.052856    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  84 29 0a 99 19 0a 18 63  6f 72 65 64 6e 73 2d 36  |.).....coredns-6|
		00000020  36 38 64 36 62 66 39 62  63 2d 77 38 72 72 39 12  |68d6bf9bc-w8rr9.|
		00000030  13 63 6f 72 65 64 6e 73  2d 36 36 38 64 36 62 66  |.coredns-668d6bf|
		00000040  39 62 63 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |9bc-..kube-syste|
		00000050  6d 22 00 2a 24 65 34 35  61 33 37 62 66 2d 65 37  |m".*$e45a37bf-e7|
		00000060  64 61 2d 34 31 32 39 2d  62 62 37 65 2d 38 62 65  |da-4129-bb7e-8be|
		00000070  37 64 62 65 39 33 65 30  39 32 04 31 38 32 30 38  |7dbe93e092.18208|
		00000080  00 42 08 08 92 d4 a7 bd  06 10 00 5a 13 0a 07 6b  |.B.........Z...k|
		00000090  38 73 2d 61 70 70 12 08  6b 75 62 65 2d 64 6e 73  |8s-app..kube-dns|
		000000a0  5a 1f 0a 11 70 6f 64 2d  74 65 6d 70 6c 61 74 65  |Z...pod-template|
		000000b0  2d 68 61 73 68 12 0a 36  36 38 64 36 62 66 39 62  |-hash..668d6bf9b|
		000000c0  63 6a 53 0a 0a 52 65 70  6c 69 63 61 53 65 74 1a  |cjS..ReplicaSet [truncated 25040 chars]
	 >
	I0210 12:22:51.053540    5644 type.go:168] "Request Body" body=""
	I0210 12:22:51.053540    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:51.053540    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:51.053540    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:51.053540    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:51.056322    5644 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0210 12:22:51.056322    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:51.056322    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:51.056322    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:51.056322    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:51.056420    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:51 GMT
	I0210 12:22:51.056420    5644 round_trippers.go:587]     Audit-Id: cf9f53bc-e6a3-4bc6-ae66-08dc5875524c
	I0210 12:22:51.056420    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:51.056488    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 99 25 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..%.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 39  33 31 38 00 42 08 08 86  |1b262.19318.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22597 chars]
	 >
	I0210 12:22:51.056488    5644 pod_ready.go:103] pod "coredns-668d6bf9bc-w8rr9" in "kube-system" namespace has status "Ready":"False"
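	The has status "Ready":"False" line is the pod-side analogue of the node check: the pod counts as Ready only when its PodReady condition is True. A tiny self-contained sketch of that check, assuming client-go's corev1 types (illustrative, not the code behind pod_ready.go):

	package main

	import (
		"fmt"

		corev1 "k8s.io/api/core/v1"
	)

	// podIsReady: a pod counts as Ready only when its PodReady condition is True.
	func podIsReady(p *corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// A pod shaped like coredns-668d6bf9bc-w8rr9 at this point in the run.
		p := &corev1.Pod{Status: corev1.PodStatus{
			Conditions: []corev1.PodCondition{{Type: corev1.PodReady, Status: corev1.ConditionFalse}},
		}}
		fmt.Println(podIsReady(p)) // false, matching the log line above
	}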
	I0210 12:22:51.548592    5644 type.go:168] "Request Body" body=""
	I0210 12:22:51.548592    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-w8rr9
	I0210 12:22:51.548592    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:51.548592    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:51.548592    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:51.560667    5644 round_trippers.go:581] Response Status: 200 OK in 12 milliseconds
	I0210 12:22:51.560667    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:51.560667    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:51.560667    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:51 GMT
	I0210 12:22:51.560667    5644 round_trippers.go:587]     Audit-Id: 7cd86dcc-ab75-4a50-8269-3de1253d26c6
	I0210 12:22:51.560667    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:51.560667    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:51.560667    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:51.561741    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  84 29 0a 99 19 0a 18 63  6f 72 65 64 6e 73 2d 36  |.).....coredns-6|
		00000020  36 38 64 36 62 66 39 62  63 2d 77 38 72 72 39 12  |68d6bf9bc-w8rr9.|
		00000030  13 63 6f 72 65 64 6e 73  2d 36 36 38 64 36 62 66  |.coredns-668d6bf|
		00000040  39 62 63 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |9bc-..kube-syste|
		00000050  6d 22 00 2a 24 65 34 35  61 33 37 62 66 2d 65 37  |m".*$e45a37bf-e7|
		00000060  64 61 2d 34 31 32 39 2d  62 62 37 65 2d 38 62 65  |da-4129-bb7e-8be|
		00000070  37 64 62 65 39 33 65 30  39 32 04 31 38 32 30 38  |7dbe93e092.18208|
		00000080  00 42 08 08 92 d4 a7 bd  06 10 00 5a 13 0a 07 6b  |.B.........Z...k|
		00000090  38 73 2d 61 70 70 12 08  6b 75 62 65 2d 64 6e 73  |8s-app..kube-dns|
		000000a0  5a 1f 0a 11 70 6f 64 2d  74 65 6d 70 6c 61 74 65  |Z...pod-template|
		000000b0  2d 68 61 73 68 12 0a 36  36 38 64 36 62 66 39 62  |-hash..668d6bf9b|
		000000c0  63 6a 53 0a 0a 52 65 70  6c 69 63 61 53 65 74 1a  |cjS..ReplicaSet [truncated 25040 chars]
	 >
	I0210 12:22:51.561910    5644 type.go:168] "Request Body" body=""
	I0210 12:22:51.561910    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:51.561910    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:51.561910    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:51.561910    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:51.573072    5644 round_trippers.go:581] Response Status: 200 OK in 11 milliseconds
	I0210 12:22:51.573072    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:51.573072    5644 round_trippers.go:587]     Audit-Id: bc09745c-2163-48c1-8570-c0891179443f
	I0210 12:22:51.573072    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:51.573072    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:51.573072    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:51.573072    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:51.573072    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:51 GMT
	I0210 12:22:51.573641    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d4 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 39  33 35 38 00 42 08 08 86  |1b262.19358.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22276 chars]
	 >
	I0210 12:22:52.048778    5644 type.go:168] "Request Body" body=""
	I0210 12:22:52.048778    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-w8rr9
	I0210 12:22:52.048778    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:52.048778    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:52.048778    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:52.053261    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:22:52.053653    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:52.053653    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:52.053653    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:52.053653    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:52.053653    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:52.053653    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:52 GMT
	I0210 12:22:52.053653    5644 round_trippers.go:587]     Audit-Id: 0d3a5b1e-7779-48b7-8210-3f6bbf191e31
	I0210 12:22:52.054149    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  84 29 0a 99 19 0a 18 63  6f 72 65 64 6e 73 2d 36  |.).....coredns-6|
		00000020  36 38 64 36 62 66 39 62  63 2d 77 38 72 72 39 12  |68d6bf9bc-w8rr9.|
		00000030  13 63 6f 72 65 64 6e 73  2d 36 36 38 64 36 62 66  |.coredns-668d6bf|
		00000040  39 62 63 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |9bc-..kube-syste|
		00000050  6d 22 00 2a 24 65 34 35  61 33 37 62 66 2d 65 37  |m".*$e45a37bf-e7|
		00000060  64 61 2d 34 31 32 39 2d  62 62 37 65 2d 38 62 65  |da-4129-bb7e-8be|
		00000070  37 64 62 65 39 33 65 30  39 32 04 31 38 32 30 38  |7dbe93e092.18208|
		00000080  00 42 08 08 92 d4 a7 bd  06 10 00 5a 13 0a 07 6b  |.B.........Z...k|
		00000090  38 73 2d 61 70 70 12 08  6b 75 62 65 2d 64 6e 73  |8s-app..kube-dns|
		000000a0  5a 1f 0a 11 70 6f 64 2d  74 65 6d 70 6c 61 74 65  |Z...pod-template|
		000000b0  2d 68 61 73 68 12 0a 36  36 38 64 36 62 66 39 62  |-hash..668d6bf9b|
		000000c0  63 6a 53 0a 0a 52 65 70  6c 69 63 61 53 65 74 1a  |cjS..ReplicaSet [truncated 25040 chars]
	 >
	I0210 12:22:52.054383    5644 type.go:168] "Request Body" body=""
	I0210 12:22:52.054460    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:52.054460    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:52.054499    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:52.054499    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:52.057162    5644 round_trippers.go:581] Response Status: 200 OK in 1 milliseconds
	I0210 12:22:52.057195    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:52.057195    5644 round_trippers.go:587]     Audit-Id: bb2559e6-bae0-454d-9d8b-c46a78e46dc0
	I0210 12:22:52.057195    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:52.057195    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:52.057195    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:52.057195    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:52.057265    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:52 GMT
	I0210 12:22:52.057483    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d4 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 39  33 35 38 00 42 08 08 86  |1b262.19358.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22276 chars]
	 >
	I0210 12:22:52.547895    5644 type.go:168] "Request Body" body=""
	I0210 12:22:52.547895    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-w8rr9
	I0210 12:22:52.547895    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:52.547895    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:52.547895    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:52.552832    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:22:52.552923    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:52.552949    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:52 GMT
	I0210 12:22:52.552949    5644 round_trippers.go:587]     Audit-Id: d3f32117-3726-4f3f-a5b4-6d2ead8d7478
	I0210 12:22:52.552949    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:52.552949    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:52.552949    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:52.552949    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:52.553353    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  84 29 0a 99 19 0a 18 63  6f 72 65 64 6e 73 2d 36  |.).....coredns-6|
		00000020  36 38 64 36 62 66 39 62  63 2d 77 38 72 72 39 12  |68d6bf9bc-w8rr9.|
		00000030  13 63 6f 72 65 64 6e 73  2d 36 36 38 64 36 62 66  |.coredns-668d6bf|
		00000040  39 62 63 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |9bc-..kube-syste|
		00000050  6d 22 00 2a 24 65 34 35  61 33 37 62 66 2d 65 37  |m".*$e45a37bf-e7|
		00000060  64 61 2d 34 31 32 39 2d  62 62 37 65 2d 38 62 65  |da-4129-bb7e-8be|
		00000070  37 64 62 65 39 33 65 30  39 32 04 31 38 32 30 38  |7dbe93e092.18208|
		00000080  00 42 08 08 92 d4 a7 bd  06 10 00 5a 13 0a 07 6b  |.B.........Z...k|
		00000090  38 73 2d 61 70 70 12 08  6b 75 62 65 2d 64 6e 73  |8s-app..kube-dns|
		000000a0  5a 1f 0a 11 70 6f 64 2d  74 65 6d 70 6c 61 74 65  |Z...pod-template|
		000000b0  2d 68 61 73 68 12 0a 36  36 38 64 36 62 66 39 62  |-hash..668d6bf9b|
		000000c0  63 6a 53 0a 0a 52 65 70  6c 69 63 61 53 65 74 1a  |cjS..ReplicaSet [truncated 25040 chars]
	 >
	I0210 12:22:52.553582    5644 type.go:168] "Request Body" body=""
	I0210 12:22:52.553656    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:52.553656    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:52.553749    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:52.553769    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:52.556486    5644 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0210 12:22:52.556553    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:52.556553    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:52.556553    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:52 GMT
	I0210 12:22:52.556553    5644 round_trippers.go:587]     Audit-Id: cf82b9bc-3a7e-4212-8ccb-65aba06cd7ef
	I0210 12:22:52.556553    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:52.556553    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:52.556642    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:52.557341    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d4 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 39  33 35 38 00 42 08 08 86  |1b262.19358.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22276 chars]
	 >
	I0210 12:22:53.048002    5644 type.go:168] "Request Body" body=""
	I0210 12:22:53.048002    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-w8rr9
	I0210 12:22:53.048002    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:53.048002    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:53.048002    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:53.053287    5644 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 12:22:53.053287    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:53.053355    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:53.053355    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:53 GMT
	I0210 12:22:53.053355    5644 round_trippers.go:587]     Audit-Id: 941709c7-bd59-4426-b3df-bb6130caf07e
	I0210 12:22:53.053355    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:53.053355    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:53.053355    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:53.053760    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  84 29 0a 99 19 0a 18 63  6f 72 65 64 6e 73 2d 36  |.).....coredns-6|
		00000020  36 38 64 36 62 66 39 62  63 2d 77 38 72 72 39 12  |68d6bf9bc-w8rr9.|
		00000030  13 63 6f 72 65 64 6e 73  2d 36 36 38 64 36 62 66  |.coredns-668d6bf|
		00000040  39 62 63 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |9bc-..kube-syste|
		00000050  6d 22 00 2a 24 65 34 35  61 33 37 62 66 2d 65 37  |m".*$e45a37bf-e7|
		00000060  64 61 2d 34 31 32 39 2d  62 62 37 65 2d 38 62 65  |da-4129-bb7e-8be|
		00000070  37 64 62 65 39 33 65 30  39 32 04 31 38 32 30 38  |7dbe93e092.18208|
		00000080  00 42 08 08 92 d4 a7 bd  06 10 00 5a 13 0a 07 6b  |.B.........Z...k|
		00000090  38 73 2d 61 70 70 12 08  6b 75 62 65 2d 64 6e 73  |8s-app..kube-dns|
		000000a0  5a 1f 0a 11 70 6f 64 2d  74 65 6d 70 6c 61 74 65  |Z...pod-template|
		000000b0  2d 68 61 73 68 12 0a 36  36 38 64 36 62 66 39 62  |-hash..668d6bf9b|
		000000c0  63 6a 53 0a 0a 52 65 70  6c 69 63 61 53 65 74 1a  |cjS..ReplicaSet [truncated 25040 chars]
	 >
	I0210 12:22:53.054025    5644 type.go:168] "Request Body" body=""
	I0210 12:22:53.054061    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:53.054124    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:53.054124    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:53.054124    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:53.056564    5644 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0210 12:22:53.056613    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:53.056613    5644 round_trippers.go:587]     Audit-Id: a5898114-0d8d-4787-9996-90dd194192bd
	I0210 12:22:53.056613    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:53.056613    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:53.056613    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:53.056613    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:53.056613    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:53 GMT
	I0210 12:22:53.056888    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d4 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 39  33 35 38 00 42 08 08 86  |1b262.19358.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22276 chars]
	 >
	I0210 12:22:53.057050    5644 pod_ready.go:103] pod "coredns-668d6bf9bc-w8rr9" in "kube-system" namespace has status "Ready":"False"
	I0210 12:22:53.548518    5644 type.go:168] "Request Body" body=""
	I0210 12:22:53.548518    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-w8rr9
	I0210 12:22:53.548518    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:53.548518    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:53.548518    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:53.553136    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:22:53.553136    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:53.553136    5644 round_trippers.go:587]     Audit-Id: a87d1560-5727-485c-aff9-f9fa53960d7e
	I0210 12:22:53.553136    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:53.553136    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:53.553136    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:53.553136    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:53.553136    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:53 GMT
	I0210 12:22:53.553505    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  84 29 0a 99 19 0a 18 63  6f 72 65 64 6e 73 2d 36  |.).....coredns-6|
		00000020  36 38 64 36 62 66 39 62  63 2d 77 38 72 72 39 12  |68d6bf9bc-w8rr9.|
		00000030  13 63 6f 72 65 64 6e 73  2d 36 36 38 64 36 62 66  |.coredns-668d6bf|
		00000040  39 62 63 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |9bc-..kube-syste|
		00000050  6d 22 00 2a 24 65 34 35  61 33 37 62 66 2d 65 37  |m".*$e45a37bf-e7|
		00000060  64 61 2d 34 31 32 39 2d  62 62 37 65 2d 38 62 65  |da-4129-bb7e-8be|
		00000070  37 64 62 65 39 33 65 30  39 32 04 31 38 32 30 38  |7dbe93e092.18208|
		00000080  00 42 08 08 92 d4 a7 bd  06 10 00 5a 13 0a 07 6b  |.B.........Z...k|
		00000090  38 73 2d 61 70 70 12 08  6b 75 62 65 2d 64 6e 73  |8s-app..kube-dns|
		000000a0  5a 1f 0a 11 70 6f 64 2d  74 65 6d 70 6c 61 74 65  |Z...pod-template|
		000000b0  2d 68 61 73 68 12 0a 36  36 38 64 36 62 66 39 62  |-hash..668d6bf9b|
		000000c0  63 6a 53 0a 0a 52 65 70  6c 69 63 61 53 65 74 1a  |cjS..ReplicaSet [truncated 25040 chars]
	 >
	I0210 12:22:53.554943    5644 type.go:168] "Request Body" body=""
	I0210 12:22:53.555356    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:53.555356    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:53.555356    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:53.555356    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:53.558674    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:22:53.558747    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:53.558747    5644 round_trippers.go:587]     Audit-Id: 18dfbd31-50ca-4d25-b94b-6a180dbb33dc
	I0210 12:22:53.558747    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:53.558747    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:53.558828    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:53.558828    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:53.558828    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:53 GMT
	I0210 12:22:53.559705    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d4 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 39  33 35 38 00 42 08 08 86  |1b262.19358.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22276 chars]
	 >
	I0210 12:22:54.048397    5644 type.go:168] "Request Body" body=""
	I0210 12:22:54.048397    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-w8rr9
	I0210 12:22:54.048397    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:54.048397    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:54.048397    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:54.052724    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:22:54.052724    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:54.052724    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:54.052724    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:54 GMT
	I0210 12:22:54.052724    5644 round_trippers.go:587]     Audit-Id: 11702046-585b-47c8-a970-b22b1af77eda
	I0210 12:22:54.052724    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:54.052724    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:54.052724    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:54.053353    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  84 29 0a 99 19 0a 18 63  6f 72 65 64 6e 73 2d 36  |.).....coredns-6|
		00000020  36 38 64 36 62 66 39 62  63 2d 77 38 72 72 39 12  |68d6bf9bc-w8rr9.|
		00000030  13 63 6f 72 65 64 6e 73  2d 36 36 38 64 36 62 66  |.coredns-668d6bf|
		00000040  39 62 63 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |9bc-..kube-syste|
		00000050  6d 22 00 2a 24 65 34 35  61 33 37 62 66 2d 65 37  |m".*$e45a37bf-e7|
		00000060  64 61 2d 34 31 32 39 2d  62 62 37 65 2d 38 62 65  |da-4129-bb7e-8be|
		00000070  37 64 62 65 39 33 65 30  39 32 04 31 38 32 30 38  |7dbe93e092.18208|
		00000080  00 42 08 08 92 d4 a7 bd  06 10 00 5a 13 0a 07 6b  |.B.........Z...k|
		00000090  38 73 2d 61 70 70 12 08  6b 75 62 65 2d 64 6e 73  |8s-app..kube-dns|
		000000a0  5a 1f 0a 11 70 6f 64 2d  74 65 6d 70 6c 61 74 65  |Z...pod-template|
		000000b0  2d 68 61 73 68 12 0a 36  36 38 64 36 62 66 39 62  |-hash..668d6bf9b|
		000000c0  63 6a 53 0a 0a 52 65 70  6c 69 63 61 53 65 74 1a  |cjS..ReplicaSet [truncated 25040 chars]
	 >
	I0210 12:22:54.053629    5644 type.go:168] "Request Body" body=""
	I0210 12:22:54.053712    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:54.053742    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:54.053742    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:54.053742    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:54.058130    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:22:54.058130    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:54.058130    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:54.058130    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:54.058130    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:54.058130    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:54.058130    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:54 GMT
	I0210 12:22:54.058130    5644 round_trippers.go:587]     Audit-Id: a7bbd6d3-cc44-4be7-a707-6c9887e98b2a
	I0210 12:22:54.058130    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d4 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 39  33 35 38 00 42 08 08 86  |1b262.19358.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22276 chars]
	 >
	I0210 12:22:54.548128    5644 type.go:168] "Request Body" body=""
	I0210 12:22:54.548128    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-w8rr9
	I0210 12:22:54.548128    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:54.548128    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:54.548128    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:54.553247    5644 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 12:22:54.553331    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:54.553331    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:54.553331    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:54.553331    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:54.553331    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:54.553331    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:54 GMT
	I0210 12:22:54.553331    5644 round_trippers.go:587]     Audit-Id: ff8245bf-8b8e-4b2f-8f9b-7acfc27ef03c
	I0210 12:22:54.553954    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  84 29 0a 99 19 0a 18 63  6f 72 65 64 6e 73 2d 36  |.).....coredns-6|
		00000020  36 38 64 36 62 66 39 62  63 2d 77 38 72 72 39 12  |68d6bf9bc-w8rr9.|
		00000030  13 63 6f 72 65 64 6e 73  2d 36 36 38 64 36 62 66  |.coredns-668d6bf|
		00000040  39 62 63 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |9bc-..kube-syste|
		00000050  6d 22 00 2a 24 65 34 35  61 33 37 62 66 2d 65 37  |m".*$e45a37bf-e7|
		00000060  64 61 2d 34 31 32 39 2d  62 62 37 65 2d 38 62 65  |da-4129-bb7e-8be|
		00000070  37 64 62 65 39 33 65 30  39 32 04 31 38 32 30 38  |7dbe93e092.18208|
		00000080  00 42 08 08 92 d4 a7 bd  06 10 00 5a 13 0a 07 6b  |.B.........Z...k|
		00000090  38 73 2d 61 70 70 12 08  6b 75 62 65 2d 64 6e 73  |8s-app..kube-dns|
		000000a0  5a 1f 0a 11 70 6f 64 2d  74 65 6d 70 6c 61 74 65  |Z...pod-template|
		000000b0  2d 68 61 73 68 12 0a 36  36 38 64 36 62 66 39 62  |-hash..668d6bf9b|
		000000c0  63 6a 53 0a 0a 52 65 70  6c 69 63 61 53 65 74 1a  |cjS..ReplicaSet [truncated 25040 chars]
	 >
	I0210 12:22:54.554252    5644 type.go:168] "Request Body" body=""
	I0210 12:22:54.554252    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:54.554252    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:54.554252    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:54.554375    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:54.556324    5644 round_trippers.go:581] Response Status: 200 OK in 1 milliseconds
	I0210 12:22:54.556324    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:54.556324    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:54.556324    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:54.556324    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:54.556324    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:54 GMT
	I0210 12:22:54.556324    5644 round_trippers.go:587]     Audit-Id: c422f000-6efc-4b95-9a2f-568d736b56a3
	I0210 12:22:54.556324    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:54.557329    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d4 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 39  33 35 38 00 42 08 08 86  |1b262.19358.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22276 chars]
	 >
	I0210 12:22:55.048165    5644 type.go:168] "Request Body" body=""
	I0210 12:22:55.048721    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-w8rr9
	I0210 12:22:55.048721    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:55.048721    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:55.048721    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:55.052827    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:22:55.052827    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:55.052827    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:55.052827    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:55 GMT
	I0210 12:22:55.052827    5644 round_trippers.go:587]     Audit-Id: 6a173839-7b42-4db0-8d04-966ee075559b
	I0210 12:22:55.052827    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:55.052827    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:55.052827    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:55.052827    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  84 29 0a 99 19 0a 18 63  6f 72 65 64 6e 73 2d 36  |.).....coredns-6|
		00000020  36 38 64 36 62 66 39 62  63 2d 77 38 72 72 39 12  |68d6bf9bc-w8rr9.|
		00000030  13 63 6f 72 65 64 6e 73  2d 36 36 38 64 36 62 66  |.coredns-668d6bf|
		00000040  39 62 63 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |9bc-..kube-syste|
		00000050  6d 22 00 2a 24 65 34 35  61 33 37 62 66 2d 65 37  |m".*$e45a37bf-e7|
		00000060  64 61 2d 34 31 32 39 2d  62 62 37 65 2d 38 62 65  |da-4129-bb7e-8be|
		00000070  37 64 62 65 39 33 65 30  39 32 04 31 38 32 30 38  |7dbe93e092.18208|
		00000080  00 42 08 08 92 d4 a7 bd  06 10 00 5a 13 0a 07 6b  |.B.........Z...k|
		00000090  38 73 2d 61 70 70 12 08  6b 75 62 65 2d 64 6e 73  |8s-app..kube-dns|
		000000a0  5a 1f 0a 11 70 6f 64 2d  74 65 6d 70 6c 61 74 65  |Z...pod-template|
		000000b0  2d 68 61 73 68 12 0a 36  36 38 64 36 62 66 39 62  |-hash..668d6bf9b|
		000000c0  63 6a 53 0a 0a 52 65 70  6c 69 63 61 53 65 74 1a  |cjS..ReplicaSet [truncated 25040 chars]
	 >
	I0210 12:22:55.053487    5644 type.go:168] "Request Body" body=""
	I0210 12:22:55.053487    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:55.053487    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:55.053487    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:55.053487    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:55.056758    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:22:55.056838    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:55.056838    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:55.056838    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:55.056838    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:55.056912    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:55 GMT
	I0210 12:22:55.056912    5644 round_trippers.go:587]     Audit-Id: 0e7a36b8-eb8c-476a-b48d-32cc4fa00c25
	I0210 12:22:55.056912    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:55.056992    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d4 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 39  33 35 38 00 42 08 08 86  |1b262.19358.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22276 chars]
	 >
	I0210 12:22:55.548564    5644 type.go:168] "Request Body" body=""
	I0210 12:22:55.548705    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-w8rr9
	I0210 12:22:55.548705    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:55.548705    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:55.548705    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:55.551788    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:22:55.551788    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:55.551788    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:55 GMT
	I0210 12:22:55.551788    5644 round_trippers.go:587]     Audit-Id: 72f0ef6b-d7e6-4d60-94d6-9bd3b5e71b08
	I0210 12:22:55.551788    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:55.551920    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:55.551920    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:55.551920    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:55.552241    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  84 29 0a 99 19 0a 18 63  6f 72 65 64 6e 73 2d 36  |.).....coredns-6|
		00000020  36 38 64 36 62 66 39 62  63 2d 77 38 72 72 39 12  |68d6bf9bc-w8rr9.|
		00000030  13 63 6f 72 65 64 6e 73  2d 36 36 38 64 36 62 66  |.coredns-668d6bf|
		00000040  39 62 63 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |9bc-..kube-syste|
		00000050  6d 22 00 2a 24 65 34 35  61 33 37 62 66 2d 65 37  |m".*$e45a37bf-e7|
		00000060  64 61 2d 34 31 32 39 2d  62 62 37 65 2d 38 62 65  |da-4129-bb7e-8be|
		00000070  37 64 62 65 39 33 65 30  39 32 04 31 38 32 30 38  |7dbe93e092.18208|
		00000080  00 42 08 08 92 d4 a7 bd  06 10 00 5a 13 0a 07 6b  |.B.........Z...k|
		00000090  38 73 2d 61 70 70 12 08  6b 75 62 65 2d 64 6e 73  |8s-app..kube-dns|
		000000a0  5a 1f 0a 11 70 6f 64 2d  74 65 6d 70 6c 61 74 65  |Z...pod-template|
		000000b0  2d 68 61 73 68 12 0a 36  36 38 64 36 62 66 39 62  |-hash..668d6bf9b|
		000000c0  63 6a 53 0a 0a 52 65 70  6c 69 63 61 53 65 74 1a  |cjS..ReplicaSet [truncated 25040 chars]
	 >
	I0210 12:22:55.552402    5644 type.go:168] "Request Body" body=""
	I0210 12:22:55.552533    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:55.552533    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:55.552533    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:55.552533    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:55.555617    5644 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0210 12:22:55.555617    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:55.555617    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:55.555617    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:55 GMT
	I0210 12:22:55.555617    5644 round_trippers.go:587]     Audit-Id: 2b0c960f-80d1-4151-97fa-06ee88e575cd
	I0210 12:22:55.555617    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:55.555617    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:55.555617    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:55.556004    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d4 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 39  33 35 38 00 42 08 08 86  |1b262.19358.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22276 chars]
	 >
	I0210 12:22:55.556214    5644 pod_ready.go:103] pod "coredns-668d6bf9bc-w8rr9" in "kube-system" namespace has status "Ready":"False"
	I0210 12:22:56.048268    5644 type.go:168] "Request Body" body=""
	I0210 12:22:56.048493    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-w8rr9
	I0210 12:22:56.048493    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:56.048493    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:56.048548    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:56.052471    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:22:56.052471    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:56.052471    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:56.052471    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:56.052471    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:56.052471    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:56.052471    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:56 GMT
	I0210 12:22:56.052471    5644 round_trippers.go:587]     Audit-Id: e4d04e08-f464-48bd-9184-fc8d9a02e2e0
	I0210 12:22:56.053010    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  84 29 0a 99 19 0a 18 63  6f 72 65 64 6e 73 2d 36  |.).....coredns-6|
		00000020  36 38 64 36 62 66 39 62  63 2d 77 38 72 72 39 12  |68d6bf9bc-w8rr9.|
		00000030  13 63 6f 72 65 64 6e 73  2d 36 36 38 64 36 62 66  |.coredns-668d6bf|
		00000040  39 62 63 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |9bc-..kube-syste|
		00000050  6d 22 00 2a 24 65 34 35  61 33 37 62 66 2d 65 37  |m".*$e45a37bf-e7|
		00000060  64 61 2d 34 31 32 39 2d  62 62 37 65 2d 38 62 65  |da-4129-bb7e-8be|
		00000070  37 64 62 65 39 33 65 30  39 32 04 31 38 32 30 38  |7dbe93e092.18208|
		00000080  00 42 08 08 92 d4 a7 bd  06 10 00 5a 13 0a 07 6b  |.B.........Z...k|
		00000090  38 73 2d 61 70 70 12 08  6b 75 62 65 2d 64 6e 73  |8s-app..kube-dns|
		000000a0  5a 1f 0a 11 70 6f 64 2d  74 65 6d 70 6c 61 74 65  |Z...pod-template|
		000000b0  2d 68 61 73 68 12 0a 36  36 38 64 36 62 66 39 62  |-hash..668d6bf9b|
		000000c0  63 6a 53 0a 0a 52 65 70  6c 69 63 61 53 65 74 1a  |cjS..ReplicaSet [truncated 25040 chars]
	 >
	I0210 12:22:56.053305    5644 type.go:168] "Request Body" body=""
	I0210 12:22:56.053408    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:56.053408    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:56.053510    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:56.053510    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:56.059025    5644 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 12:22:56.059025    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:56.059025    5644 round_trippers.go:587]     Audit-Id: 5fecaf1f-5ea9-4e44-a5c9-d4e55218729b
	I0210 12:22:56.059025    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:56.059025    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:56.059025    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:56.059025    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:56.059025    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:56 GMT
	I0210 12:22:56.059025    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d4 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 39  33 35 38 00 42 08 08 86  |1b262.19358.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22276 chars]
	 >
	I0210 12:22:56.548852    5644 type.go:168] "Request Body" body=""
	I0210 12:22:56.548852    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-w8rr9
	I0210 12:22:56.548852    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:56.548852    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:56.548852    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:56.552997    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:22:56.553378    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:56.553378    5644 round_trippers.go:587]     Audit-Id: f8eebaa7-b4cd-4053-a554-3407b8747904
	I0210 12:22:56.553378    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:56.553378    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:56.553378    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:56.553462    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:56.553462    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:56 GMT
	I0210 12:22:56.553462    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  84 29 0a 99 19 0a 18 63  6f 72 65 64 6e 73 2d 36  |.).....coredns-6|
		00000020  36 38 64 36 62 66 39 62  63 2d 77 38 72 72 39 12  |68d6bf9bc-w8rr9.|
		00000030  13 63 6f 72 65 64 6e 73  2d 36 36 38 64 36 62 66  |.coredns-668d6bf|
		00000040  39 62 63 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |9bc-..kube-syste|
		00000050  6d 22 00 2a 24 65 34 35  61 33 37 62 66 2d 65 37  |m".*$e45a37bf-e7|
		00000060  64 61 2d 34 31 32 39 2d  62 62 37 65 2d 38 62 65  |da-4129-bb7e-8be|
		00000070  37 64 62 65 39 33 65 30  39 32 04 31 38 32 30 38  |7dbe93e092.18208|
		00000080  00 42 08 08 92 d4 a7 bd  06 10 00 5a 13 0a 07 6b  |.B.........Z...k|
		00000090  38 73 2d 61 70 70 12 08  6b 75 62 65 2d 64 6e 73  |8s-app..kube-dns|
		000000a0  5a 1f 0a 11 70 6f 64 2d  74 65 6d 70 6c 61 74 65  |Z...pod-template|
		000000b0  2d 68 61 73 68 12 0a 36  36 38 64 36 62 66 39 62  |-hash..668d6bf9b|
		000000c0  63 6a 53 0a 0a 52 65 70  6c 69 63 61 53 65 74 1a  |cjS..ReplicaSet [truncated 25040 chars]
	 >
	I0210 12:22:56.553994    5644 type.go:168] "Request Body" body=""
	I0210 12:22:56.554090    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:56.554090    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:56.554090    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:56.554173    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:56.556987    5644 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0210 12:22:56.557178    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:56.557194    5644 round_trippers.go:587]     Audit-Id: 76d2bd6d-dff9-4b85-a471-91e958692745
	I0210 12:22:56.557194    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:56.557194    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:56.557245    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:56.557287    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:56.557287    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:56 GMT
	I0210 12:22:56.557780    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d4 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 39  33 35 38 00 42 08 08 86  |1b262.19358.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22276 chars]
	 >
	I0210 12:22:57.048561    5644 type.go:168] "Request Body" body=""
	I0210 12:22:57.048561    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-w8rr9
	I0210 12:22:57.048561    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:57.048561    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:57.048561    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:57.053397    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:22:57.053471    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:57.053471    5644 round_trippers.go:587]     Audit-Id: c9ee8187-9b90-4249-9930-9dc98bee14f3
	I0210 12:22:57.053471    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:57.053471    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:57.053471    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:57.053471    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:57.053471    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:57 GMT
	I0210 12:22:57.053770    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  84 29 0a 99 19 0a 18 63  6f 72 65 64 6e 73 2d 36  |.).....coredns-6|
		00000020  36 38 64 36 62 66 39 62  63 2d 77 38 72 72 39 12  |68d6bf9bc-w8rr9.|
		00000030  13 63 6f 72 65 64 6e 73  2d 36 36 38 64 36 62 66  |.coredns-668d6bf|
		00000040  39 62 63 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |9bc-..kube-syste|
		00000050  6d 22 00 2a 24 65 34 35  61 33 37 62 66 2d 65 37  |m".*$e45a37bf-e7|
		00000060  64 61 2d 34 31 32 39 2d  62 62 37 65 2d 38 62 65  |da-4129-bb7e-8be|
		00000070  37 64 62 65 39 33 65 30  39 32 04 31 38 32 30 38  |7dbe93e092.18208|
		00000080  00 42 08 08 92 d4 a7 bd  06 10 00 5a 13 0a 07 6b  |.B.........Z...k|
		00000090  38 73 2d 61 70 70 12 08  6b 75 62 65 2d 64 6e 73  |8s-app..kube-dns|
		000000a0  5a 1f 0a 11 70 6f 64 2d  74 65 6d 70 6c 61 74 65  |Z...pod-template|
		000000b0  2d 68 61 73 68 12 0a 36  36 38 64 36 62 66 39 62  |-hash..668d6bf9b|
		000000c0  63 6a 53 0a 0a 52 65 70  6c 69 63 61 53 65 74 1a  |cjS..ReplicaSet [truncated 25040 chars]
	 >
	I0210 12:22:57.053770    5644 type.go:168] "Request Body" body=""
	I0210 12:22:57.053770    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:57.053770    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:57.053770    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:57.053770    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:57.060082    5644 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0210 12:22:57.060082    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:57.060082    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:57.060082    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:57.060082    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:57.060082    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:57 GMT
	I0210 12:22:57.060082    5644 round_trippers.go:587]     Audit-Id: 7b658c2b-59d9-49a4-b489-11e2acd32642
	I0210 12:22:57.060082    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:57.060627    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d4 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 39  33 35 38 00 42 08 08 86  |1b262.19358.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22276 chars]
	 >
	I0210 12:22:57.548244    5644 type.go:168] "Request Body" body=""
	I0210 12:22:57.548829    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-w8rr9
	I0210 12:22:57.548829    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:57.548892    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:57.548892    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:57.551234    5644 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0210 12:22:57.552298    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:57.552298    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:57 GMT
	I0210 12:22:57.552298    5644 round_trippers.go:587]     Audit-Id: 89a6055e-8009-46d9-b659-55794830e9c5
	I0210 12:22:57.552298    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:57.552298    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:57.552298    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:57.552298    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:57.552638    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  84 29 0a 99 19 0a 18 63  6f 72 65 64 6e 73 2d 36  |.).....coredns-6|
		00000020  36 38 64 36 62 66 39 62  63 2d 77 38 72 72 39 12  |68d6bf9bc-w8rr9.|
		00000030  13 63 6f 72 65 64 6e 73  2d 36 36 38 64 36 62 66  |.coredns-668d6bf|
		00000040  39 62 63 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |9bc-..kube-syste|
		00000050  6d 22 00 2a 24 65 34 35  61 33 37 62 66 2d 65 37  |m".*$e45a37bf-e7|
		00000060  64 61 2d 34 31 32 39 2d  62 62 37 65 2d 38 62 65  |da-4129-bb7e-8be|
		00000070  37 64 62 65 39 33 65 30  39 32 04 31 38 32 30 38  |7dbe93e092.18208|
		00000080  00 42 08 08 92 d4 a7 bd  06 10 00 5a 13 0a 07 6b  |.B.........Z...k|
		00000090  38 73 2d 61 70 70 12 08  6b 75 62 65 2d 64 6e 73  |8s-app..kube-dns|
		000000a0  5a 1f 0a 11 70 6f 64 2d  74 65 6d 70 6c 61 74 65  |Z...pod-template|
		000000b0  2d 68 61 73 68 12 0a 36  36 38 64 36 62 66 39 62  |-hash..668d6bf9b|
		000000c0  63 6a 53 0a 0a 52 65 70  6c 69 63 61 53 65 74 1a  |cjS..ReplicaSet [truncated 25040 chars]
	 >
	I0210 12:22:57.552890    5644 type.go:168] "Request Body" body=""
	I0210 12:22:57.552953    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:57.552953    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:57.553021    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:57.553021    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:57.555898    5644 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0210 12:22:57.555993    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:57.555993    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:57.555993    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:57.555993    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:57.555993    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:57.555993    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:57 GMT
	I0210 12:22:57.555993    5644 round_trippers.go:587]     Audit-Id: e9d4323b-d339-4370-88b0-a040f9a9903d
	I0210 12:22:57.557164    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d4 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 39  33 35 38 00 42 08 08 86  |1b262.19358.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22276 chars]
	 >
	I0210 12:22:57.557164    5644 pod_ready.go:103] pod "coredns-668d6bf9bc-w8rr9" in "kube-system" namespace has status "Ready":"False"
	I0210 12:22:58.047965    5644 type.go:168] "Request Body" body=""
	I0210 12:22:58.047965    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-w8rr9
	I0210 12:22:58.047965    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:58.047965    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:58.047965    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:58.051892    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:22:58.051892    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:58.051892    5644 round_trippers.go:587]     Audit-Id: ec6cabde-3121-4a45-8a7e-6c91844d80d4
	I0210 12:22:58.051892    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:58.051892    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:58.051892    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:58.051892    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:58.051892    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:58 GMT
	I0210 12:22:58.052626    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  84 29 0a 99 19 0a 18 63  6f 72 65 64 6e 73 2d 36  |.).....coredns-6|
		00000020  36 38 64 36 62 66 39 62  63 2d 77 38 72 72 39 12  |68d6bf9bc-w8rr9.|
		00000030  13 63 6f 72 65 64 6e 73  2d 36 36 38 64 36 62 66  |.coredns-668d6bf|
		00000040  39 62 63 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |9bc-..kube-syste|
		00000050  6d 22 00 2a 24 65 34 35  61 33 37 62 66 2d 65 37  |m".*$e45a37bf-e7|
		00000060  64 61 2d 34 31 32 39 2d  62 62 37 65 2d 38 62 65  |da-4129-bb7e-8be|
		00000070  37 64 62 65 39 33 65 30  39 32 04 31 38 32 30 38  |7dbe93e092.18208|
		00000080  00 42 08 08 92 d4 a7 bd  06 10 00 5a 13 0a 07 6b  |.B.........Z...k|
		00000090  38 73 2d 61 70 70 12 08  6b 75 62 65 2d 64 6e 73  |8s-app..kube-dns|
		000000a0  5a 1f 0a 11 70 6f 64 2d  74 65 6d 70 6c 61 74 65  |Z...pod-template|
		000000b0  2d 68 61 73 68 12 0a 36  36 38 64 36 62 66 39 62  |-hash..668d6bf9b|
		000000c0  63 6a 53 0a 0a 52 65 70  6c 69 63 61 53 65 74 1a  |cjS..ReplicaSet [truncated 25040 chars]
	 >
	I0210 12:22:58.052828    5644 type.go:168] "Request Body" body=""
	I0210 12:22:58.052828    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:58.052828    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:58.052828    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:58.052828    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:58.056507    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:22:58.056507    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:58.056606    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:58.056606    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:58 GMT
	I0210 12:22:58.056606    5644 round_trippers.go:587]     Audit-Id: 042a794d-a59f-43c1-ad36-0eab8040b31e
	I0210 12:22:58.056606    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:58.056606    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:58.056606    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:58.056881    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d4 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 39  33 35 38 00 42 08 08 86  |1b262.19358.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22276 chars]
	 >
	I0210 12:22:58.548014    5644 type.go:168] "Request Body" body=""
	I0210 12:22:58.548014    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-w8rr9
	I0210 12:22:58.548014    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:58.548014    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:58.548014    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:58.551055    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:22:58.551914    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:58.551914    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:58 GMT
	I0210 12:22:58.551914    5644 round_trippers.go:587]     Audit-Id: 15462687-af7f-4aca-a9f2-e67b0a0d33b3
	I0210 12:22:58.551914    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:58.551914    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:58.551914    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:58.551914    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:58.552240    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  84 29 0a 99 19 0a 18 63  6f 72 65 64 6e 73 2d 36  |.).....coredns-6|
		00000020  36 38 64 36 62 66 39 62  63 2d 77 38 72 72 39 12  |68d6bf9bc-w8rr9.|
		00000030  13 63 6f 72 65 64 6e 73  2d 36 36 38 64 36 62 66  |.coredns-668d6bf|
		00000040  39 62 63 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |9bc-..kube-syste|
		00000050  6d 22 00 2a 24 65 34 35  61 33 37 62 66 2d 65 37  |m".*$e45a37bf-e7|
		00000060  64 61 2d 34 31 32 39 2d  62 62 37 65 2d 38 62 65  |da-4129-bb7e-8be|
		00000070  37 64 62 65 39 33 65 30  39 32 04 31 38 32 30 38  |7dbe93e092.18208|
		00000080  00 42 08 08 92 d4 a7 bd  06 10 00 5a 13 0a 07 6b  |.B.........Z...k|
		00000090  38 73 2d 61 70 70 12 08  6b 75 62 65 2d 64 6e 73  |8s-app..kube-dns|
		000000a0  5a 1f 0a 11 70 6f 64 2d  74 65 6d 70 6c 61 74 65  |Z...pod-template|
		000000b0  2d 68 61 73 68 12 0a 36  36 38 64 36 62 66 39 62  |-hash..668d6bf9b|
		000000c0  63 6a 53 0a 0a 52 65 70  6c 69 63 61 53 65 74 1a  |cjS..ReplicaSet [truncated 25040 chars]
	 >
	I0210 12:22:58.552500    5644 type.go:168] "Request Body" body=""
	I0210 12:22:58.552576    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:58.552576    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:58.552576    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:58.552576    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:58.554947    5644 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0210 12:22:58.554947    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:58.554947    5644 round_trippers.go:587]     Audit-Id: 973ac095-f1db-4267-b09b-3d3eb67ca1d1
	I0210 12:22:58.554947    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:58.555803    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:58.555803    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:58.555803    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:58.555803    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:58 GMT
	I0210 12:22:58.556127    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d4 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 39  33 35 38 00 42 08 08 86  |1b262.19358.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22276 chars]
	 >
	I0210 12:22:59.048191    5644 type.go:168] "Request Body" body=""
	I0210 12:22:59.048191    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-w8rr9
	I0210 12:22:59.048191    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:59.048191    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:59.048191    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:59.057777    5644 round_trippers.go:581] Response Status: 200 OK in 8 milliseconds
	I0210 12:22:59.057777    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:59.057777    5644 round_trippers.go:587]     Audit-Id: 41d94b59-f635-4e8f-9d42-c4f47546a84c
	I0210 12:22:59.057777    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:59.057865    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:59.057865    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:59.057865    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:59.057865    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:59 GMT
	I0210 12:22:59.057981    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  84 29 0a 99 19 0a 18 63  6f 72 65 64 6e 73 2d 36  |.).....coredns-6|
		00000020  36 38 64 36 62 66 39 62  63 2d 77 38 72 72 39 12  |68d6bf9bc-w8rr9.|
		00000030  13 63 6f 72 65 64 6e 73  2d 36 36 38 64 36 62 66  |.coredns-668d6bf|
		00000040  39 62 63 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |9bc-..kube-syste|
		00000050  6d 22 00 2a 24 65 34 35  61 33 37 62 66 2d 65 37  |m".*$e45a37bf-e7|
		00000060  64 61 2d 34 31 32 39 2d  62 62 37 65 2d 38 62 65  |da-4129-bb7e-8be|
		00000070  37 64 62 65 39 33 65 30  39 32 04 31 38 32 30 38  |7dbe93e092.18208|
		00000080  00 42 08 08 92 d4 a7 bd  06 10 00 5a 13 0a 07 6b  |.B.........Z...k|
		00000090  38 73 2d 61 70 70 12 08  6b 75 62 65 2d 64 6e 73  |8s-app..kube-dns|
		000000a0  5a 1f 0a 11 70 6f 64 2d  74 65 6d 70 6c 61 74 65  |Z...pod-template|
		000000b0  2d 68 61 73 68 12 0a 36  36 38 64 36 62 66 39 62  |-hash..668d6bf9b|
		000000c0  63 6a 53 0a 0a 52 65 70  6c 69 63 61 53 65 74 1a  |cjS..ReplicaSet [truncated 25040 chars]
	 >
	I0210 12:22:59.057981    5644 type.go:168] "Request Body" body=""
	I0210 12:22:59.057981    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:59.057981    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:59.057981    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:59.058506    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:59.065872    5644 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0210 12:22:59.065872    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:59.065872    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:59.065872    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:59.065872    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:59 GMT
	I0210 12:22:59.065872    5644 round_trippers.go:587]     Audit-Id: 3ff03e8f-d0eb-4ddd-a3d9-0b436919fddd
	I0210 12:22:59.065872    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:59.065872    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:59.065872    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d4 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 39  33 35 38 00 42 08 08 86  |1b262.19358.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22276 chars]
	 >
	I0210 12:22:59.548084    5644 type.go:168] "Request Body" body=""
	I0210 12:22:59.548084    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-w8rr9
	I0210 12:22:59.548084    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:59.548084    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:59.548084    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:59.552528    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:22:59.552528    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:59.552528    5644 round_trippers.go:587]     Audit-Id: 85b3ded3-e422-459c-8291-c5bc17372e25
	I0210 12:22:59.552528    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:59.552528    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:59.552528    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:59.552528    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:59.552528    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:59 GMT
	I0210 12:22:59.552528    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  84 29 0a 99 19 0a 18 63  6f 72 65 64 6e 73 2d 36  |.).....coredns-6|
		00000020  36 38 64 36 62 66 39 62  63 2d 77 38 72 72 39 12  |68d6bf9bc-w8rr9.|
		00000030  13 63 6f 72 65 64 6e 73  2d 36 36 38 64 36 62 66  |.coredns-668d6bf|
		00000040  39 62 63 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |9bc-..kube-syste|
		00000050  6d 22 00 2a 24 65 34 35  61 33 37 62 66 2d 65 37  |m".*$e45a37bf-e7|
		00000060  64 61 2d 34 31 32 39 2d  62 62 37 65 2d 38 62 65  |da-4129-bb7e-8be|
		00000070  37 64 62 65 39 33 65 30  39 32 04 31 38 32 30 38  |7dbe93e092.18208|
		00000080  00 42 08 08 92 d4 a7 bd  06 10 00 5a 13 0a 07 6b  |.B.........Z...k|
		00000090  38 73 2d 61 70 70 12 08  6b 75 62 65 2d 64 6e 73  |8s-app..kube-dns|
		000000a0  5a 1f 0a 11 70 6f 64 2d  74 65 6d 70 6c 61 74 65  |Z...pod-template|
		000000b0  2d 68 61 73 68 12 0a 36  36 38 64 36 62 66 39 62  |-hash..668d6bf9b|
		000000c0  63 6a 53 0a 0a 52 65 70  6c 69 63 61 53 65 74 1a  |cjS..ReplicaSet [truncated 25040 chars]
	 >
	I0210 12:22:59.553304    5644 type.go:168] "Request Body" body=""
	I0210 12:22:59.553377    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:22:59.553443    5644 round_trippers.go:476] Request Headers:
	I0210 12:22:59.553443    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:22:59.553470    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:22:59.557082    5644 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0210 12:22:59.557082    5644 round_trippers.go:584] Response Headers:
	I0210 12:22:59.557082    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:22:59.557082    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:22:59.557082    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:22:59.557082    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:22:59.557082    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:22:59 GMT
	I0210 12:22:59.557161    5644 round_trippers.go:587]     Audit-Id: 7367d7c5-a831-4f7a-8085-f31c40017038
	I0210 12:22:59.557466    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d4 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 39  33 35 38 00 42 08 08 86  |1b262.19358.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22276 chars]
	 >
	I0210 12:22:59.557635    5644 pod_ready.go:103] pod "coredns-668d6bf9bc-w8rr9" in "kube-system" namespace has status "Ready":"False"
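	[editor's note] The GET / "Request Headers" / "Response Status" triplets filling this log come from client-go's round-tripper debug tracing (round_trippers.go), which is emitted only at high klog verbosity (roughly -v=7 and above for headers). As a rough, stdlib-only illustration of the pattern — not client-go's actual implementation, and loggingTransport is a made-up name — a wrapper producing similar output could look like:

	    package main

	    import (
	    	"fmt"
	    	"net/http"
	    	"time"
	    )

	    // loggingTransport mimics the round_trippers-style trace seen above:
	    // request line, request headers, then response status with latency.
	    type loggingTransport struct{ next http.RoundTripper }

	    func (t loggingTransport) RoundTrip(req *http.Request) (*http.Response, error) {
	    	fmt.Printf("%s %s\n", req.Method, req.URL)
	    	fmt.Println("Request Headers:")
	    	for k, v := range req.Header {
	    		fmt.Printf("    %s: %s\n", k, v[0])
	    	}
	    	start := time.Now()
	    	resp, err := t.next.RoundTrip(req)
	    	if err != nil {
	    		return nil, err
	    	}
	    	fmt.Printf("Response Status: %s in %d milliseconds\n",
	    		resp.Status, time.Since(start).Milliseconds())
	    	return resp, nil
	    }

	    func main() {
	    	client := &http.Client{Transport: loggingTransport{http.DefaultTransport}}
	    	req, _ := http.NewRequest("GET", "https://example.com/", nil)
	    	req.Header.Set("Accept", "application/vnd.kubernetes.protobuf,application/json")
	    	client.Do(req) // response body and error handling omitted in this sketch
	    }

	client-go chains such a wrapper around the real transport when the verbosity threshold is met, which is why every poll iteration below repeats the full header set and an Audit-Id that changes per request.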
	I0210 12:23:00.048431    5644 type.go:168] "Request Body" body=""
	I0210 12:23:00.048957    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-w8rr9
	I0210 12:23:00.049043    5644 round_trippers.go:476] Request Headers:
	I0210 12:23:00.049043    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:23:00.049043    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:23:00.053272    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:23:00.053272    5644 round_trippers.go:584] Response Headers:
	I0210 12:23:00.053272    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:23:00.053272    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:23:00.053272    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:23:00 GMT
	I0210 12:23:00.053272    5644 round_trippers.go:587]     Audit-Id: b2924ee7-ac41-44d2-bee2-c53b53148e28
	I0210 12:23:00.053272    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:23:00.053272    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:23:00.053665    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  84 29 0a 99 19 0a 18 63  6f 72 65 64 6e 73 2d 36  |.).....coredns-6|
		00000020  36 38 64 36 62 66 39 62  63 2d 77 38 72 72 39 12  |68d6bf9bc-w8rr9.|
		00000030  13 63 6f 72 65 64 6e 73  2d 36 36 38 64 36 62 66  |.coredns-668d6bf|
		00000040  39 62 63 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |9bc-..kube-syste|
		00000050  6d 22 00 2a 24 65 34 35  61 33 37 62 66 2d 65 37  |m".*$e45a37bf-e7|
		00000060  64 61 2d 34 31 32 39 2d  62 62 37 65 2d 38 62 65  |da-4129-bb7e-8be|
		00000070  37 64 62 65 39 33 65 30  39 32 04 31 38 32 30 38  |7dbe93e092.18208|
		00000080  00 42 08 08 92 d4 a7 bd  06 10 00 5a 13 0a 07 6b  |.B.........Z...k|
		00000090  38 73 2d 61 70 70 12 08  6b 75 62 65 2d 64 6e 73  |8s-app..kube-dns|
		000000a0  5a 1f 0a 11 70 6f 64 2d  74 65 6d 70 6c 61 74 65  |Z...pod-template|
		000000b0  2d 68 61 73 68 12 0a 36  36 38 64 36 62 66 39 62  |-hash..668d6bf9b|
		000000c0  63 6a 53 0a 0a 52 65 70  6c 69 63 61 53 65 74 1a  |cjS..ReplicaSet [truncated 25040 chars]
	 >
	I0210 12:23:00.053854    5644 type.go:168] "Request Body" body=""
	I0210 12:23:00.053976    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:23:00.053976    5644 round_trippers.go:476] Request Headers:
	I0210 12:23:00.053976    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:23:00.053976    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:23:00.057437    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:23:00.057437    5644 round_trippers.go:584] Response Headers:
	I0210 12:23:00.057437    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:23:00 GMT
	I0210 12:23:00.057437    5644 round_trippers.go:587]     Audit-Id: 44e1a383-1ee2-4e91-bd0e-33563e002b77
	I0210 12:23:00.057437    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:23:00.057437    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:23:00.057528    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:23:00.057528    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:23:00.057619    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d4 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 39  33 35 38 00 42 08 08 86  |1b262.19358.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22276 chars]
	 >
	I0210 12:23:00.548490    5644 type.go:168] "Request Body" body=""
	I0210 12:23:00.548490    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-w8rr9
	I0210 12:23:00.548490    5644 round_trippers.go:476] Request Headers:
	I0210 12:23:00.548490    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:23:00.548490    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:23:00.552750    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:23:00.553071    5644 round_trippers.go:584] Response Headers:
	I0210 12:23:00.553071    5644 round_trippers.go:587]     Audit-Id: ba0711d8-a55a-43eb-9f6c-b9cba11b3cbb
	I0210 12:23:00.553071    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:23:00.553071    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:23:00.553071    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:23:00.553071    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:23:00.553071    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:23:00 GMT
	I0210 12:23:00.553426    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  84 29 0a 99 19 0a 18 63  6f 72 65 64 6e 73 2d 36  |.).....coredns-6|
		00000020  36 38 64 36 62 66 39 62  63 2d 77 38 72 72 39 12  |68d6bf9bc-w8rr9.|
		00000030  13 63 6f 72 65 64 6e 73  2d 36 36 38 64 36 62 66  |.coredns-668d6bf|
		00000040  39 62 63 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |9bc-..kube-syste|
		00000050  6d 22 00 2a 24 65 34 35  61 33 37 62 66 2d 65 37  |m".*$e45a37bf-e7|
		00000060  64 61 2d 34 31 32 39 2d  62 62 37 65 2d 38 62 65  |da-4129-bb7e-8be|
		00000070  37 64 62 65 39 33 65 30  39 32 04 31 38 32 30 38  |7dbe93e092.18208|
		00000080  00 42 08 08 92 d4 a7 bd  06 10 00 5a 13 0a 07 6b  |.B.........Z...k|
		00000090  38 73 2d 61 70 70 12 08  6b 75 62 65 2d 64 6e 73  |8s-app..kube-dns|
		000000a0  5a 1f 0a 11 70 6f 64 2d  74 65 6d 70 6c 61 74 65  |Z...pod-template|
		000000b0  2d 68 61 73 68 12 0a 36  36 38 64 36 62 66 39 62  |-hash..668d6bf9b|
		000000c0  63 6a 53 0a 0a 52 65 70  6c 69 63 61 53 65 74 1a  |cjS..ReplicaSet [truncated 25040 chars]
	 >
	I0210 12:23:00.553802    5644 type.go:168] "Request Body" body=""
	I0210 12:23:00.553802    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:23:00.553802    5644 round_trippers.go:476] Request Headers:
	I0210 12:23:00.553802    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:23:00.553915    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:23:00.556367    5644 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0210 12:23:00.556367    5644 round_trippers.go:584] Response Headers:
	I0210 12:23:00.556367    5644 round_trippers.go:587]     Audit-Id: d80e7295-a8f4-4061-9209-94ac3405dd42
	I0210 12:23:00.556367    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:23:00.556367    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:23:00.556367    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:23:00.556367    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:23:00.556367    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:23:00 GMT
	I0210 12:23:00.556777    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d4 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 39  33 35 38 00 42 08 08 86  |1b262.19358.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22276 chars]
	 >
	I0210 12:23:01.048073    5644 type.go:168] "Request Body" body=""
	I0210 12:23:01.048073    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-w8rr9
	I0210 12:23:01.048073    5644 round_trippers.go:476] Request Headers:
	I0210 12:23:01.048073    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:23:01.048073    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:23:01.051486    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:23:01.052195    5644 round_trippers.go:584] Response Headers:
	I0210 12:23:01.052195    5644 round_trippers.go:587]     Audit-Id: 99f790b5-a737-4068-8d56-0f309f4051cc
	I0210 12:23:01.052195    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:23:01.052195    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:23:01.052195    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:23:01.052195    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:23:01.052195    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:23:01 GMT
	I0210 12:23:01.052773    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  84 29 0a 99 19 0a 18 63  6f 72 65 64 6e 73 2d 36  |.).....coredns-6|
		00000020  36 38 64 36 62 66 39 62  63 2d 77 38 72 72 39 12  |68d6bf9bc-w8rr9.|
		00000030  13 63 6f 72 65 64 6e 73  2d 36 36 38 64 36 62 66  |.coredns-668d6bf|
		00000040  39 62 63 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |9bc-..kube-syste|
		00000050  6d 22 00 2a 24 65 34 35  61 33 37 62 66 2d 65 37  |m".*$e45a37bf-e7|
		00000060  64 61 2d 34 31 32 39 2d  62 62 37 65 2d 38 62 65  |da-4129-bb7e-8be|
		00000070  37 64 62 65 39 33 65 30  39 32 04 31 38 32 30 38  |7dbe93e092.18208|
		00000080  00 42 08 08 92 d4 a7 bd  06 10 00 5a 13 0a 07 6b  |.B.........Z...k|
		00000090  38 73 2d 61 70 70 12 08  6b 75 62 65 2d 64 6e 73  |8s-app..kube-dns|
		000000a0  5a 1f 0a 11 70 6f 64 2d  74 65 6d 70 6c 61 74 65  |Z...pod-template|
		000000b0  2d 68 61 73 68 12 0a 36  36 38 64 36 62 66 39 62  |-hash..668d6bf9b|
		000000c0  63 6a 53 0a 0a 52 65 70  6c 69 63 61 53 65 74 1a  |cjS..ReplicaSet [truncated 25040 chars]
	 >
	I0210 12:23:01.052888    5644 type.go:168] "Request Body" body=""
	I0210 12:23:01.052888    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:23:01.052888    5644 round_trippers.go:476] Request Headers:
	I0210 12:23:01.052888    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:23:01.052888    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:23:01.056272    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:23:01.056408    5644 round_trippers.go:584] Response Headers:
	I0210 12:23:01.056408    5644 round_trippers.go:587]     Audit-Id: 9f2c0f62-34b9-44ec-996e-997da25a3a0c
	I0210 12:23:01.056408    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:23:01.056408    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:23:01.056408    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:23:01.056492    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:23:01.056492    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:23:01 GMT
	I0210 12:23:01.056548    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d4 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 39  33 35 38 00 42 08 08 86  |1b262.19358.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22276 chars]
	 >
	I0210 12:23:01.549102    5644 type.go:168] "Request Body" body=""
	I0210 12:23:01.549102    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-w8rr9
	I0210 12:23:01.549102    5644 round_trippers.go:476] Request Headers:
	I0210 12:23:01.549102    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:23:01.549102    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:23:01.553282    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:23:01.553282    5644 round_trippers.go:584] Response Headers:
	I0210 12:23:01.553282    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:23:01.553282    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:23:01.553282    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:23:01.553282    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:23:01 GMT
	I0210 12:23:01.553282    5644 round_trippers.go:587]     Audit-Id: 5c2d8b73-fd91-4bfc-8b62-d239e9d0332c
	I0210 12:23:01.553282    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:23:01.553575    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  84 29 0a 99 19 0a 18 63  6f 72 65 64 6e 73 2d 36  |.).....coredns-6|
		00000020  36 38 64 36 62 66 39 62  63 2d 77 38 72 72 39 12  |68d6bf9bc-w8rr9.|
		00000030  13 63 6f 72 65 64 6e 73  2d 36 36 38 64 36 62 66  |.coredns-668d6bf|
		00000040  39 62 63 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |9bc-..kube-syste|
		00000050  6d 22 00 2a 24 65 34 35  61 33 37 62 66 2d 65 37  |m".*$e45a37bf-e7|
		00000060  64 61 2d 34 31 32 39 2d  62 62 37 65 2d 38 62 65  |da-4129-bb7e-8be|
		00000070  37 64 62 65 39 33 65 30  39 32 04 31 38 32 30 38  |7dbe93e092.18208|
		00000080  00 42 08 08 92 d4 a7 bd  06 10 00 5a 13 0a 07 6b  |.B.........Z...k|
		00000090  38 73 2d 61 70 70 12 08  6b 75 62 65 2d 64 6e 73  |8s-app..kube-dns|
		000000a0  5a 1f 0a 11 70 6f 64 2d  74 65 6d 70 6c 61 74 65  |Z...pod-template|
		000000b0  2d 68 61 73 68 12 0a 36  36 38 64 36 62 66 39 62  |-hash..668d6bf9b|
		000000c0  63 6a 53 0a 0a 52 65 70  6c 69 63 61 53 65 74 1a  |cjS..ReplicaSet [truncated 25040 chars]
	 >
	I0210 12:23:01.554223    5644 type.go:168] "Request Body" body=""
	I0210 12:23:01.554359    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:23:01.554359    5644 round_trippers.go:476] Request Headers:
	I0210 12:23:01.554359    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:23:01.554359    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:23:01.557961    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:23:01.557961    5644 round_trippers.go:584] Response Headers:
	I0210 12:23:01.558178    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:23:01.558178    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:23:01.558178    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:23:01.558178    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:23:01.558178    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:23:01 GMT
	I0210 12:23:01.558178    5644 round_trippers.go:587]     Audit-Id: 3589bece-5381-4946-a204-1300d84dbf2f
	I0210 12:23:01.558548    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d4 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 39  33 35 38 00 42 08 08 86  |1b262.19358.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22276 chars]
	 >
	I0210 12:23:01.558723    5644 pod_ready.go:103] pod "coredns-668d6bf9bc-w8rr9" in "kube-system" namespace has status "Ready":"False"
	I0210 12:23:02.048331    5644 type.go:168] "Request Body" body=""
	I0210 12:23:02.048331    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-w8rr9
	I0210 12:23:02.048331    5644 round_trippers.go:476] Request Headers:
	I0210 12:23:02.048331    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:23:02.048331    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:23:02.055014    5644 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0210 12:23:02.055073    5644 round_trippers.go:584] Response Headers:
	I0210 12:23:02.055073    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:23:02.055073    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:23:02.055073    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:23:02.055073    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:23:02 GMT
	I0210 12:23:02.055134    5644 round_trippers.go:587]     Audit-Id: b165bbca-384a-431b-8830-0452153fef67
	I0210 12:23:02.055134    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:23:02.055428    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  84 29 0a 99 19 0a 18 63  6f 72 65 64 6e 73 2d 36  |.).....coredns-6|
		00000020  36 38 64 36 62 66 39 62  63 2d 77 38 72 72 39 12  |68d6bf9bc-w8rr9.|
		00000030  13 63 6f 72 65 64 6e 73  2d 36 36 38 64 36 62 66  |.coredns-668d6bf|
		00000040  39 62 63 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |9bc-..kube-syste|
		00000050  6d 22 00 2a 24 65 34 35  61 33 37 62 66 2d 65 37  |m".*$e45a37bf-e7|
		00000060  64 61 2d 34 31 32 39 2d  62 62 37 65 2d 38 62 65  |da-4129-bb7e-8be|
		00000070  37 64 62 65 39 33 65 30  39 32 04 31 38 32 30 38  |7dbe93e092.18208|
		00000080  00 42 08 08 92 d4 a7 bd  06 10 00 5a 13 0a 07 6b  |.B.........Z...k|
		00000090  38 73 2d 61 70 70 12 08  6b 75 62 65 2d 64 6e 73  |8s-app..kube-dns|
		000000a0  5a 1f 0a 11 70 6f 64 2d  74 65 6d 70 6c 61 74 65  |Z...pod-template|
		000000b0  2d 68 61 73 68 12 0a 36  36 38 64 36 62 66 39 62  |-hash..668d6bf9b|
		000000c0  63 6a 53 0a 0a 52 65 70  6c 69 63 61 53 65 74 1a  |cjS..ReplicaSet [truncated 25040 chars]
	 >
	I0210 12:23:02.055731    5644 type.go:168] "Request Body" body=""
	I0210 12:23:02.055786    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:23:02.055786    5644 round_trippers.go:476] Request Headers:
	I0210 12:23:02.055786    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:23:02.055786    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:23:02.058025    5644 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0210 12:23:02.058025    5644 round_trippers.go:584] Response Headers:
	I0210 12:23:02.058025    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:23:02.058025    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:23:02.058025    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:23:02.058025    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:23:02 GMT
	I0210 12:23:02.058025    5644 round_trippers.go:587]     Audit-Id: c2bb9363-a8c4-49dd-8e5a-232493e670b0
	I0210 12:23:02.058025    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:23:02.058025    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d4 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 39  33 35 38 00 42 08 08 86  |1b262.19358.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22276 chars]
	 >
	I0210 12:23:02.549395    5644 type.go:168] "Request Body" body=""
	I0210 12:23:02.549395    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-w8rr9
	I0210 12:23:02.549395    5644 round_trippers.go:476] Request Headers:
	I0210 12:23:02.549395    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:23:02.549395    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:23:02.553814    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:23:02.553921    5644 round_trippers.go:584] Response Headers:
	I0210 12:23:02.553921    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:23:02 GMT
	I0210 12:23:02.553921    5644 round_trippers.go:587]     Audit-Id: 99203ebc-3386-4ade-8509-46b2e9dcd4b6
	I0210 12:23:02.553921    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:23:02.554009    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:23:02.554009    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:23:02.554009    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:23:02.554475    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  84 29 0a 99 19 0a 18 63  6f 72 65 64 6e 73 2d 36  |.).....coredns-6|
		00000020  36 38 64 36 62 66 39 62  63 2d 77 38 72 72 39 12  |68d6bf9bc-w8rr9.|
		00000030  13 63 6f 72 65 64 6e 73  2d 36 36 38 64 36 62 66  |.coredns-668d6bf|
		00000040  39 62 63 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |9bc-..kube-syste|
		00000050  6d 22 00 2a 24 65 34 35  61 33 37 62 66 2d 65 37  |m".*$e45a37bf-e7|
		00000060  64 61 2d 34 31 32 39 2d  62 62 37 65 2d 38 62 65  |da-4129-bb7e-8be|
		00000070  37 64 62 65 39 33 65 30  39 32 04 31 38 32 30 38  |7dbe93e092.18208|
		00000080  00 42 08 08 92 d4 a7 bd  06 10 00 5a 13 0a 07 6b  |.B.........Z...k|
		00000090  38 73 2d 61 70 70 12 08  6b 75 62 65 2d 64 6e 73  |8s-app..kube-dns|
		000000a0  5a 1f 0a 11 70 6f 64 2d  74 65 6d 70 6c 61 74 65  |Z...pod-template|
		000000b0  2d 68 61 73 68 12 0a 36  36 38 64 36 62 66 39 62  |-hash..668d6bf9b|
		000000c0  63 6a 53 0a 0a 52 65 70  6c 69 63 61 53 65 74 1a  |cjS..ReplicaSet [truncated 25040 chars]
	 >
	I0210 12:23:02.554992    5644 type.go:168] "Request Body" body=""
	I0210 12:23:02.555090    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:23:02.555090    5644 round_trippers.go:476] Request Headers:
	I0210 12:23:02.555148    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:23:02.555148    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:23:02.557418    5644 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0210 12:23:02.557418    5644 round_trippers.go:584] Response Headers:
	I0210 12:23:02.557418    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:23:02.558202    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:23:02.558202    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:23:02 GMT
	I0210 12:23:02.558202    5644 round_trippers.go:587]     Audit-Id: 185e3041-add6-4644-a906-2309767d4b3b
	I0210 12:23:02.558202    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:23:02.558202    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:23:02.558504    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d4 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 39  33 35 38 00 42 08 08 86  |1b262.19358.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22276 chars]
	 >
	I0210 12:23:03.049113    5644 type.go:168] "Request Body" body=""
	I0210 12:23:03.049113    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-w8rr9
	I0210 12:23:03.049113    5644 round_trippers.go:476] Request Headers:
	I0210 12:23:03.049113    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:23:03.049113    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:23:03.052809    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:23:03.052877    5644 round_trippers.go:584] Response Headers:
	I0210 12:23:03.052877    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:23:03.053107    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:23:03 GMT
	I0210 12:23:03.053107    5644 round_trippers.go:587]     Audit-Id: a93c8ec3-a514-47f4-8f37-701e970f4b3e
	I0210 12:23:03.053107    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:23:03.053168    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:23:03.053168    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:23:03.053168    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  84 29 0a 99 19 0a 18 63  6f 72 65 64 6e 73 2d 36  |.).....coredns-6|
		00000020  36 38 64 36 62 66 39 62  63 2d 77 38 72 72 39 12  |68d6bf9bc-w8rr9.|
		00000030  13 63 6f 72 65 64 6e 73  2d 36 36 38 64 36 62 66  |.coredns-668d6bf|
		00000040  39 62 63 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |9bc-..kube-syste|
		00000050  6d 22 00 2a 24 65 34 35  61 33 37 62 66 2d 65 37  |m".*$e45a37bf-e7|
		00000060  64 61 2d 34 31 32 39 2d  62 62 37 65 2d 38 62 65  |da-4129-bb7e-8be|
		00000070  37 64 62 65 39 33 65 30  39 32 04 31 38 32 30 38  |7dbe93e092.18208|
		00000080  00 42 08 08 92 d4 a7 bd  06 10 00 5a 13 0a 07 6b  |.B.........Z...k|
		00000090  38 73 2d 61 70 70 12 08  6b 75 62 65 2d 64 6e 73  |8s-app..kube-dns|
		000000a0  5a 1f 0a 11 70 6f 64 2d  74 65 6d 70 6c 61 74 65  |Z...pod-template|
		000000b0  2d 68 61 73 68 12 0a 36  36 38 64 36 62 66 39 62  |-hash..668d6bf9b|
		000000c0  63 6a 53 0a 0a 52 65 70  6c 69 63 61 53 65 74 1a  |cjS..ReplicaSet [truncated 25040 chars]
	 >
	I0210 12:23:03.053865    5644 type.go:168] "Request Body" body=""
	I0210 12:23:03.053865    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:23:03.053865    5644 round_trippers.go:476] Request Headers:
	I0210 12:23:03.053865    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:23:03.053865    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:23:03.056691    5644 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0210 12:23:03.056691    5644 round_trippers.go:584] Response Headers:
	I0210 12:23:03.056747    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:23:03.056747    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:23:03.056773    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:23:03 GMT
	I0210 12:23:03.056773    5644 round_trippers.go:587]     Audit-Id: 0820e2fa-eff9-4ab2-a471-23bb1bf57731
	I0210 12:23:03.056773    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:23:03.056773    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:23:03.056773    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d4 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 39  33 35 38 00 42 08 08 86  |1b262.19358.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22276 chars]
	 >
	I0210 12:23:03.549258    5644 type.go:168] "Request Body" body=""
	I0210 12:23:03.549258    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-w8rr9
	I0210 12:23:03.549258    5644 round_trippers.go:476] Request Headers:
	I0210 12:23:03.549258    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:23:03.549258    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:23:03.553755    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:23:03.553755    5644 round_trippers.go:584] Response Headers:
	I0210 12:23:03.553755    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:23:03 GMT
	I0210 12:23:03.553755    5644 round_trippers.go:587]     Audit-Id: 6dfa9630-f0a4-4920-9ff6-d3cf44fb5df0
	I0210 12:23:03.553755    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:23:03.553755    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:23:03.553755    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:23:03.553755    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:23:03.554306    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  84 29 0a 99 19 0a 18 63  6f 72 65 64 6e 73 2d 36  |.).....coredns-6|
		00000020  36 38 64 36 62 66 39 62  63 2d 77 38 72 72 39 12  |68d6bf9bc-w8rr9.|
		00000030  13 63 6f 72 65 64 6e 73  2d 36 36 38 64 36 62 66  |.coredns-668d6bf|
		00000040  39 62 63 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |9bc-..kube-syste|
		00000050  6d 22 00 2a 24 65 34 35  61 33 37 62 66 2d 65 37  |m".*$e45a37bf-e7|
		00000060  64 61 2d 34 31 32 39 2d  62 62 37 65 2d 38 62 65  |da-4129-bb7e-8be|
		00000070  37 64 62 65 39 33 65 30  39 32 04 31 38 32 30 38  |7dbe93e092.18208|
		00000080  00 42 08 08 92 d4 a7 bd  06 10 00 5a 13 0a 07 6b  |.B.........Z...k|
		00000090  38 73 2d 61 70 70 12 08  6b 75 62 65 2d 64 6e 73  |8s-app..kube-dns|
		000000a0  5a 1f 0a 11 70 6f 64 2d  74 65 6d 70 6c 61 74 65  |Z...pod-template|
		000000b0  2d 68 61 73 68 12 0a 36  36 38 64 36 62 66 39 62  |-hash..668d6bf9b|
		000000c0  63 6a 53 0a 0a 52 65 70  6c 69 63 61 53 65 74 1a  |cjS..ReplicaSet [truncated 25040 chars]
	 >
	I0210 12:23:03.554519    5644 type.go:168] "Request Body" body=""
	I0210 12:23:03.554590    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:23:03.554590    5644 round_trippers.go:476] Request Headers:
	I0210 12:23:03.554590    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:23:03.554655    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:23:03.558278    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:23:03.558278    5644 round_trippers.go:584] Response Headers:
	I0210 12:23:03.558278    5644 round_trippers.go:587]     Audit-Id: 6522ed28-f617-4bec-9aa3-f26b5da41784
	I0210 12:23:03.558278    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:23:03.558278    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:23:03.558278    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:23:03.558278    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:23:03.558278    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:23:03 GMT
	I0210 12:23:03.558278    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d4 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 39  33 35 38 00 42 08 08 86  |1b262.19358.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22276 chars]
	 >
	I0210 12:23:04.049143    5644 type.go:168] "Request Body" body=""
	I0210 12:23:04.049143    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-w8rr9
	I0210 12:23:04.049143    5644 round_trippers.go:476] Request Headers:
	I0210 12:23:04.049143    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:23:04.049143    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:23:04.054911    5644 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 12:23:04.054978    5644 round_trippers.go:584] Response Headers:
	I0210 12:23:04.054978    5644 round_trippers.go:587]     Audit-Id: 7886bdef-47bd-418b-9565-9237dfd751b3
	I0210 12:23:04.054978    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:23:04.055052    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:23:04.055052    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:23:04.055052    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:23:04.055052    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:23:04 GMT
	I0210 12:23:04.055869    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  84 29 0a 99 19 0a 18 63  6f 72 65 64 6e 73 2d 36  |.).....coredns-6|
		00000020  36 38 64 36 62 66 39 62  63 2d 77 38 72 72 39 12  |68d6bf9bc-w8rr9.|
		00000030  13 63 6f 72 65 64 6e 73  2d 36 36 38 64 36 62 66  |.coredns-668d6bf|
		00000040  39 62 63 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |9bc-..kube-syste|
		00000050  6d 22 00 2a 24 65 34 35  61 33 37 62 66 2d 65 37  |m".*$e45a37bf-e7|
		00000060  64 61 2d 34 31 32 39 2d  62 62 37 65 2d 38 62 65  |da-4129-bb7e-8be|
		00000070  37 64 62 65 39 33 65 30  39 32 04 31 38 32 30 38  |7dbe93e092.18208|
		00000080  00 42 08 08 92 d4 a7 bd  06 10 00 5a 13 0a 07 6b  |.B.........Z...k|
		00000090  38 73 2d 61 70 70 12 08  6b 75 62 65 2d 64 6e 73  |8s-app..kube-dns|
		000000a0  5a 1f 0a 11 70 6f 64 2d  74 65 6d 70 6c 61 74 65  |Z...pod-template|
		000000b0  2d 68 61 73 68 12 0a 36  36 38 64 36 62 66 39 62  |-hash..668d6bf9b|
		000000c0  63 6a 53 0a 0a 52 65 70  6c 69 63 61 53 65 74 1a  |cjS..ReplicaSet [truncated 25040 chars]
	 >
	I0210 12:23:04.056138    5644 type.go:168] "Request Body" body=""
	I0210 12:23:04.056204    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:23:04.056289    5644 round_trippers.go:476] Request Headers:
	I0210 12:23:04.056289    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:23:04.056289    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:23:04.060207    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:23:04.060289    5644 round_trippers.go:584] Response Headers:
	I0210 12:23:04.060289    5644 round_trippers.go:587]     Audit-Id: 6ca9fb2f-3a05-410f-a192-803bcdeb324f
	I0210 12:23:04.060367    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:23:04.060367    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:23:04.060367    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:23:04.060367    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:23:04.060367    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:23:04 GMT
	I0210 12:23:04.061176    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d4 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 39  33 35 38 00 42 08 08 86  |1b262.19358.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22276 chars]
	 >
	I0210 12:23:04.061418    5644 pod_ready.go:103] pod "coredns-668d6bf9bc-w8rr9" in "kube-system" namespace has status "Ready":"False"
	I0210 12:23:04.548642    5644 type.go:168] "Request Body" body=""
	I0210 12:23:04.549246    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-w8rr9
	I0210 12:23:04.549246    5644 round_trippers.go:476] Request Headers:
	I0210 12:23:04.549246    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:23:04.549246    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:23:04.556436    5644 round_trippers.go:581] Response Status: 200 OK in 7 milliseconds
	I0210 12:23:04.556436    5644 round_trippers.go:584] Response Headers:
	I0210 12:23:04.556436    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:23:04.556436    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:23:04.556436    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:23:04.556436    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:23:04 GMT
	I0210 12:23:04.556436    5644 round_trippers.go:587]     Audit-Id: d23bfbcf-b351-4eb4-a187-c001aaebdcb4
	I0210 12:23:04.556436    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:23:04.556966    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  c5 28 0a af 19 0a 18 63  6f 72 65 64 6e 73 2d 36  |.(.....coredns-6|
		00000020  36 38 64 36 62 66 39 62  63 2d 77 38 72 72 39 12  |68d6bf9bc-w8rr9.|
		00000030  13 63 6f 72 65 64 6e 73  2d 36 36 38 64 36 62 66  |.coredns-668d6bf|
		00000040  39 62 63 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |9bc-..kube-syste|
		00000050  6d 22 00 2a 24 65 34 35  61 33 37 62 66 2d 65 37  |m".*$e45a37bf-e7|
		00000060  64 61 2d 34 31 32 39 2d  62 62 37 65 2d 38 62 65  |da-4129-bb7e-8be|
		00000070  37 64 62 65 39 33 65 30  39 32 04 31 39 37 32 38  |7dbe93e092.19728|
		00000080  00 42 08 08 92 d4 a7 bd  06 10 00 5a 13 0a 07 6b  |.B.........Z...k|
		00000090  38 73 2d 61 70 70 12 08  6b 75 62 65 2d 64 6e 73  |8s-app..kube-dns|
		000000a0  5a 1f 0a 11 70 6f 64 2d  74 65 6d 70 6c 61 74 65  |Z...pod-template|
		000000b0  2d 68 61 73 68 12 0a 36  36 38 64 36 62 66 39 62  |-hash..668d6bf9b|
		000000c0  63 6a 53 0a 0a 52 65 70  6c 69 63 61 53 65 74 1a  |cjS..ReplicaSet [truncated 24725 chars]
	 >
	I0210 12:23:04.557200    5644 type.go:168] "Request Body" body=""
	I0210 12:23:04.557200    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:23:04.557200    5644 round_trippers.go:476] Request Headers:
	I0210 12:23:04.557200    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:23:04.557200    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:23:04.561796    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:23:04.561796    5644 round_trippers.go:584] Response Headers:
	I0210 12:23:04.561796    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:23:04.561796    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:23:04.561796    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:23:04.561796    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:23:04.561796    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:23:04 GMT
	I0210 12:23:04.561796    5644 round_trippers.go:587]     Audit-Id: 32ee6243-b96d-4490-8358-1915b6847e1f
	I0210 12:23:04.561796    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d4 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 39  33 35 38 00 42 08 08 86  |1b262.19358.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22276 chars]
	 >
	I0210 12:23:04.561796    5644 pod_ready.go:93] pod "coredns-668d6bf9bc-w8rr9" in "kube-system" namespace has status "Ready":"True"
	I0210 12:23:04.561796    5644 pod_ready.go:82] duration metric: took 15.5139075s for pod "coredns-668d6bf9bc-w8rr9" in "kube-system" namespace to be "Ready" ...
	I0210 12:23:04.561796    5644 pod_ready.go:79] waiting up to 6m0s for pod "etcd-multinode-032400" in "kube-system" namespace to be "Ready" ...
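	[editor's note] The loop driving all of the requests above is minikube's pod_ready helper: fetch the pod, inspect its Ready condition, and re-fetch on a roughly 500ms cadence (requests land at ~.048 and ~.548 of each second) until the condition flips or the 6m0s budget expires — coredns took 15.5s here, while etcd below reports Ready on its first probe. A minimal client-go sketch of that shape, assuming a kubernetes.Interface client; WaitPodReady is a hypothetical name, not minikube's actual pod_ready.go code:

	    package podready

	    import (
	    	"context"
	    	"time"

	    	corev1 "k8s.io/api/core/v1"
	    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	    	"k8s.io/apimachinery/pkg/util/wait"
	    	"k8s.io/client-go/kubernetes"
	    )

	    // WaitPodReady polls the API server until the named pod reports the
	    // Ready condition as True, or the timeout (6m0s in the log) elapses.
	    func WaitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	    	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
	    		func(ctx context.Context) (bool, error) {
	    			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	    			if err != nil {
	    				return false, nil // treat transient API errors as "not ready yet"
	    			}
	    			for _, c := range pod.Status.Conditions {
	    				if c.Type == corev1.PodReady {
	    					return c.Status == corev1.ConditionTrue, nil
	    				}
	    			}
	    			return false, nil
	    		})
	    }

	The paired GET of /api/v1/nodes/multinode-032400 after every pod fetch in the log is the same helper re-checking the node the pod is scheduled on each iteration; the sketch above omits that step.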
	I0210 12:23:04.561796    5644 type.go:168] "Request Body" body=""
	I0210 12:23:04.562900    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-032400
	I0210 12:23:04.562900    5644 round_trippers.go:476] Request Headers:
	I0210 12:23:04.562900    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:23:04.562900    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:23:04.566121    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:23:04.566121    5644 round_trippers.go:584] Response Headers:
	I0210 12:23:04.566121    5644 round_trippers.go:587]     Audit-Id: bb269c72-e21c-4d2e-9998-0c24d1f25772
	I0210 12:23:04.566121    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:23:04.566121    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:23:04.566121    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:23:04.566121    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:23:04.566121    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:23:04 GMT
	I0210 12:23:04.566121    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  81 2c 0a 9f 1a 0a 15 65  74 63 64 2d 6d 75 6c 74  |.,.....etcd-mult|
		00000020  69 6e 6f 64 65 2d 30 33  32 34 30 30 12 00 1a 0b  |inode-032400....|
		00000030  6b 75 62 65 2d 73 79 73  74 65 6d 22 00 2a 24 32  |kube-system".*$2|
		00000040  36 64 34 31 31 30 66 2d  39 61 33 39 2d 34 38 64  |6d4110f-9a39-48d|
		00000050  65 2d 61 34 33 33 2d 35  36 37 61 37 35 37 38 39  |e-a433-567a75789|
		00000060  62 65 30 32 04 31 38 37  30 38 00 42 08 08 e6 de  |be02.18708.B....|
		00000070  a7 bd 06 10 00 5a 11 0a  09 63 6f 6d 70 6f 6e 65  |.....Z...compone|
		00000080  6e 74 12 04 65 74 63 64  5a 15 0a 04 74 69 65 72  |nt..etcdZ...tier|
		00000090  12 0d 63 6f 6e 74 72 6f  6c 2d 70 6c 61 6e 65 62  |..control-planeb|
		000000a0  4f 0a 30 6b 75 62 65 61  64 6d 2e 6b 75 62 65 72  |O.0kubeadm.kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 65 74 63 64 2e 61 64  |netes.io/etcd.ad|
		000000c0  76 65 72 74 69 73 65 2d  63 6c 69 65 6e 74 2d 75  |vertise-client- [truncated 26933 chars]
	 >
	I0210 12:23:04.566121    5644 type.go:168] "Request Body" body=""
	I0210 12:23:04.567130    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:23:04.567194    5644 round_trippers.go:476] Request Headers:
	I0210 12:23:04.567194    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:23:04.567194    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:23:04.569515    5644 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0210 12:23:04.569515    5644 round_trippers.go:584] Response Headers:
	I0210 12:23:04.569515    5644 round_trippers.go:587]     Audit-Id: a10a6b29-83ed-4a67-9a28-0c583aa4201d
	I0210 12:23:04.569515    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:23:04.569515    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:23:04.569515    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:23:04.569515    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:23:04.569515    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:23:04 GMT
	I0210 12:23:04.569870    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d4 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 39  33 35 38 00 42 08 08 86  |1b262.19358.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22276 chars]
	 >
	I0210 12:23:04.570023    5644 pod_ready.go:93] pod "etcd-multinode-032400" in "kube-system" namespace has status "Ready":"True"
	I0210 12:23:04.570023    5644 pod_ready.go:82] duration metric: took 8.2274ms for pod "etcd-multinode-032400" in "kube-system" namespace to be "Ready" ...
	I0210 12:23:04.570023    5644 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-multinode-032400" in "kube-system" namespace to be "Ready" ...
	I0210 12:23:04.570140    5644 type.go:168] "Request Body" body=""
	I0210 12:23:04.570140    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-032400
	I0210 12:23:04.570140    5644 round_trippers.go:476] Request Headers:
	I0210 12:23:04.570140    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:23:04.570140    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:23:04.573080    5644 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0210 12:23:04.573080    5644 round_trippers.go:584] Response Headers:
	I0210 12:23:04.573080    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:23:04.573080    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:23:04.573080    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:23:04.573080    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:23:04 GMT
	I0210 12:23:04.573080    5644 round_trippers.go:587]     Audit-Id: f1892471-b8b5-4c99-8590-483d84fbdee2
	I0210 12:23:04.573080    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:23:04.573493    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  af 35 0a af 1c 0a 1f 6b  75 62 65 2d 61 70 69 73  |.5.....kube-apis|
		00000020  65 72 76 65 72 2d 6d 75  6c 74 69 6e 6f 64 65 2d  |erver-multinode-|
		00000030  30 33 32 34 30 30 12 00  1a 0b 6b 75 62 65 2d 73  |032400....kube-s|
		00000040  79 73 74 65 6d 22 00 2a  24 39 65 36 38 38 61 61  |ystem".*$9e688aa|
		00000050  65 2d 30 39 64 61 2d 34  62 35 63 2d 62 61 34 64  |e-09da-4b5c-ba4d|
		00000060  2d 64 65 36 61 61 36 34  63 62 33 34 65 32 04 31  |-de6aa64cb34e2.1|
		00000070  38 36 36 38 00 42 08 08  e6 de a7 bd 06 10 00 5a  |8668.B.........Z|
		00000080  1b 0a 09 63 6f 6d 70 6f  6e 65 6e 74 12 0e 6b 75  |...component..ku|
		00000090  62 65 2d 61 70 69 73 65  72 76 65 72 5a 15 0a 04  |be-apiserverZ...|
		000000a0  74 69 65 72 12 0d 63 6f  6e 74 72 6f 6c 2d 70 6c  |tier..control-pl|
		000000b0  61 6e 65 62 56 0a 3f 6b  75 62 65 61 64 6d 2e 6b  |anebV.?kubeadm.k|
		000000c0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 6b 75 62  |ubernetes.io/ku [truncated 32856 chars]
	 >
	I0210 12:23:04.573728    5644 type.go:168] "Request Body" body=""
	I0210 12:23:04.573728    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:23:04.573809    5644 round_trippers.go:476] Request Headers:
	I0210 12:23:04.573809    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:23:04.573809    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:23:04.576088    5644 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0210 12:23:04.576088    5644 round_trippers.go:584] Response Headers:
	I0210 12:23:04.576088    5644 round_trippers.go:587]     Audit-Id: 9f28499a-43a9-464e-9a83-7358b77e06de
	I0210 12:23:04.576670    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:23:04.576670    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:23:04.576670    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:23:04.576670    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:23:04.576670    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:23:04 GMT
	I0210 12:23:04.576869    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d4 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 39  33 35 38 00 42 08 08 86  |1b262.19358.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22276 chars]
	 >
	I0210 12:23:04.577066    5644 pod_ready.go:93] pod "kube-apiserver-multinode-032400" in "kube-system" namespace has status "Ready":"True"
	I0210 12:23:04.577066    5644 pod_ready.go:82] duration metric: took 7.043ms for pod "kube-apiserver-multinode-032400" in "kube-system" namespace to be "Ready" ...
	I0210 12:23:04.577066    5644 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-multinode-032400" in "kube-system" namespace to be "Ready" ...
	I0210 12:23:04.577066    5644 type.go:168] "Request Body" body=""
	I0210 12:23:04.577198    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-032400
	I0210 12:23:04.577198    5644 round_trippers.go:476] Request Headers:
	I0210 12:23:04.577198    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:23:04.577244    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:23:04.584427    5644 round_trippers.go:581] Response Status: 200 OK in 7 milliseconds
	I0210 12:23:04.584804    5644 round_trippers.go:584] Response Headers:
	I0210 12:23:04.584804    5644 round_trippers.go:587]     Audit-Id: 77c86ba3-7518-4d03-b00f-0899ac3c7958
	I0210 12:23:04.584804    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:23:04.584804    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:23:04.584850    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:23:04.584850    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:23:04.584850    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:23:04 GMT
	I0210 12:23:04.584850    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  df 31 0a 9b 1d 0a 28 6b  75 62 65 2d 63 6f 6e 74  |.1....(kube-cont|
		00000020  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 2d 6d  |roller-manager-m|
		00000030  75 6c 74 69 6e 6f 64 65  2d 30 33 32 34 30 30 12  |ultinode-032400.|
		00000040  00 1a 0b 6b 75 62 65 2d  73 79 73 74 65 6d 22 00  |...kube-system".|
		00000050  2a 24 63 33 65 35 30 35  34 30 2d 64 64 36 36 2d  |*$c3e50540-dd66-|
		00000060  34 37 61 30 2d 61 34 33  33 2d 36 32 32 64 66 35  |47a0-a433-622df5|
		00000070  39 66 62 34 34 31 32 04  31 38 38 32 38 00 42 08  |9fb4412.18828.B.|
		00000080  08 8b d4 a7 bd 06 10 00  5a 24 0a 09 63 6f 6d 70  |........Z$..comp|
		00000090  6f 6e 65 6e 74 12 17 6b  75 62 65 2d 63 6f 6e 74  |onent..kube-cont|
		000000a0  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 5a 15  |roller-managerZ.|
		000000b0  0a 04 74 69 65 72 12 0d  63 6f 6e 74 72 6f 6c 2d  |..tier..control-|
		000000c0  70 6c 61 6e 65 62 3d 0a  19 6b 75 62 65 72 6e 65  |planeb=..kubern [truncated 30565 chars]
	 >
	I0210 12:23:04.584850    5644 type.go:168] "Request Body" body=""
	I0210 12:23:04.584850    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:23:04.584850    5644 round_trippers.go:476] Request Headers:
	I0210 12:23:04.584850    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:23:04.584850    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:23:04.587586    5644 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0210 12:23:04.587586    5644 round_trippers.go:584] Response Headers:
	I0210 12:23:04.587586    5644 round_trippers.go:587]     Audit-Id: f06425a9-bfc8-4297-8a8c-2eb8e0679e9e
	I0210 12:23:04.587586    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:23:04.587586    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:23:04.587586    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:23:04.587586    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:23:04.587586    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:23:04 GMT
	I0210 12:23:04.587586    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d4 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 39  33 35 38 00 42 08 08 86  |1b262.19358.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22276 chars]
	 >
	I0210 12:23:04.588604    5644 pod_ready.go:93] pod "kube-controller-manager-multinode-032400" in "kube-system" namespace has status "Ready":"True"
	I0210 12:23:04.588604    5644 pod_ready.go:82] duration metric: took 11.537ms for pod "kube-controller-manager-multinode-032400" in "kube-system" namespace to be "Ready" ...
	I0210 12:23:04.588666    5644 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-rrh82" in "kube-system" namespace to be "Ready" ...
	I0210 12:23:04.588728    5644 type.go:168] "Request Body" body=""
	I0210 12:23:04.588784    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rrh82
	I0210 12:23:04.588784    5644 round_trippers.go:476] Request Headers:
	I0210 12:23:04.588784    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:23:04.588845    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:23:04.591026    5644 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0210 12:23:04.591026    5644 round_trippers.go:584] Response Headers:
	I0210 12:23:04.591026    5644 round_trippers.go:587]     Audit-Id: 86c73263-78de-48e3-a507-8c865d0a1f99
	I0210 12:23:04.591026    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:23:04.591026    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:23:04.591026    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:23:04.591026    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:23:04.591026    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:23:04 GMT
	I0210 12:23:04.591026    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  91 26 0a c1 14 0a 10 6b  75 62 65 2d 70 72 6f 78  |.&.....kube-prox|
		00000020  79 2d 72 72 68 38 32 12  0b 6b 75 62 65 2d 70 72  |y-rrh82..kube-pr|
		00000030  6f 78 79 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |oxy-..kube-syste|
		00000040  6d 22 00 2a 24 39 61 64  37 66 32 38 31 2d 66 30  |m".*$9ad7f281-f0|
		00000050  32 32 2d 34 66 33 62 2d  62 32 30 36 2d 33 39 63  |22-4f3b-b206-39c|
		00000060  65 34 32 37 31 33 63 66  39 32 04 31 38 34 34 38  |e42713cf92.18448|
		00000070  00 42 08 08 92 d4 a7 bd  06 10 00 5a 26 0a 18 63  |.B.........Z&..c|
		00000080  6f 6e 74 72 6f 6c 6c 65  72 2d 72 65 76 69 73 69  |ontroller-revisi|
		00000090  6f 6e 2d 68 61 73 68 12  0a 35 36 36 64 37 62 39  |on-hash..566d7b9|
		000000a0  66 38 35 5a 15 0a 07 6b  38 73 2d 61 70 70 12 0a  |f85Z...k8s-app..|
		000000b0  6b 75 62 65 2d 70 72 6f  78 79 5a 1c 0a 17 70 6f  |kube-proxyZ...po|
		000000c0  64 2d 74 65 6d 70 6c 61  74 65 2d 67 65 6e 65 72  |d-template-gene [truncated 23220 chars]
	 >
	I0210 12:23:04.591026    5644 type.go:168] "Request Body" body=""
	I0210 12:23:04.591026    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:23:04.591026    5644 round_trippers.go:476] Request Headers:
	I0210 12:23:04.592049    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:23:04.592049    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:23:04.594230    5644 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0210 12:23:04.594230    5644 round_trippers.go:584] Response Headers:
	I0210 12:23:04.594230    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:23:04.594230    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:23:04.594230    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:23:04.594230    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:23:04.594230    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:23:04 GMT
	I0210 12:23:04.594230    5644 round_trippers.go:587]     Audit-Id: 1a201614-2286-469c-a761-ec903516c3ef
	I0210 12:23:04.594787    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d4 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 39  33 35 38 00 42 08 08 86  |1b262.19358.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22276 chars]
	 >
	I0210 12:23:04.594875    5644 pod_ready.go:93] pod "kube-proxy-rrh82" in "kube-system" namespace has status "Ready":"True"
	I0210 12:23:04.594875    5644 pod_ready.go:82] duration metric: took 6.2094ms for pod "kube-proxy-rrh82" in "kube-system" namespace to be "Ready" ...
	I0210 12:23:04.594926    5644 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-tbtqd" in "kube-system" namespace to be "Ready" ...
	I0210 12:23:04.594926    5644 type.go:168] "Request Body" body=""
	I0210 12:23:04.749138    5644 request.go:661] Waited for 154.1407ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tbtqd
	I0210 12:23:04.749138    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tbtqd
	I0210 12:23:04.749138    5644 round_trippers.go:476] Request Headers:
	I0210 12:23:04.749138    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:23:04.749138    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:23:04.752889    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:23:04.752959    5644 round_trippers.go:584] Response Headers:
	I0210 12:23:04.752959    5644 round_trippers.go:587]     Audit-Id: e9ee84e5-6b97-4124-b844-d6a9045602da
	I0210 12:23:04.752959    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:23:04.752959    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:23:04.752959    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:23:04.752959    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:23:04.752959    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:23:04 GMT
	I0210 12:23:04.756913    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  aa 26 0a c3 15 0a 10 6b  75 62 65 2d 70 72 6f 78  |.&.....kube-prox|
		00000020  79 2d 74 62 74 71 64 12  0b 6b 75 62 65 2d 70 72  |y-tbtqd..kube-pr|
		00000030  6f 78 79 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |oxy-..kube-syste|
		00000040  6d 22 00 2a 24 62 64 66  38 63 62 31 30 2d 30 35  |m".*$bdf8cb10-05|
		00000050  62 65 2d 34 36 30 62 2d  61 39 63 36 2d 62 63 35  |be-460b-a9c6-bc5|
		00000060  31 65 61 38 38 34 32 36  38 32 04 31 37 34 32 38  |1ea8842682.17428|
		00000070  00 42 08 08 e9 d7 a7 bd  06 10 00 5a 26 0a 18 63  |.B.........Z&..c|
		00000080  6f 6e 74 72 6f 6c 6c 65  72 2d 72 65 76 69 73 69  |ontroller-revisi|
		00000090  6f 6e 2d 68 61 73 68 12  0a 35 36 36 64 37 62 39  |on-hash..566d7b9|
		000000a0  66 38 35 5a 15 0a 07 6b  38 73 2d 61 70 70 12 0a  |f85Z...k8s-app..|
		000000b0  6b 75 62 65 2d 70 72 6f  78 79 5a 1c 0a 17 70 6f  |kube-proxyZ...po|
		000000c0  64 2d 74 65 6d 70 6c 61  74 65 2d 67 65 6e 65 72  |d-template-gene [truncated 23308 chars]
	 >
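
The request.go:661 "Waited for ... due to client-side throttling, not priority and fairness" lines come from client-go's token-bucket rate limiter, which queues requests once they exceed the configured QPS; the tight pod/node polling above trips it repeatedly. When left unset, client-go defaults to QPS 5 and Burst 10. A minimal sketch of raising those limits on a rest.Config (illustrative values, not what minikube itself configures):

    package main

    import (
        "fmt"

        "k8s.io/client-go/rest"
    )

    // raiseClientRateLimits bumps client-go's token-bucket limiter so a burst of
    // polling (as in the pod_ready loop above) is not queued client-side.
    // QPS and Burst are real rest.Config fields; the values are illustrative.
    func raiseClientRateLimits(cfg *rest.Config) {
        cfg.QPS = 50    // sustained requests per second before the limiter kicks in
        cfg.Burst = 100 // short bursts allowed above QPS
    }

    func main() {
        cfg := &rest.Config{Host: "https://172.29.129.181:8443"} // host taken from the log
        raiseClientRateLimits(cfg)
        fmt.Printf("QPS=%v Burst=%v\n", cfg.QPS, cfg.Burst)
    }
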
	I0210 12:23:04.757099    5644 type.go:168] "Request Body" body=""
	I0210 12:23:04.949585    5644 request.go:661] Waited for 192.4835ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.129.181:8443/api/v1/nodes/multinode-032400-m03
	I0210 12:23:04.949585    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400-m03
	I0210 12:23:04.949991    5644 round_trippers.go:476] Request Headers:
	I0210 12:23:04.949991    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:23:04.950038    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:23:04.953804    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:23:04.953804    5644 round_trippers.go:584] Response Headers:
	I0210 12:23:04.953804    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:23:04.953804    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:23:04.953804    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:23:04.953804    5644 round_trippers.go:587]     Content-Length: 3883
	I0210 12:23:04.953804    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:23:04 GMT
	I0210 12:23:04.953804    5644 round_trippers.go:587]     Audit-Id: e18d113b-8d2c-4579-9abc-0d57b2ac43b5
	I0210 12:23:04.953804    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:23:04.953804    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 94 1e 0a eb 12 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 33 12 00 1a 00  |e-032400-m03....|
		00000030  22 00 2a 24 65 33 35 38  36 61 30 65 2d 35 36 63  |".*$e3586a0e-56c|
		00000040  30 2d 34 65 34 39 2d 39  64 64 33 2d 38 33 65 35  |0-4e49-9dd3-83e5|
		00000050  32 39 63 66 65 35 63 34  32 04 31 38 35 34 38 00  |29cfe5c42.18548.|
		00000060  42 08 08 db dc a7 bd 06  10 00 5a 20 0a 17 62 65  |B.........Z ..be|
		00000070  74 61 2e 6b 75 62 65 72  6e 65 74 65 73 2e 69 6f  |ta.kubernetes.io|
		00000080  2f 61 72 63 68 12 05 61  6d 64 36 34 5a 1e 0a 15  |/arch..amd64Z...|
		00000090  62 65 74 61 2e 6b 75 62  65 72 6e 65 74 65 73 2e  |beta.kubernetes.|
		000000a0  69 6f 2f 6f 73 12 05 6c  69 6e 75 78 5a 1b 0a 12  |io/os..linuxZ...|
		000000b0  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 61 72  |kubernetes.io/ar|
		000000c0  63 68 12 05 61 6d 64 36  34 5a 2e 0a 16 6b 75 62  |ch..amd64Z...ku [truncated 18168 chars]
	 >
	I0210 12:23:04.954336    5644 pod_ready.go:98] node "multinode-032400-m03" hosting pod "kube-proxy-tbtqd" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-032400-m03" has status "Ready":"Unknown"
	I0210 12:23:04.954336    5644 pod_ready.go:82] duration metric: took 359.4056ms for pod "kube-proxy-tbtqd" in "kube-system" namespace to be "Ready" ...
	E0210 12:23:04.954336    5644 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-032400-m03" hosting pod "kube-proxy-tbtqd" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-032400-m03" has status "Ready":"Unknown"
	I0210 12:23:04.954336    5644 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-xltxj" in "kube-system" namespace to be "Ready" ...
	I0210 12:23:04.954428    5644 type.go:168] "Request Body" body=""
	I0210 12:23:05.149324    5644 request.go:661] Waited for 194.8134ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xltxj
	I0210 12:23:05.149324    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xltxj
	I0210 12:23:05.149324    5644 round_trippers.go:476] Request Headers:
	I0210 12:23:05.149324    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:23:05.149324    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:23:05.153248    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:23:05.153248    5644 round_trippers.go:584] Response Headers:
	I0210 12:23:05.153248    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:23:05.153248    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:23:05.153248    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:23:05.153248    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:23:05 GMT
	I0210 12:23:05.153248    5644 round_trippers.go:587]     Audit-Id: 5d611965-8565-4d4b-a8e1-3414a6f9670a
	I0210 12:23:05.153248    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:23:05.153767    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  aa 26 0a c3 15 0a 10 6b  75 62 65 2d 70 72 6f 78  |.&.....kube-prox|
		00000020  79 2d 78 6c 74 78 6a 12  0b 6b 75 62 65 2d 70 72  |y-xltxj..kube-pr|
		00000030  6f 78 79 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |oxy-..kube-syste|
		00000040  6d 22 00 2a 24 39 61 35  65 35 38 62 63 2d 35 34  |m".*$9a5e58bc-54|
		00000050  62 31 2d 34 33 62 39 2d  61 38 38 39 2d 30 64 35  |b1-43b9-a889-0d5|
		00000060  30 64 34 33 35 61 66 38  33 32 04 31 39 34 31 38  |0d435af832.19418|
		00000070  00 42 08 08 d0 d5 a7 bd  06 10 00 5a 26 0a 18 63  |.B.........Z&..c|
		00000080  6f 6e 74 72 6f 6c 6c 65  72 2d 72 65 76 69 73 69  |ontroller-revisi|
		00000090  6f 6e 2d 68 61 73 68 12  0a 35 36 36 64 37 62 39  |on-hash..566d7b9|
		000000a0  66 38 35 5a 15 0a 07 6b  38 73 2d 61 70 70 12 0a  |f85Z...k8s-app..|
		000000b0  6b 75 62 65 2d 70 72 6f  78 79 5a 1c 0a 17 70 6f  |kube-proxyZ...po|
		000000c0  64 2d 74 65 6d 70 6c 61  74 65 2d 67 65 6e 65 72  |d-template-gene [truncated 23308 chars]
	 >
	I0210 12:23:05.153951    5644 type.go:168] "Request Body" body=""
	I0210 12:23:05.349864    5644 request.go:661] Waited for 195.9106ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.129.181:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:23:05.349864    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:23:05.349864    5644 round_trippers.go:476] Request Headers:
	I0210 12:23:05.349864    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:23:05.349864    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:23:05.352992    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:23:05.352992    5644 round_trippers.go:584] Response Headers:
	I0210 12:23:05.353979    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:23:05.353979    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:23:05.353979    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:23:05.353979    5644 round_trippers.go:587]     Content-Length: 4039
	I0210 12:23:05.353979    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:23:05 GMT
	I0210 12:23:05.353979    5644 round_trippers.go:587]     Audit-Id: 45e2b683-0eee-4959-870a-11626cfadfed
	I0210 12:23:05.353979    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:23:05.354441    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 b0 1f 0a f9 12 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 62 30 35 36  31 63 32 32 2d 64 62 66  |".*$b0561c22-dbf|
		00000040  32 2d 34 32 61 30 2d 62  64 66 33 2d 34 65 30 61  |2-42a0-bdf3-4e0a|
		00000050  62 37 61 39 61 66 30 65  32 04 31 39 35 31 38 00  |b7a9af0e2.19518.|
		00000060  42 08 08 d0 d5 a7 bd 06  10 00 5a 20 0a 17 62 65  |B.........Z ..be|
		00000070  74 61 2e 6b 75 62 65 72  6e 65 74 65 73 2e 69 6f  |ta.kubernetes.io|
		00000080  2f 61 72 63 68 12 05 61  6d 64 36 34 5a 1e 0a 15  |/arch..amd64Z...|
		00000090  62 65 74 61 2e 6b 75 62  65 72 6e 65 74 65 73 2e  |beta.kubernetes.|
		000000a0  69 6f 2f 6f 73 12 05 6c  69 6e 75 78 5a 1b 0a 12  |io/os..linuxZ...|
		000000b0  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 61 72  |kubernetes.io/ar|
		000000c0  63 68 12 05 61 6d 64 36  34 5a 2e 0a 16 6b 75 62  |ch..amd64Z...ku [truncated 18954 chars]
	 >
	I0210 12:23:05.354610    5644 pod_ready.go:98] node "multinode-032400-m02" hosting pod "kube-proxy-xltxj" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-032400-m02" has status "Ready":"Unknown"
	I0210 12:23:05.354701    5644 pod_ready.go:82] duration metric: took 400.3605ms for pod "kube-proxy-xltxj" in "kube-system" namespace to be "Ready" ...
	E0210 12:23:05.354760    5644 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-032400-m02" hosting pod "kube-proxy-xltxj" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-032400-m02" has status "Ready":"Unknown"
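
For each kube-proxy pod, the wait loop first fetches the pod and then its hosting node; when the node's Ready condition is not True (here "Unknown" for multinode-032400-m02 and -m03, which typically means the kubelet has stopped posting status), the wait is skipped rather than failed, as the WaitExtra lines above show. A condensed sketch of that node check, assuming k8s.io/api/core/v1 types (not minikube's actual pod_ready.go code):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    // nodeReady reports whether a node's Ready condition is True. A status of
    // "Unknown" usually means the kubelet has stopped posting status updates.
    func nodeReady(node *corev1.Node) bool {
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        n := &corev1.Node{Status: corev1.NodeStatus{Conditions: []corev1.NodeCondition{
            {Type: corev1.NodeReady, Status: corev1.ConditionUnknown},
        }}}
        fmt.Println(nodeReady(n)) // false -> the pod wait is skipped
    }
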
	I0210 12:23:05.354760    5644 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-multinode-032400" in "kube-system" namespace to be "Ready" ...
	I0210 12:23:05.354760    5644 type.go:168] "Request Body" body=""
	I0210 12:23:05.549024    5644 request.go:661] Waited for 194.2627ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-032400
	I0210 12:23:05.549024    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-032400
	I0210 12:23:05.549024    5644 round_trippers.go:476] Request Headers:
	I0210 12:23:05.549024    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:23:05.549024    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:23:05.553292    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:23:05.553412    5644 round_trippers.go:584] Response Headers:
	I0210 12:23:05.553412    5644 round_trippers.go:587]     Audit-Id: a62493db-9831-47bd-ba0d-430197aabcc9
	I0210 12:23:05.553412    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:23:05.553412    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:23:05.553412    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:23:05.553412    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:23:05.553412    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:23:05 GMT
	I0210 12:23:05.553709    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  ea 23 0a 83 18 0a 1f 6b  75 62 65 2d 73 63 68 65  |.#.....kube-sche|
		00000020  64 75 6c 65 72 2d 6d 75  6c 74 69 6e 6f 64 65 2d  |duler-multinode-|
		00000030  30 33 32 34 30 30 12 00  1a 0b 6b 75 62 65 2d 73  |032400....kube-s|
		00000040  79 73 74 65 6d 22 00 2a  24 63 66 62 63 62 66 33  |ystem".*$cfbcbf3|
		00000050  37 2d 36 35 63 65 2d 34  62 32 31 2d 39 66 32 36  |7-65ce-4b21-9f26|
		00000060  2d 31 38 64 61 66 63 36  65 34 34 38 30 32 04 31  |-18dafc6e44802.1|
		00000070  38 37 38 38 00 42 08 08  88 d4 a7 bd 06 10 00 5a  |8788.B.........Z|
		00000080  1b 0a 09 63 6f 6d 70 6f  6e 65 6e 74 12 0e 6b 75  |...component..ku|
		00000090  62 65 2d 73 63 68 65 64  75 6c 65 72 5a 15 0a 04  |be-schedulerZ...|
		000000a0  74 69 65 72 12 0d 63 6f  6e 74 72 6f 6c 2d 70 6c  |tier..control-pl|
		000000b0  61 6e 65 62 3d 0a 19 6b  75 62 65 72 6e 65 74 65  |aneb=..kubernete|
		000000c0  73 2e 69 6f 2f 63 6f 6e  66 69 67 2e 68 61 73 68  |s.io/config.has [truncated 21728 chars]
	 >
	I0210 12:23:05.553968    5644 type.go:168] "Request Body" body=""
	I0210 12:23:05.748898    5644 request.go:661] Waited for 194.8641ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:23:05.748898    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:23:05.748898    5644 round_trippers.go:476] Request Headers:
	I0210 12:23:05.748898    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:23:05.748898    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:23:05.752583    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:23:05.752583    5644 round_trippers.go:584] Response Headers:
	I0210 12:23:05.752668    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:23:05.752668    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:23:05.752668    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:23:05.752668    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:23:05 GMT
	I0210 12:23:05.752668    5644 round_trippers.go:587]     Audit-Id: d68c1b1d-4902-42c5-acde-dcd187448f59
	I0210 12:23:05.752668    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:23:05.752937    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d4 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 39  33 35 38 00 42 08 08 86  |1b262.19358.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22276 chars]
	 >
	I0210 12:23:05.753140    5644 pod_ready.go:93] pod "kube-scheduler-multinode-032400" in "kube-system" namespace has status "Ready":"True"
	I0210 12:23:05.753140    5644 pod_ready.go:82] duration metric: took 398.3762ms for pod "kube-scheduler-multinode-032400" in "kube-system" namespace to be "Ready" ...
	I0210 12:23:05.753194    5644 pod_ready.go:39] duration metric: took 16.7112371s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
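
Each `has status "Ready":"True"` line above corresponds to the pod's PodReady condition being True. A condensed sketch of that check (hypothetical helper name; a simplification of what minikube's pod_ready.go evaluates, not its exact implementation):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    // podReady mirrors the check behind the `has status "Ready":"True"` log
    // lines: a pod is Ready when its PodReady condition is True.
    func podReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        pod := &corev1.Pod{Status: corev1.PodStatus{Conditions: []corev1.PodCondition{
            {Type: corev1.PodReady, Status: corev1.ConditionTrue},
        }}}
        fmt.Println(podReady(pod)) // true
    }
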
	I0210 12:23:05.753194    5644 api_server.go:52] waiting for apiserver process to appear ...
	I0210 12:23:05.761213    5644 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0210 12:23:05.786460    5644 command_runner.go:130] > f368bd876774
	I0210 12:23:05.788040    5644 logs.go:282] 1 containers: [f368bd876774]
	I0210 12:23:05.795134    5644 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0210 12:23:05.820378    5644 command_runner.go:130] > 2c0b97381825
	I0210 12:23:05.823355    5644 logs.go:282] 1 containers: [2c0b97381825]
	I0210 12:23:05.833034    5644 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0210 12:23:05.859437    5644 command_runner.go:130] > 9240ce80f94c
	I0210 12:23:05.859437    5644 command_runner.go:130] > c5b854dbb912
	I0210 12:23:05.861148    5644 logs.go:282] 2 containers: [9240ce80f94c c5b854dbb912]
	I0210 12:23:05.868511    5644 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0210 12:23:05.891115    5644 command_runner.go:130] > 440b6adf4512
	I0210 12:23:05.891115    5644 command_runner.go:130] > adf520f9b9d7
	I0210 12:23:05.891115    5644 logs.go:282] 2 containers: [440b6adf4512 adf520f9b9d7]
	I0210 12:23:05.899436    5644 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0210 12:23:05.923653    5644 command_runner.go:130] > 6640b4e3d696
	I0210 12:23:05.924565    5644 command_runner.go:130] > 148309413de8
	I0210 12:23:05.925704    5644 logs.go:282] 2 containers: [6640b4e3d696 148309413de8]
	I0210 12:23:05.932215    5644 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0210 12:23:05.959696    5644 command_runner.go:130] > bd1666238ae6
	I0210 12:23:05.959696    5644 command_runner.go:130] > 9408ce83d7d3
	I0210 12:23:05.959696    5644 logs.go:282] 2 containers: [bd1666238ae6 9408ce83d7d3]
	I0210 12:23:05.967720    5644 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0210 12:23:05.994331    5644 command_runner.go:130] > efc2d4164d81
	I0210 12:23:05.994405    5644 command_runner.go:130] > 4439940fa5f4
	I0210 12:23:05.994405    5644 logs.go:282] 2 containers: [efc2d4164d81 4439940fa5f4]
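
Before gathering logs, minikube resolves each component's container ID by filtering docker ps -a on the k8s_<component> name prefix and printing only {{.ID}}, then tails each container's last 400 log lines. A sketch of the same listing run locally with os/exec (minikube itself runs the command inside the VM through its ssh_runner):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs lists container IDs whose names match the given filter --
    // the same `docker ps` invocation shown in the log above.
    func containerIDs(nameFilter string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name="+nameFilter,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        ids, err := containerIDs("k8s_kube-apiserver")
        if err != nil {
            panic(err)
        }
        fmt.Println(ids)
    }
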
	I0210 12:23:05.994405    5644 logs.go:123] Gathering logs for kube-controller-manager [bd1666238ae6] ...
	I0210 12:23:05.994477    5644 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd1666238ae6"
	I0210 12:23:06.022795    5644 command_runner.go:130] ! I0210 12:21:56.136957       1 serving.go:386] Generated self-signed cert in-memory
	I0210 12:23:06.022795    5644 command_runner.go:130] ! I0210 12:21:57.522140       1 controllermanager.go:185] "Starting" version="v1.32.1"
	I0210 12:23:06.023041    5644 command_runner.go:130] ! I0210 12:21:57.522494       1 controllermanager.go:187] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0210 12:23:06.023041    5644 command_runner.go:130] ! I0210 12:21:57.526750       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0210 12:23:06.023041    5644 command_runner.go:130] ! I0210 12:21:57.527225       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0210 12:23:06.023100    5644 command_runner.go:130] ! I0210 12:21:57.527482       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0210 12:23:06.023127    5644 command_runner.go:130] ! I0210 12:21:57.527780       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0210 12:23:06.023127    5644 command_runner.go:130] ! I0210 12:22:00.130437       1 controllermanager.go:765] "Started controller" controller="serviceaccount-token-controller"
	I0210 12:23:06.023127    5644 command_runner.go:130] ! I0210 12:22:00.131309       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0210 12:23:06.023127    5644 command_runner.go:130] ! I0210 12:22:00.141220       1 controllermanager.go:765] "Started controller" controller="deployment-controller"
	I0210 12:23:06.023127    5644 command_runner.go:130] ! I0210 12:22:00.141440       1 deployment_controller.go:173] "Starting controller" logger="deployment-controller" controller="deployment"
	I0210 12:23:06.023127    5644 command_runner.go:130] ! I0210 12:22:00.141453       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0210 12:23:06.023127    5644 command_runner.go:130] ! I0210 12:22:00.144469       1 controllermanager.go:765] "Started controller" controller="replicaset-controller"
	I0210 12:23:06.023127    5644 command_runner.go:130] ! I0210 12:22:00.144719       1 replica_set.go:217] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0210 12:23:06.023127    5644 command_runner.go:130] ! I0210 12:22:00.144731       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0210 12:23:06.023127    5644 command_runner.go:130] ! I0210 12:22:00.152448       1 controllermanager.go:765] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0210 12:23:06.023127    5644 command_runner.go:130] ! I0210 12:22:00.152587       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0210 12:23:06.023127    5644 command_runner.go:130] ! I0210 12:22:00.152599       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0210 12:23:06.023127    5644 command_runner.go:130] ! I0210 12:22:00.158456       1 controllermanager.go:765] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0210 12:23:06.023127    5644 command_runner.go:130] ! I0210 12:22:00.158611       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0210 12:23:06.023127    5644 command_runner.go:130] ! I0210 12:22:00.162098       1 controllermanager.go:765] "Started controller" controller="bootstrap-signer-controller"
	I0210 12:23:06.023127    5644 command_runner.go:130] ! I0210 12:22:00.162345       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0210 12:23:06.023127    5644 command_runner.go:130] ! I0210 12:22:00.162310       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0210 12:23:06.023127    5644 command_runner.go:130] ! I0210 12:22:00.234708       1 shared_informer.go:320] Caches are synced for tokens
	I0210 12:23:06.023127    5644 command_runner.go:130] ! I0210 12:22:00.279835       1 controllermanager.go:765] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0210 12:23:06.023127    5644 command_runner.go:130] ! I0210 12:22:00.279920       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0210 12:23:06.023127    5644 command_runner.go:130] ! I0210 12:22:00.284387       1 controllermanager.go:765] "Started controller" controller="serviceaccount-controller"
	I0210 12:23:06.023127    5644 command_runner.go:130] ! I0210 12:22:00.284535       1 serviceaccounts_controller.go:114] "Starting service account controller" logger="serviceaccount-controller"
	I0210 12:23:06.023127    5644 command_runner.go:130] ! I0210 12:22:00.284562       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0210 12:23:06.023127    5644 command_runner.go:130] ! I0210 12:22:00.327944       1 namespace_controller.go:202] "Starting namespace controller" logger="namespace-controller"
	I0210 12:23:06.023127    5644 command_runner.go:130] ! I0210 12:22:00.330591       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0210 12:23:06.023127    5644 command_runner.go:130] ! I0210 12:22:00.327092       1 controllermanager.go:765] "Started controller" controller="namespace-controller"
	I0210 12:23:06.023127    5644 command_runner.go:130] ! I0210 12:22:00.346573       1 controllermanager.go:765] "Started controller" controller="disruption-controller"
	I0210 12:23:06.023127    5644 command_runner.go:130] ! I0210 12:22:00.346887       1 disruption.go:452] "Sending events to api server." logger="disruption-controller"
	I0210 12:23:06.023127    5644 command_runner.go:130] ! I0210 12:22:00.347031       1 disruption.go:463] "Starting disruption controller" logger="disruption-controller"
	I0210 12:23:06.023127    5644 command_runner.go:130] ! I0210 12:22:00.347049       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0210 12:23:06.023127    5644 command_runner.go:130] ! I0210 12:22:00.351852       1 controllermanager.go:765] "Started controller" controller="ttl-controller"
	I0210 12:23:06.023127    5644 command_runner.go:130] ! I0210 12:22:00.351879       1 controllermanager.go:723] "Skipping a cloud provider controller" controller="service-lb-controller"
	I0210 12:23:06.023127    5644 command_runner.go:130] ! I0210 12:22:00.351888       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="volumeattributesclass-protection-controller" requiredFeatureGates=["VolumeAttributesClass"]
	I0210 12:23:06.023127    5644 command_runner.go:130] ! I0210 12:22:00.354359       1 ttl_controller.go:127] "Starting TTL controller" logger="ttl-controller"
	I0210 12:23:06.023867    5644 command_runner.go:130] ! I0210 12:22:00.354950       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0210 12:23:06.023867    5644 command_runner.go:130] ! I0210 12:22:00.356835       1 controllermanager.go:765] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0210 12:23:06.023867    5644 command_runner.go:130] ! I0210 12:22:00.356898       1 publisher.go:107] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0210 12:23:06.023867    5644 command_runner.go:130] ! I0210 12:22:00.357416       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0210 12:23:06.023867    5644 command_runner.go:130] ! I0210 12:22:00.366037       1 controllermanager.go:765] "Started controller" controller="ephemeral-volume-controller"
	I0210 12:23:06.023867    5644 command_runner.go:130] ! I0210 12:22:00.367715       1 controller.go:173] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0210 12:23:06.023867    5644 command_runner.go:130] ! I0210 12:22:00.367737       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0210 12:23:06.023867    5644 command_runner.go:130] ! I0210 12:22:00.403903       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0210 12:23:06.023867    5644 command_runner.go:130] ! I0210 12:22:00.403962       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0210 12:23:06.023867    5644 command_runner.go:130] ! I0210 12:22:00.403986       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0210 12:23:06.023867    5644 command_runner.go:130] ! W0210 12:22:00.404002       1 shared_informer.go:597] resyncPeriod 20h28m18.826536572s is smaller than resyncCheckPeriod 23h57m31.932623877s and the informer has already started. Changing it to 23h57m31.932623877s
	I0210 12:23:06.023867    5644 command_runner.go:130] ! I0210 12:22:00.404054       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0210 12:23:06.023867    5644 command_runner.go:130] ! I0210 12:22:00.404070       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0210 12:23:06.023867    5644 command_runner.go:130] ! I0210 12:22:00.404083       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0210 12:23:06.023867    5644 command_runner.go:130] ! I0210 12:22:00.404215       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0210 12:23:06.023867    5644 command_runner.go:130] ! I0210 12:22:00.404325       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0210 12:23:06.023867    5644 command_runner.go:130] ! I0210 12:22:00.404361       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0210 12:23:06.023867    5644 command_runner.go:130] ! W0210 12:22:00.404375       1 shared_informer.go:597] resyncPeriod 19h58m52.828542411s is smaller than resyncCheckPeriod 23h57m31.932623877s and the informer has already started. Changing it to 23h57m31.932623877s
	I0210 12:23:06.023867    5644 command_runner.go:130] ! I0210 12:22:00.404428       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0210 12:23:06.023867    5644 command_runner.go:130] ! I0210 12:22:00.404501       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0210 12:23:06.023867    5644 command_runner.go:130] ! I0210 12:22:00.404548       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0210 12:23:06.024399    5644 command_runner.go:130] ! I0210 12:22:00.404581       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0210 12:23:06.024399    5644 command_runner.go:130] ! I0210 12:22:00.404616       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0210 12:23:06.024450    5644 command_runner.go:130] ! I0210 12:22:00.405026       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0210 12:23:06.024483    5644 command_runner.go:130] ! I0210 12:22:00.405085       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0210 12:23:06.024503    5644 command_runner.go:130] ! I0210 12:22:00.405102       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0210 12:23:06.024503    5644 command_runner.go:130] ! I0210 12:22:00.405117       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0210 12:23:06.024503    5644 command_runner.go:130] ! I0210 12:22:00.405133       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0210 12:23:06.024582    5644 command_runner.go:130] ! I0210 12:22:00.405155       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0210 12:23:06.024582    5644 command_runner.go:130] ! I0210 12:22:00.407446       1 controllermanager.go:765] "Started controller" controller="resourcequota-controller"
	I0210 12:23:06.024582    5644 command_runner.go:130] ! I0210 12:22:00.407747       1 resource_quota_controller.go:300] "Starting resource quota controller" logger="resourcequota-controller"
	I0210 12:23:06.024582    5644 command_runner.go:130] ! I0210 12:22:00.407814       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0210 12:23:06.024631    5644 command_runner.go:130] ! I0210 12:22:00.408146       1 resource_quota_monitor.go:308] "QuotaMonitor running" logger="resourcequota-controller"
	I0210 12:23:06.024655    5644 command_runner.go:130] ! I0210 12:22:00.416214       1 controllermanager.go:765] "Started controller" controller="cronjob-controller"
	I0210 12:23:06.024655    5644 command_runner.go:130] ! I0210 12:22:00.416425       1 controllermanager.go:723] "Skipping a cloud provider controller" controller="node-route-controller"
	I0210 12:23:06.024655    5644 command_runner.go:130] ! I0210 12:22:00.417001       1 cronjob_controllerv2.go:145] "Starting cronjob controller v2" logger="cronjob-controller"
	I0210 12:23:06.024655    5644 command_runner.go:130] ! I0210 12:22:00.418614       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0210 12:23:06.024655    5644 command_runner.go:130] ! I0210 12:22:00.448143       1 controllermanager.go:765] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0210 12:23:06.024655    5644 command_runner.go:130] ! I0210 12:22:00.448205       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0210 12:23:06.024655    5644 command_runner.go:130] ! I0210 12:22:00.453507       1 pvc_protection_controller.go:168] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0210 12:23:06.024655    5644 command_runner.go:130] ! I0210 12:22:00.453526       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0210 12:23:06.024655    5644 command_runner.go:130] ! I0210 12:22:00.457427       1 controllermanager.go:765] "Started controller" controller="pod-garbage-collector-controller"
	I0210 12:23:06.024655    5644 command_runner.go:130] ! I0210 12:22:00.457525       1 gc_controller.go:99] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0210 12:23:06.024655    5644 command_runner.go:130] ! I0210 12:22:00.457536       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0210 12:23:06.024655    5644 command_runner.go:130] ! I0210 12:22:00.461217       1 controllermanager.go:765] "Started controller" controller="job-controller"
	I0210 12:23:06.024655    5644 command_runner.go:130] ! I0210 12:22:00.461528       1 job_controller.go:243] "Starting job controller" logger="job-controller"
	I0210 12:23:06.024655    5644 command_runner.go:130] ! I0210 12:22:00.461540       1 shared_informer.go:313] Waiting for caches to sync for job
	I0210 12:23:06.024655    5644 command_runner.go:130] ! I0210 12:22:00.473609       1 controllermanager.go:765] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0210 12:23:06.024655    5644 command_runner.go:130] ! I0210 12:22:00.473750       1 horizontal.go:201] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0210 12:23:06.024655    5644 command_runner.go:130] ! I0210 12:22:00.476529       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0210 12:23:06.024655    5644 command_runner.go:130] ! I0210 12:22:00.478245       1 controllermanager.go:765] "Started controller" controller="persistentvolume-expander-controller"
	I0210 12:23:06.024655    5644 command_runner.go:130] ! I0210 12:22:00.478384       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0210 12:23:06.024655    5644 command_runner.go:130] ! I0210 12:22:00.478413       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0210 12:23:06.024655    5644 command_runner.go:130] ! I0210 12:22:00.486564       1 controllermanager.go:765] "Started controller" controller="ttl-after-finished-controller"
	I0210 12:23:06.024655    5644 command_runner.go:130] ! I0210 12:22:00.490692       1 ttlafterfinished_controller.go:112] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0210 12:23:06.024655    5644 command_runner.go:130] ! I0210 12:22:00.490721       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0210 12:23:06.024655    5644 command_runner.go:130] ! I0210 12:22:00.491067       1 controllermanager.go:765] "Started controller" controller="endpointslice-controller"
	I0210 12:23:06.024655    5644 command_runner.go:130] ! I0210 12:22:00.491429       1 endpointslice_controller.go:281] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0210 12:23:06.024655    5644 command_runner.go:130] ! I0210 12:22:00.492232       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0210 12:23:06.024655    5644 command_runner.go:130] ! I0210 12:22:00.495646       1 controllermanager.go:765] "Started controller" controller="endpointslice-mirroring-controller"
	I0210 12:23:06.024655    5644 command_runner.go:130] ! I0210 12:22:00.500509       1 endpointslicemirroring_controller.go:227] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0210 12:23:06.024655    5644 command_runner.go:130] ! I0210 12:22:00.500524       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0210 12:23:06.024655    5644 command_runner.go:130] ! I0210 12:22:00.515593       1 controllermanager.go:765] "Started controller" controller="garbage-collector-controller"
	I0210 12:23:06.024655    5644 command_runner.go:130] ! I0210 12:22:00.515770       1 garbagecollector.go:144] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0210 12:23:06.025185    5644 command_runner.go:130] ! I0210 12:22:00.515782       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0210 12:23:06.025185    5644 command_runner.go:130] ! I0210 12:22:00.515950       1 graph_builder.go:351] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0210 12:23:06.025185    5644 command_runner.go:130] ! I0210 12:22:00.525570       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0210 12:23:06.025237    5644 command_runner.go:130] ! I0210 12:22:00.525594       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0210 12:23:06.025237    5644 command_runner.go:130] ! I0210 12:22:00.525618       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0210 12:23:06.025237    5644 command_runner.go:130] ! I0210 12:22:00.525997       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0210 12:23:06.025237    5644 command_runner.go:130] ! I0210 12:22:00.526011       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0210 12:23:06.025237    5644 command_runner.go:130] ! I0210 12:22:00.526038       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0210 12:23:06.025237    5644 command_runner.go:130] ! I0210 12:22:00.526889       1 controllermanager.go:765] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0210 12:23:06.025237    5644 command_runner.go:130] ! I0210 12:22:00.526935       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0210 12:23:06.025237    5644 command_runner.go:130] ! I0210 12:22:00.526945       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0210 12:23:06.025237    5644 command_runner.go:130] ! I0210 12:22:00.526972       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0210 12:23:06.025237    5644 command_runner.go:130] ! I0210 12:22:00.526980       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0210 12:23:06.025237    5644 command_runner.go:130] ! I0210 12:22:00.527008       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0210 12:23:06.025237    5644 command_runner.go:130] ! I0210 12:22:00.527135       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0210 12:23:06.025237    5644 command_runner.go:130] ! W0210 12:22:00.695736       1 type.go:183] The watchlist request for nodes ended with an error, falling back to the standard LIST semantics, err = nodes is forbidden: User "system:serviceaccount:kube-system:node-controller" cannot watch resource "nodes" in API group "" at the cluster scope
	I0210 12:23:06.025237    5644 command_runner.go:130] ! I0210 12:22:00.710455       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0210 12:23:06.025237    5644 command_runner.go:130] ! I0210 12:22:00.710510       1 controllermanager.go:765] "Started controller" controller="node-ipam-controller"
	I0210 12:23:06.025237    5644 command_runner.go:130] ! I0210 12:22:00.710723       1 node_ipam_controller.go:141] "Starting ipam controller" logger="node-ipam-controller"
	I0210 12:23:06.025237    5644 command_runner.go:130] ! I0210 12:22:00.710737       1 shared_informer.go:313] Waiting for caches to sync for node
	I0210 12:23:06.025237    5644 command_runner.go:130] ! I0210 12:22:00.739126       1 node_lifecycle_controller.go:432] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0210 12:23:06.025237    5644 command_runner.go:130] ! I0210 12:22:00.739307       1 controllermanager.go:765] "Started controller" controller="node-lifecycle-controller"
	I0210 12:23:06.025237    5644 command_runner.go:130] ! I0210 12:22:00.739552       1 node_lifecycle_controller.go:466] "Sending events to api server" logger="node-lifecycle-controller"
	I0210 12:23:06.025237    5644 command_runner.go:130] ! I0210 12:22:00.739769       1 node_lifecycle_controller.go:477] "Starting node controller" logger="node-lifecycle-controller"
	I0210 12:23:06.025237    5644 command_runner.go:130] ! I0210 12:22:00.739879       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0210 12:23:06.025237    5644 command_runner.go:130] ! I0210 12:22:00.790336       1 controllermanager.go:765] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0210 12:23:06.025237    5644 command_runner.go:130] ! I0210 12:22:00.790542       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0210 12:23:06.025237    5644 command_runner.go:130] ! I0210 12:22:00.790764       1 attach_detach_controller.go:338] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0210 12:23:06.025237    5644 command_runner.go:130] ! I0210 12:22:00.790827       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0210 12:23:06.025237    5644 command_runner.go:130] ! I0210 12:22:00.837132       1 controllermanager.go:765] "Started controller" controller="endpoints-controller"
	I0210 12:23:06.025237    5644 command_runner.go:130] ! I0210 12:22:00.837610       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="selinux-warning-controller" requiredFeatureGates=["SELinuxChangePolicy"]
	I0210 12:23:06.025237    5644 command_runner.go:130] ! I0210 12:22:00.838001       1 endpoints_controller.go:182] "Starting endpoint controller" logger="endpoints-controller"
	I0210 12:23:06.025237    5644 command_runner.go:130] ! I0210 12:22:00.838149       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0210 12:23:06.025237    5644 command_runner.go:130] ! I0210 12:22:00.889036       1 controllermanager.go:765] "Started controller" controller="daemonset-controller"
	I0210 12:23:06.025237    5644 command_runner.go:130] ! I0210 12:22:00.889446       1 daemon_controller.go:294] "Starting daemon sets controller" logger="daemonset-controller"
	I0210 12:23:06.025767    5644 command_runner.go:130] ! I0210 12:22:00.889702       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0210 12:23:06.025767    5644 command_runner.go:130] ! I0210 12:22:00.947566       1 controllermanager.go:765] "Started controller" controller="token-cleaner-controller"
	I0210 12:23:06.025767    5644 command_runner.go:130] ! I0210 12:22:00.947979       1 tokencleaner.go:117] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0210 12:23:06.025767    5644 command_runner.go:130] ! I0210 12:22:00.948130       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0210 12:23:06.025813    5644 command_runner.go:130] ! I0210 12:22:00.948247       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0210 12:23:06.025813    5644 command_runner.go:130] ! I0210 12:22:00.998978       1 controllermanager.go:765] "Started controller" controller="persistentvolume-binder-controller"
	I0210 12:23:06.025852    5644 command_runner.go:130] ! I0210 12:22:00.999002       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="kube-apiserver-serving-clustertrustbundle-publisher-controller" requiredFeatureGates=["ClusterTrustBundle"]
	I0210 12:23:06.025852    5644 command_runner.go:130] ! I0210 12:22:00.999105       1 pv_controller_base.go:308] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0210 12:23:06.025852    5644 command_runner.go:130] ! I0210 12:22:00.999117       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0210 12:23:06.025890    5644 command_runner.go:130] ! I0210 12:22:01.040388       1 controllermanager.go:765] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0210 12:23:06.025890    5644 command_runner.go:130] ! I0210 12:22:01.040661       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0210 12:23:06.025922    5644 command_runner.go:130] ! I0210 12:22:01.041004       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0210 12:23:06.025922    5644 command_runner.go:130] ! I0210 12:22:01.087635       1 controllermanager.go:765] "Started controller" controller="taint-eviction-controller"
	I0210 12:23:06.025922    5644 command_runner.go:130] ! I0210 12:22:01.088431       1 controllermanager.go:723] "Skipping a cloud provider controller" controller="cloud-node-lifecycle-controller"
	I0210 12:23:06.025922    5644 command_runner.go:130] ! I0210 12:22:01.088403       1 taint_eviction.go:281] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0210 12:23:06.025922    5644 command_runner.go:130] ! I0210 12:22:01.088651       1 taint_eviction.go:287] "Sending events to api server" logger="taint-eviction-controller"
	I0210 12:23:06.025922    5644 command_runner.go:130] ! I0210 12:22:01.088700       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0210 12:23:06.025922    5644 command_runner.go:130] ! I0210 12:22:01.140802       1 controllermanager.go:765] "Started controller" controller="clusterrole-aggregation-controller"
	I0210 12:23:06.025922    5644 command_runner.go:130] ! I0210 12:22:01.140881       1 clusterroleaggregation_controller.go:194] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0210 12:23:06.025922    5644 command_runner.go:130] ! I0210 12:22:01.140893       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0210 12:23:06.025922    5644 command_runner.go:130] ! I0210 12:22:01.188353       1 controllermanager.go:765] "Started controller" controller="persistentvolume-protection-controller"
	I0210 12:23:06.025922    5644 command_runner.go:130] ! I0210 12:22:01.188708       1 controllermanager.go:743] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0210 12:23:06.025922    5644 command_runner.go:130] ! I0210 12:22:01.188662       1 pv_protection_controller.go:81] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0210 12:23:06.025922    5644 command_runner.go:130] ! I0210 12:22:01.189570       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0210 12:23:06.025922    5644 command_runner.go:130] ! I0210 12:22:01.238308       1 controllermanager.go:765] "Started controller" controller="replicationcontroller-controller"
	I0210 12:23:06.025922    5644 command_runner.go:130] ! I0210 12:22:01.239287       1 replica_set.go:217] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0210 12:23:06.025922    5644 command_runner.go:130] ! I0210 12:22:01.239614       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0210 12:23:06.025922    5644 command_runner.go:130] ! I0210 12:22:01.290486       1 controllermanager.go:765] "Started controller" controller="statefulset-controller"
	I0210 12:23:06.025922    5644 command_runner.go:130] ! I0210 12:22:01.297980       1 stateful_set.go:166] "Starting stateful set controller" logger="statefulset-controller"
	I0210 12:23:06.025922    5644 command_runner.go:130] ! I0210 12:22:01.298004       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0210 12:23:06.025922    5644 command_runner.go:130] ! I0210 12:22:01.330472       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0210 12:23:06.025922    5644 command_runner.go:130] ! I0210 12:22:01.360391       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0210 12:23:06.025922    5644 command_runner.go:130] ! I0210 12:22:01.379524       1 shared_informer.go:320] Caches are synced for expand
	I0210 12:23:06.025922    5644 command_runner.go:130] ! I0210 12:22:01.412039       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0210 12:23:06.025922    5644 command_runner.go:130] ! I0210 12:22:01.427926       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-032400\" does not exist"
	I0210 12:23:06.025922    5644 command_runner.go:130] ! I0210 12:22:01.429792       1 shared_informer.go:320] Caches are synced for cronjob
	I0210 12:23:06.025922    5644 command_runner.go:130] ! I0210 12:22:01.431083       1 shared_informer.go:320] Caches are synced for namespace
	I0210 12:23:06.025922    5644 command_runner.go:130] ! I0210 12:22:01.433127       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-032400-m02\" does not exist"
	I0210 12:23:06.025922    5644 command_runner.go:130] ! I0210 12:22:01.438586       1 shared_informer.go:320] Caches are synced for endpoint
	I0210 12:23:06.025922    5644 command_runner.go:130] ! I0210 12:22:01.455792       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-032400-m03\" does not exist"
	I0210 12:23:06.025922    5644 command_runner.go:130] ! I0210 12:22:01.443963       1 shared_informer.go:320] Caches are synced for deployment
	I0210 12:23:06.025922    5644 command_runner.go:130] ! I0210 12:22:01.458494       1 shared_informer.go:320] Caches are synced for GC
	I0210 12:23:06.025922    5644 command_runner.go:130] ! I0210 12:22:01.458605       1 shared_informer.go:320] Caches are synced for crt configmap
	I0210 12:23:06.026562    5644 command_runner.go:130] ! I0210 12:22:01.462564       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0210 12:23:06.026562    5644 command_runner.go:130] ! I0210 12:22:01.463137       1 shared_informer.go:320] Caches are synced for job
	I0210 12:23:06.026562    5644 command_runner.go:130] ! I0210 12:22:01.470663       1 shared_informer.go:320] Caches are synced for garbage collector
	I0210 12:23:06.026562    5644 command_runner.go:130] ! I0210 12:22:01.454359       1 shared_informer.go:320] Caches are synced for PVC protection
	I0210 12:23:06.026562    5644 command_runner.go:130] ! I0210 12:22:01.454660       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0210 12:23:06.026562    5644 command_runner.go:130] ! I0210 12:22:01.454672       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0210 12:23:06.026562    5644 command_runner.go:130] ! I0210 12:22:01.454682       1 shared_informer.go:320] Caches are synced for disruption
	I0210 12:23:06.026562    5644 command_runner.go:130] ! I0210 12:22:01.455335       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0210 12:23:06.026562    5644 command_runner.go:130] ! I0210 12:22:01.455353       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0210 12:23:06.026562    5644 command_runner.go:130] ! I0210 12:22:01.455645       1 shared_informer.go:320] Caches are synced for TTL
	I0210 12:23:06.026562    5644 command_runner.go:130] ! I0210 12:22:01.455857       1 shared_informer.go:320] Caches are synced for taint
	I0210 12:23:06.026562    5644 command_runner.go:130] ! I0210 12:22:01.479260       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0210 12:23:06.026562    5644 command_runner.go:130] ! I0210 12:22:01.455957       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0210 12:23:06.026562    5644 command_runner.go:130] ! I0210 12:22:01.480860       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0210 12:23:06.026562    5644 command_runner.go:130] ! I0210 12:22:01.471787       1 shared_informer.go:320] Caches are synced for ephemeral
	I0210 12:23:06.026562    5644 command_runner.go:130] ! I0210 12:22:01.488921       1 shared_informer.go:320] Caches are synced for HPA
	I0210 12:23:06.026562    5644 command_runner.go:130] ! I0210 12:22:01.489141       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0210 12:23:06.026562    5644 command_runner.go:130] ! I0210 12:22:01.489425       1 shared_informer.go:320] Caches are synced for service account
	I0210 12:23:06.026562    5644 command_runner.go:130] ! I0210 12:22:01.489837       1 shared_informer.go:320] Caches are synced for PV protection
	I0210 12:23:06.026562    5644 command_runner.go:130] ! I0210 12:22:01.490060       1 shared_informer.go:320] Caches are synced for daemon sets
	I0210 12:23:06.026562    5644 command_runner.go:130] ! I0210 12:22:01.492366       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0210 12:23:06.026562    5644 command_runner.go:130] ! I0210 12:22:01.492536       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0210 12:23:06.026562    5644 command_runner.go:130] ! I0210 12:22:01.492675       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-032400-m02"
	I0210 12:23:06.026562    5644 command_runner.go:130] ! I0210 12:22:01.492787       1 shared_informer.go:320] Caches are synced for attach detach
	I0210 12:23:06.026562    5644 command_runner.go:130] ! I0210 12:22:01.498224       1 shared_informer.go:320] Caches are synced for stateful set
	I0210 12:23:06.026562    5644 command_runner.go:130] ! I0210 12:22:01.499494       1 shared_informer.go:320] Caches are synced for persistent volume
	I0210 12:23:06.026562    5644 command_runner.go:130] ! I0210 12:22:01.515907       1 shared_informer.go:320] Caches are synced for garbage collector
	I0210 12:23:06.026562    5644 command_runner.go:130] ! I0210 12:22:01.518475       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0210 12:23:06.026562    5644 command_runner.go:130] ! I0210 12:22:01.518619       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0210 12:23:06.026562    5644 command_runner.go:130] ! I0210 12:22:01.517754       1 shared_informer.go:320] Caches are synced for node
	I0210 12:23:06.026562    5644 command_runner.go:130] ! I0210 12:22:01.519209       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0210 12:23:06.026562    5644 command_runner.go:130] ! I0210 12:22:01.519352       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0210 12:23:06.026562    5644 command_runner.go:130] ! I0210 12:22:01.517867       1 shared_informer.go:320] Caches are synced for resource quota
	I0210 12:23:06.026562    5644 command_runner.go:130] ! I0210 12:22:01.521228       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0210 12:23:06.026562    5644 command_runner.go:130] ! I0210 12:22:01.521505       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0210 12:23:06.027100    5644 command_runner.go:130] ! I0210 12:22:01.521662       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400"
	I0210 12:23:06.027100    5644 command_runner.go:130] ! I0210 12:22:01.521756       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:23:06.027100    5644 command_runner.go:130] ! I0210 12:22:01.521924       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:06.027100    5644 command_runner.go:130] ! I0210 12:22:01.522649       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-032400-m02"
	I0210 12:23:06.027150    5644 command_runner.go:130] ! I0210 12:22:01.522926       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-032400-m03"
	I0210 12:23:06.027150    5644 command_runner.go:130] ! I0210 12:22:01.523055       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-032400"
	I0210 12:23:06.027150    5644 command_runner.go:130] ! I0210 12:22:01.522650       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:06.027150    5644 command_runner.go:130] ! I0210 12:22:01.523304       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0210 12:23:06.027150    5644 command_runner.go:130] ! I0210 12:22:01.526544       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0210 12:23:06.027150    5644 command_runner.go:130] ! I0210 12:22:01.526740       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0210 12:23:06.027150    5644 command_runner.go:130] ! I0210 12:22:01.527233       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0210 12:23:06.027150    5644 command_runner.go:130] ! I0210 12:22:01.527235       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0210 12:23:06.027150    5644 command_runner.go:130] ! I0210 12:22:01.531258       1 shared_informer.go:320] Caches are synced for resource quota
	I0210 12:23:06.027150    5644 command_runner.go:130] ! I0210 12:22:01.620608       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:06.027150    5644 command_runner.go:130] ! I0210 12:22:01.660535       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="183.150017ms"
	I0210 12:23:06.027150    5644 command_runner.go:130] ! I0210 12:22:01.660786       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="196.91µs"
	I0210 12:23:06.027150    5644 command_runner.go:130] ! I0210 12:22:01.669840       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="192.074947ms"
	I0210 12:23:06.027150    5644 command_runner.go:130] ! I0210 12:22:01.679112       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="47.103µs"
	I0210 12:23:06.027150    5644 command_runner.go:130] ! I0210 12:22:11.608842       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400"
	I0210 12:23:06.027150    5644 command_runner.go:130] ! I0210 12:22:49.026601       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-032400-m02"
	I0210 12:23:06.027150    5644 command_runner.go:130] ! I0210 12:22:49.027936       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400"
	I0210 12:23:06.027150    5644 command_runner.go:130] ! I0210 12:22:49.051398       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400"
	I0210 12:23:06.027150    5644 command_runner.go:130] ! I0210 12:22:51.552649       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:23:06.027150    5644 command_runner.go:130] ! I0210 12:22:51.561524       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400"
	I0210 12:23:06.027150    5644 command_runner.go:130] ! I0210 12:22:51.579437       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:23:06.027150    5644 command_runner.go:130] ! I0210 12:22:51.629083       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="17.615623ms"
	I0210 12:23:06.027150    5644 command_runner.go:130] ! I0210 12:22:51.629955       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="714.433µs"
	I0210 12:23:06.027150    5644 command_runner.go:130] ! I0210 12:22:56.656809       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:23:06.027150    5644 command_runner.go:130] ! I0210 12:23:04.379320       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="10.532877ms"
	I0210 12:23:06.027150    5644 command_runner.go:130] ! I0210 12:23:04.379580       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="104.602µs"
	I0210 12:23:06.027150    5644 command_runner.go:130] ! I0210 12:23:04.418725       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="58.001µs"
	I0210 12:23:06.027150    5644 command_runner.go:130] ! I0210 12:23:04.463938       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="16.341175ms"
	I0210 12:23:06.027150    5644 command_runner.go:130] ! I0210 12:23:04.464695       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="51.6µs"
	I0210 12:23:06.045707    5644 logs.go:123] Gathering logs for describe nodes ...
	I0210 12:23:06.045707    5644 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0210 12:23:06.395456    5644 command_runner.go:130] > Name:               multinode-032400
	I0210 12:23:06.395456    5644 command_runner.go:130] > Roles:              control-plane
	I0210 12:23:06.395875    5644 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0210 12:23:06.395875    5644 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0210 12:23:06.395875    5644 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0210 12:23:06.395930    5644 command_runner.go:130] >                     kubernetes.io/hostname=multinode-032400
	I0210 12:23:06.395930    5644 command_runner.go:130] >                     kubernetes.io/os=linux
	I0210 12:23:06.396006    5644 command_runner.go:130] >                     minikube.k8s.io/commit=a597502568cd649748018b4cfeb698a4b8b36160
	I0210 12:23:06.396006    5644 command_runner.go:130] >                     minikube.k8s.io/name=multinode-032400
	I0210 12:23:06.396006    5644 command_runner.go:130] >                     minikube.k8s.io/primary=true
	I0210 12:23:06.396066    5644 command_runner.go:130] >                     minikube.k8s.io/updated_at=2025_02_10T11_59_08_0700
	I0210 12:23:06.396126    5644 command_runner.go:130] >                     minikube.k8s.io/version=v1.35.0
	I0210 12:23:06.396171    5644 command_runner.go:130] >                     node-role.kubernetes.io/control-plane=
	I0210 12:23:06.396171    5644 command_runner.go:130] >                     node.kubernetes.io/exclude-from-external-load-balancers=
	I0210 12:23:06.396227    5644 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0210 12:23:06.396227    5644 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0210 12:23:06.396227    5644 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0210 12:23:06.396287    5644 command_runner.go:130] > CreationTimestamp:  Mon, 10 Feb 2025 11:59:02 +0000
	I0210 12:23:06.396287    5644 command_runner.go:130] > Taints:             <none>
	I0210 12:23:06.396287    5644 command_runner.go:130] > Unschedulable:      false
	I0210 12:23:06.396352    5644 command_runner.go:130] > Lease:
	I0210 12:23:06.396352    5644 command_runner.go:130] >   HolderIdentity:  multinode-032400
	I0210 12:23:06.396413    5644 command_runner.go:130] >   AcquireTime:     <unset>
	I0210 12:23:06.396413    5644 command_runner.go:130] >   RenewTime:       Mon, 10 Feb 2025 12:22:59 +0000
	I0210 12:23:06.396413    5644 command_runner.go:130] > Conditions:
	I0210 12:23:06.396478    5644 command_runner.go:130] >   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	I0210 12:23:06.396538    5644 command_runner.go:130] >   ----             ------  -----------------                 ------------------                ------                       -------
	I0210 12:23:06.396538    5644 command_runner.go:130] >   MemoryPressure   False   Mon, 10 Feb 2025 12:22:48 +0000   Mon, 10 Feb 2025 11:59:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	I0210 12:23:06.396602    5644 command_runner.go:130] >   DiskPressure     False   Mon, 10 Feb 2025 12:22:48 +0000   Mon, 10 Feb 2025 11:59:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	I0210 12:23:06.396662    5644 command_runner.go:130] >   PIDPressure      False   Mon, 10 Feb 2025 12:22:48 +0000   Mon, 10 Feb 2025 11:59:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	I0210 12:23:06.396725    5644 command_runner.go:130] >   Ready            True    Mon, 10 Feb 2025 12:22:48 +0000   Mon, 10 Feb 2025 12:22:48 +0000   KubeletReady                 kubelet is posting ready status
	I0210 12:23:06.396785    5644 command_runner.go:130] > Addresses:
	I0210 12:23:06.396785    5644 command_runner.go:130] >   InternalIP:  172.29.129.181
	I0210 12:23:06.396785    5644 command_runner.go:130] >   Hostname:    multinode-032400
	I0210 12:23:06.396851    5644 command_runner.go:130] > Capacity:
	I0210 12:23:06.396851    5644 command_runner.go:130] >   cpu:                2
	I0210 12:23:06.396912    5644 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0210 12:23:06.396912    5644 command_runner.go:130] >   hugepages-2Mi:      0
	I0210 12:23:06.396912    5644 command_runner.go:130] >   memory:             2164264Ki
	I0210 12:23:06.396976    5644 command_runner.go:130] >   pods:               110
	I0210 12:23:06.396976    5644 command_runner.go:130] > Allocatable:
	I0210 12:23:06.396976    5644 command_runner.go:130] >   cpu:                2
	I0210 12:23:06.397036    5644 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0210 12:23:06.397036    5644 command_runner.go:130] >   hugepages-2Mi:      0
	I0210 12:23:06.397101    5644 command_runner.go:130] >   memory:             2164264Ki
	I0210 12:23:06.397101    5644 command_runner.go:130] >   pods:               110
	I0210 12:23:06.397101    5644 command_runner.go:130] > System Info:
	I0210 12:23:06.397101    5644 command_runner.go:130] >   Machine ID:                 bd9d6a2450e6476eba748aa8fab044bb
	I0210 12:23:06.397163    5644 command_runner.go:130] >   System UUID:                43aa2284-9342-094e-a67f-da5d9d45fabd
	I0210 12:23:06.397226    5644 command_runner.go:130] >   Boot ID:                    f18b3f97-a44b-4ae9-ad96-af2bf29d6df2
	I0210 12:23:06.397226    5644 command_runner.go:130] >   Kernel Version:             5.10.207
	I0210 12:23:06.397226    5644 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0210 12:23:06.397286    5644 command_runner.go:130] >   Operating System:           linux
	I0210 12:23:06.397286    5644 command_runner.go:130] >   Architecture:               amd64
	I0210 12:23:06.397286    5644 command_runner.go:130] >   Container Runtime Version:  docker://27.4.0
	I0210 12:23:06.397353    5644 command_runner.go:130] >   Kubelet Version:            v1.32.1
	I0210 12:23:06.397414    5644 command_runner.go:130] >   Kube-Proxy Version:         v1.32.1
	I0210 12:23:06.397414    5644 command_runner.go:130] > PodCIDR:                      10.244.0.0/24
	I0210 12:23:06.397414    5644 command_runner.go:130] > PodCIDRs:                     10.244.0.0/24
	I0210 12:23:06.397471    5644 command_runner.go:130] > Non-terminated Pods:          (9 in total)
	I0210 12:23:06.397471    5644 command_runner.go:130] >   Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0210 12:23:06.397531    5644 command_runner.go:130] >   ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	I0210 12:23:06.397594    5644 command_runner.go:130] >   default                     busybox-58667487b6-8shfg                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	I0210 12:23:06.397594    5644 command_runner.go:130] >   kube-system                 coredns-668d6bf9bc-w8rr9                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     23m
	I0210 12:23:06.397655    5644 command_runner.go:130] >   kube-system                 etcd-multinode-032400                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         68s
	I0210 12:23:06.397718    5644 command_runner.go:130] >   kube-system                 kindnet-c2mb8                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      23m
	I0210 12:23:06.397718    5644 command_runner.go:130] >   kube-system                 kube-apiserver-multinode-032400             250m (12%)    0 (0%)      0 (0%)           0 (0%)         68s
	I0210 12:23:06.397777    5644 command_runner.go:130] >   kube-system                 kube-controller-manager-multinode-032400    200m (10%)    0 (0%)      0 (0%)           0 (0%)         23m
	I0210 12:23:06.397842    5644 command_runner.go:130] >   kube-system                 kube-proxy-rrh82                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	I0210 12:23:06.397842    5644 command_runner.go:130] >   kube-system                 kube-scheduler-multinode-032400             100m (5%)     0 (0%)      0 (0%)           0 (0%)         24m
	I0210 12:23:06.397903    5644 command_runner.go:130] >   kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	I0210 12:23:06.397966    5644 command_runner.go:130] > Allocated resources:
	I0210 12:23:06.397966    5644 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0210 12:23:06.397966    5644 command_runner.go:130] >   Resource           Requests     Limits
	I0210 12:23:06.398026    5644 command_runner.go:130] >   --------           --------     ------
	I0210 12:23:06.398090    5644 command_runner.go:130] >   cpu                850m (42%)   100m (5%)
	I0210 12:23:06.398090    5644 command_runner.go:130] >   memory             220Mi (10%)  220Mi (10%)
	I0210 12:23:06.398090    5644 command_runner.go:130] >   ephemeral-storage  0 (0%)       0 (0%)
	I0210 12:23:06.398150    5644 command_runner.go:130] >   hugepages-2Mi      0 (0%)       0 (0%)
	I0210 12:23:06.398150    5644 command_runner.go:130] > Events:
	I0210 12:23:06.398150    5644 command_runner.go:130] >   Type     Reason                   Age                From             Message
	I0210 12:23:06.398215    5644 command_runner.go:130] >   ----     ------                   ----               ----             -------
	I0210 12:23:06.398277    5644 command_runner.go:130] >   Normal   Starting                 23m                kube-proxy       
	I0210 12:23:06.398277    5644 command_runner.go:130] >   Normal   Starting                 65s                kube-proxy       
	I0210 12:23:06.398341    5644 command_runner.go:130] >   Normal   NodeHasSufficientMemory  24m (x8 over 24m)  kubelet          Node multinode-032400 status is now: NodeHasSufficientMemory
	I0210 12:23:06.398341    5644 command_runner.go:130] >   Normal   NodeHasNoDiskPressure    24m (x8 over 24m)  kubelet          Node multinode-032400 status is now: NodeHasNoDiskPressure
	I0210 12:23:06.398402    5644 command_runner.go:130] >   Normal   NodeHasSufficientPID     24m (x7 over 24m)  kubelet          Node multinode-032400 status is now: NodeHasSufficientPID
	I0210 12:23:06.398466    5644 command_runner.go:130] >   Normal   NodeHasSufficientPID     23m (x2 over 23m)  kubelet          Node multinode-032400 status is now: NodeHasSufficientPID
	I0210 12:23:06.398466    5644 command_runner.go:130] >   Normal   NodeHasSufficientMemory  23m (x2 over 23m)  kubelet          Node multinode-032400 status is now: NodeHasSufficientMemory
	I0210 12:23:06.398526    5644 command_runner.go:130] >   Normal   NodeHasNoDiskPressure    23m (x2 over 23m)  kubelet          Node multinode-032400 status is now: NodeHasNoDiskPressure
	I0210 12:23:06.398526    5644 command_runner.go:130] >   Normal   NodeAllocatableEnforced  23m                kubelet          Updated Node Allocatable limit across pods
	I0210 12:23:06.398590    5644 command_runner.go:130] >   Normal   Starting                 23m                kubelet          Starting kubelet.
	I0210 12:23:06.398590    5644 command_runner.go:130] >   Normal   RegisteredNode           23m                node-controller  Node multinode-032400 event: Registered Node multinode-032400 in Controller
	I0210 12:23:06.398651    5644 command_runner.go:130] >   Normal   NodeReady                23m                kubelet          Node multinode-032400 status is now: NodeReady
	I0210 12:23:06.398714    5644 command_runner.go:130] >   Normal   Starting                 74s                kubelet          Starting kubelet.
	I0210 12:23:06.398714    5644 command_runner.go:130] >   Normal   NodeAllocatableEnforced  74s                kubelet          Updated Node Allocatable limit across pods
	I0210 12:23:06.398774    5644 command_runner.go:130] >   Normal   NodeHasSufficientMemory  73s (x8 over 74s)  kubelet          Node multinode-032400 status is now: NodeHasSufficientMemory
	I0210 12:23:06.398837    5644 command_runner.go:130] >   Normal   NodeHasNoDiskPressure    73s (x8 over 74s)  kubelet          Node multinode-032400 status is now: NodeHasNoDiskPressure
	I0210 12:23:06.398837    5644 command_runner.go:130] >   Normal   NodeHasSufficientPID     73s (x7 over 74s)  kubelet          Node multinode-032400 status is now: NodeHasSufficientPID
	I0210 12:23:06.398897    5644 command_runner.go:130] >   Warning  Rebooted                 68s                kubelet          Node multinode-032400 has been rebooted, boot id: f18b3f97-a44b-4ae9-ad96-af2bf29d6df2
	I0210 12:23:06.398960    5644 command_runner.go:130] >   Normal   RegisteredNode           65s                node-controller  Node multinode-032400 event: Registered Node multinode-032400 in Controller
	I0210 12:23:06.398960    5644 command_runner.go:130] > Name:               multinode-032400-m02
	I0210 12:23:06.399020    5644 command_runner.go:130] > Roles:              <none>
	I0210 12:23:06.399020    5644 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0210 12:23:06.399020    5644 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0210 12:23:06.399020    5644 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0210 12:23:06.399084    5644 command_runner.go:130] >                     kubernetes.io/hostname=multinode-032400-m02
	I0210 12:23:06.399147    5644 command_runner.go:130] >                     kubernetes.io/os=linux
	I0210 12:23:06.399147    5644 command_runner.go:130] >                     minikube.k8s.io/commit=a597502568cd649748018b4cfeb698a4b8b36160
	I0210 12:23:06.399147    5644 command_runner.go:130] >                     minikube.k8s.io/name=multinode-032400
	I0210 12:23:06.399212    5644 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0210 12:23:06.399273    5644 command_runner.go:130] >                     minikube.k8s.io/updated_at=2025_02_10T12_02_24_0700
	I0210 12:23:06.399273    5644 command_runner.go:130] >                     minikube.k8s.io/version=v1.35.0
	I0210 12:23:06.399331    5644 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0210 12:23:06.399392    5644 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0210 12:23:06.399450    5644 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0210 12:23:06.399450    5644 command_runner.go:130] > CreationTimestamp:  Mon, 10 Feb 2025 12:02:24 +0000
	I0210 12:23:06.399450    5644 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0210 12:23:06.399511    5644 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0210 12:23:06.399511    5644 command_runner.go:130] > Unschedulable:      false
	I0210 12:23:06.399511    5644 command_runner.go:130] > Lease:
	I0210 12:23:06.399576    5644 command_runner.go:130] >   HolderIdentity:  multinode-032400-m02
	I0210 12:23:06.399576    5644 command_runner.go:130] >   AcquireTime:     <unset>
	I0210 12:23:06.399576    5644 command_runner.go:130] >   RenewTime:       Mon, 10 Feb 2025 12:18:56 +0000
	I0210 12:23:06.399638    5644 command_runner.go:130] > Conditions:
	I0210 12:23:06.399638    5644 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0210 12:23:06.399703    5644 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0210 12:23:06.399763    5644 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 10 Feb 2025 12:18:23 +0000   Mon, 10 Feb 2025 12:22:51 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0210 12:23:06.399763    5644 command_runner.go:130] >   DiskPressure     Unknown   Mon, 10 Feb 2025 12:18:23 +0000   Mon, 10 Feb 2025 12:22:51 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0210 12:23:06.399826    5644 command_runner.go:130] >   PIDPressure      Unknown   Mon, 10 Feb 2025 12:18:23 +0000   Mon, 10 Feb 2025 12:22:51 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0210 12:23:06.399886    5644 command_runner.go:130] >   Ready            Unknown   Mon, 10 Feb 2025 12:18:23 +0000   Mon, 10 Feb 2025 12:22:51 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0210 12:23:06.399886    5644 command_runner.go:130] > Addresses:
	I0210 12:23:06.399942    5644 command_runner.go:130] >   InternalIP:  172.29.143.51
	I0210 12:23:06.399942    5644 command_runner.go:130] >   Hostname:    multinode-032400-m02
	I0210 12:23:06.399942    5644 command_runner.go:130] > Capacity:
	I0210 12:23:06.399942    5644 command_runner.go:130] >   cpu:                2
	I0210 12:23:06.400002    5644 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0210 12:23:06.400002    5644 command_runner.go:130] >   hugepages-2Mi:      0
	I0210 12:23:06.400002    5644 command_runner.go:130] >   memory:             2164264Ki
	I0210 12:23:06.400067    5644 command_runner.go:130] >   pods:               110
	I0210 12:23:06.400067    5644 command_runner.go:130] > Allocatable:
	I0210 12:23:06.400067    5644 command_runner.go:130] >   cpu:                2
	I0210 12:23:06.400128    5644 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0210 12:23:06.400128    5644 command_runner.go:130] >   hugepages-2Mi:      0
	I0210 12:23:06.400185    5644 command_runner.go:130] >   memory:             2164264Ki
	I0210 12:23:06.400185    5644 command_runner.go:130] >   pods:               110
	I0210 12:23:06.400246    5644 command_runner.go:130] > System Info:
	I0210 12:23:06.400246    5644 command_runner.go:130] >   Machine ID:                 21b7ab6b12a24fe4a174e762e13ffd68
	I0210 12:23:06.400246    5644 command_runner.go:130] >   System UUID:                9f935ab2-d180-914c-86d6-dc00ab51b7e9
	I0210 12:23:06.400309    5644 command_runner.go:130] >   Boot ID:                    a25897cf-a5ca-424f-a707-4b03f1b1442d
	I0210 12:23:06.400309    5644 command_runner.go:130] >   Kernel Version:             5.10.207
	I0210 12:23:06.400309    5644 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0210 12:23:06.400375    5644 command_runner.go:130] >   Operating System:           linux
	I0210 12:23:06.400438    5644 command_runner.go:130] >   Architecture:               amd64
	I0210 12:23:06.400438    5644 command_runner.go:130] >   Container Runtime Version:  docker://27.4.0
	I0210 12:23:06.400438    5644 command_runner.go:130] >   Kubelet Version:            v1.32.1
	I0210 12:23:06.400498    5644 command_runner.go:130] >   Kube-Proxy Version:         v1.32.1
	I0210 12:23:06.400498    5644 command_runner.go:130] > PodCIDR:                      10.244.1.0/24
	I0210 12:23:06.400498    5644 command_runner.go:130] > PodCIDRs:                     10.244.1.0/24
	I0210 12:23:06.400563    5644 command_runner.go:130] > Non-terminated Pods:          (3 in total)
	I0210 12:23:06.400563    5644 command_runner.go:130] >   Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0210 12:23:06.400622    5644 command_runner.go:130] >   ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	I0210 12:23:06.400685    5644 command_runner.go:130] >   default                     busybox-58667487b6-4g8jw    0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	I0210 12:23:06.400685    5644 command_runner.go:130] >   kube-system                 kindnet-tv6gk               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      20m
	I0210 12:23:06.400746    5644 command_runner.go:130] >   kube-system                 kube-proxy-xltxj            0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0210 12:23:06.400746    5644 command_runner.go:130] > Allocated resources:
	I0210 12:23:06.400809    5644 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0210 12:23:06.400809    5644 command_runner.go:130] >   Resource           Requests   Limits
	I0210 12:23:06.400870    5644 command_runner.go:130] >   --------           --------   ------
	I0210 12:23:06.400870    5644 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0210 12:23:06.400870    5644 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0210 12:23:06.400934    5644 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0210 12:23:06.400934    5644 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0210 12:23:06.400995    5644 command_runner.go:130] > Events:
	I0210 12:23:06.400995    5644 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0210 12:23:06.400995    5644 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0210 12:23:06.401059    5644 command_runner.go:130] >   Normal  Starting                 20m                kube-proxy       
	I0210 12:23:06.401059    5644 command_runner.go:130] >   Normal  NodeHasSufficientMemory  20m (x2 over 20m)  kubelet          Node multinode-032400-m02 status is now: NodeHasSufficientMemory
	I0210 12:23:06.401119    5644 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    20m (x2 over 20m)  kubelet          Node multinode-032400-m02 status is now: NodeHasNoDiskPressure
	I0210 12:23:06.401183    5644 command_runner.go:130] >   Normal  NodeHasSufficientPID     20m (x2 over 20m)  kubelet          Node multinode-032400-m02 status is now: NodeHasSufficientPID
	I0210 12:23:06.401243    5644 command_runner.go:130] >   Normal  NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	I0210 12:23:06.401243    5644 command_runner.go:130] >   Normal  RegisteredNode           20m                node-controller  Node multinode-032400-m02 event: Registered Node multinode-032400-m02 in Controller
	I0210 12:23:06.401306    5644 command_runner.go:130] >   Normal  NodeReady                20m                kubelet          Node multinode-032400-m02 status is now: NodeReady
	I0210 12:23:06.401306    5644 command_runner.go:130] >   Normal  RegisteredNode           65s                node-controller  Node multinode-032400-m02 event: Registered Node multinode-032400-m02 in Controller
	I0210 12:23:06.401366    5644 command_runner.go:130] >   Normal  NodeNotReady             15s                node-controller  Node multinode-032400-m02 status is now: NodeNotReady
	I0210 12:23:06.401366    5644 command_runner.go:130] > Name:               multinode-032400-m03
	I0210 12:23:06.401430    5644 command_runner.go:130] > Roles:              <none>
	I0210 12:23:06.401430    5644 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0210 12:23:06.401430    5644 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0210 12:23:06.401790    5644 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0210 12:23:06.401790    5644 command_runner.go:130] >                     kubernetes.io/hostname=multinode-032400-m03
	I0210 12:23:06.401834    5644 command_runner.go:130] >                     kubernetes.io/os=linux
	I0210 12:23:06.401834    5644 command_runner.go:130] >                     minikube.k8s.io/commit=a597502568cd649748018b4cfeb698a4b8b36160
	I0210 12:23:06.401901    5644 command_runner.go:130] >                     minikube.k8s.io/name=multinode-032400
	I0210 12:23:06.401901    5644 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0210 12:23:06.401901    5644 command_runner.go:130] >                     minikube.k8s.io/updated_at=2025_02_10T12_17_31_0700
	I0210 12:23:06.401962    5644 command_runner.go:130] >                     minikube.k8s.io/version=v1.35.0
	I0210 12:23:06.402026    5644 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0210 12:23:06.402026    5644 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0210 12:23:06.402026    5644 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0210 12:23:06.402096    5644 command_runner.go:130] > CreationTimestamp:  Mon, 10 Feb 2025 12:17:31 +0000
	I0210 12:23:06.402096    5644 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0210 12:23:06.402154    5644 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0210 12:23:06.402154    5644 command_runner.go:130] > Unschedulable:      false
	I0210 12:23:06.402154    5644 command_runner.go:130] > Lease:
	I0210 12:23:06.402154    5644 command_runner.go:130] >   HolderIdentity:  multinode-032400-m03
	I0210 12:23:06.402223    5644 command_runner.go:130] >   AcquireTime:     <unset>
	I0210 12:23:06.402278    5644 command_runner.go:130] >   RenewTime:       Mon, 10 Feb 2025 12:18:32 +0000
	I0210 12:23:06.402278    5644 command_runner.go:130] > Conditions:
	I0210 12:23:06.402338    5644 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0210 12:23:06.402338    5644 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0210 12:23:06.402392    5644 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 10 Feb 2025 12:17:46 +0000   Mon, 10 Feb 2025 12:19:27 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0210 12:23:06.402480    5644 command_runner.go:130] >   DiskPressure     Unknown   Mon, 10 Feb 2025 12:17:46 +0000   Mon, 10 Feb 2025 12:19:27 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0210 12:23:06.402513    5644 command_runner.go:130] >   PIDPressure      Unknown   Mon, 10 Feb 2025 12:17:46 +0000   Mon, 10 Feb 2025 12:19:27 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0210 12:23:06.402567    5644 command_runner.go:130] >   Ready            Unknown   Mon, 10 Feb 2025 12:17:46 +0000   Mon, 10 Feb 2025 12:19:27 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0210 12:23:06.402626    5644 command_runner.go:130] > Addresses:
	I0210 12:23:06.402626    5644 command_runner.go:130] >   InternalIP:  172.29.129.10
	I0210 12:23:06.402789    5644 command_runner.go:130] >   Hostname:    multinode-032400-m03
	I0210 12:23:06.402789    5644 command_runner.go:130] > Capacity:
	I0210 12:23:06.402789    5644 command_runner.go:130] >   cpu:                2
	I0210 12:23:06.402789    5644 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0210 12:23:06.402789    5644 command_runner.go:130] >   hugepages-2Mi:      0
	I0210 12:23:06.402789    5644 command_runner.go:130] >   memory:             2164264Ki
	I0210 12:23:06.402789    5644 command_runner.go:130] >   pods:               110
	I0210 12:23:06.402789    5644 command_runner.go:130] > Allocatable:
	I0210 12:23:06.402789    5644 command_runner.go:130] >   cpu:                2
	I0210 12:23:06.402789    5644 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0210 12:23:06.402789    5644 command_runner.go:130] >   hugepages-2Mi:      0
	I0210 12:23:06.402789    5644 command_runner.go:130] >   memory:             2164264Ki
	I0210 12:23:06.402789    5644 command_runner.go:130] >   pods:               110
	I0210 12:23:06.402789    5644 command_runner.go:130] > System Info:
	I0210 12:23:06.402789    5644 command_runner.go:130] >   Machine ID:                 96be2fd73058434db5f7c29ebdc5358c
	I0210 12:23:06.402789    5644 command_runner.go:130] >   System UUID:                d4861806-0b5d-3746-b974-5781fc0e801c
	I0210 12:23:06.402789    5644 command_runner.go:130] >   Boot ID:                    e5a245dc-eb44-45ba-a280-4e410deb107b
	I0210 12:23:06.402789    5644 command_runner.go:130] >   Kernel Version:             5.10.207
	I0210 12:23:06.402789    5644 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0210 12:23:06.402789    5644 command_runner.go:130] >   Operating System:           linux
	I0210 12:23:06.402789    5644 command_runner.go:130] >   Architecture:               amd64
	I0210 12:23:06.402789    5644 command_runner.go:130] >   Container Runtime Version:  docker://27.4.0
	I0210 12:23:06.402789    5644 command_runner.go:130] >   Kubelet Version:            v1.32.1
	I0210 12:23:06.402789    5644 command_runner.go:130] >   Kube-Proxy Version:         v1.32.1
	I0210 12:23:06.402789    5644 command_runner.go:130] > PodCIDR:                      10.244.4.0/24
	I0210 12:23:06.402789    5644 command_runner.go:130] > PodCIDRs:                     10.244.4.0/24
	I0210 12:23:06.402789    5644 command_runner.go:130] > Non-terminated Pods:          (2 in total)
	I0210 12:23:06.402789    5644 command_runner.go:130] >   Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0210 12:23:06.402789    5644 command_runner.go:130] >   ---------                   ----                ------------  ----------  ---------------  -------------  ---
	I0210 12:23:06.402789    5644 command_runner.go:130] >   kube-system                 kindnet-jcmlf       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	I0210 12:23:06.402789    5644 command_runner.go:130] >   kube-system                 kube-proxy-tbtqd    0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	I0210 12:23:06.402789    5644 command_runner.go:130] > Allocated resources:
	I0210 12:23:06.402789    5644 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0210 12:23:06.402789    5644 command_runner.go:130] >   Resource           Requests   Limits
	I0210 12:23:06.402789    5644 command_runner.go:130] >   --------           --------   ------
	I0210 12:23:06.402789    5644 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0210 12:23:06.402789    5644 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0210 12:23:06.402789    5644 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0210 12:23:06.402789    5644 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0210 12:23:06.403338    5644 command_runner.go:130] > Events:
	I0210 12:23:06.403338    5644 command_runner.go:130] >   Type    Reason                   Age                    From             Message
	I0210 12:23:06.403412    5644 command_runner.go:130] >   ----    ------                   ----                   ----             -------
	I0210 12:23:06.403412    5644 command_runner.go:130] >   Normal  Starting                 15m                    kube-proxy       
	I0210 12:23:06.403467    5644 command_runner.go:130] >   Normal  Starting                 5m31s                  kube-proxy       
	I0210 12:23:06.403467    5644 command_runner.go:130] >   Normal  NodeAllocatableEnforced  16m                    kubelet          Updated Node Allocatable limit across pods
	I0210 12:23:06.403467    5644 command_runner.go:130] >   Normal  NodeHasSufficientMemory  16m (x2 over 16m)      kubelet          Node multinode-032400-m03 status is now: NodeHasSufficientMemory
	I0210 12:23:06.403467    5644 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    16m (x2 over 16m)      kubelet          Node multinode-032400-m03 status is now: NodeHasNoDiskPressure
	I0210 12:23:06.403467    5644 command_runner.go:130] >   Normal  NodeHasSufficientPID     16m (x2 over 16m)      kubelet          Node multinode-032400-m03 status is now: NodeHasSufficientPID
	I0210 12:23:06.403467    5644 command_runner.go:130] >   Normal  NodeReady                15m                    kubelet          Node multinode-032400-m03 status is now: NodeReady
	I0210 12:23:06.403467    5644 command_runner.go:130] >   Normal  NodeHasSufficientMemory  5m35s (x2 over 5m36s)  kubelet          Node multinode-032400-m03 status is now: NodeHasSufficientMemory
	I0210 12:23:06.403467    5644 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    5m35s (x2 over 5m36s)  kubelet          Node multinode-032400-m03 status is now: NodeHasNoDiskPressure
	I0210 12:23:06.403467    5644 command_runner.go:130] >   Normal  NodeHasSufficientPID     5m35s (x2 over 5m36s)  kubelet          Node multinode-032400-m03 status is now: NodeHasSufficientPID
	I0210 12:23:06.403467    5644 command_runner.go:130] >   Normal  NodeAllocatableEnforced  5m35s                  kubelet          Updated Node Allocatable limit across pods
	I0210 12:23:06.403467    5644 command_runner.go:130] >   Normal  RegisteredNode           5m34s                  node-controller  Node multinode-032400-m03 event: Registered Node multinode-032400-m03 in Controller
	I0210 12:23:06.403467    5644 command_runner.go:130] >   Normal  NodeReady                5m20s                  kubelet          Node multinode-032400-m03 status is now: NodeReady
	I0210 12:23:06.403467    5644 command_runner.go:130] >   Normal  NodeNotReady             3m39s                  node-controller  Node multinode-032400-m03 status is now: NodeNotReady
	I0210 12:23:06.403467    5644 command_runner.go:130] >   Normal  RegisteredNode           65s                    node-controller  Node multinode-032400-m03 event: Registered Node multinode-032400-m03 in Controller
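	The describe output above shows every condition on multinode-032400-m03 stuck at Unknown with reason NodeStatusUnknown ("Kubelet stopped posting node status"), followed by a NodeNotReady event from the node-controller. A minimal client-go sketch for spotting that state programmatically — illustrative only, not part of the test suite, and assuming a reachable kubeconfig at the default path:

	    // nodeready.go — a minimal sketch (not from the minikube test suite),
	    // assuming a kubeconfig at ~/.kube/config. Prints nodes whose Ready
	    // condition is not True, which is how multinode-032400-m03 appears
	    // above (Status Unknown, reason NodeStatusUnknown).
	    package main

	    import (
	    	"context"
	    	"fmt"
	    	"path/filepath"

	    	corev1 "k8s.io/api/core/v1"
	    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	    	"k8s.io/client-go/kubernetes"
	    	"k8s.io/client-go/tools/clientcmd"
	    	"k8s.io/client-go/util/homedir"
	    )

	    func main() {
	    	cfg, err := clientcmd.BuildConfigFromFlags("",
	    		filepath.Join(homedir.HomeDir(), ".kube", "config"))
	    	if err != nil {
	    		panic(err)
	    	}
	    	client := kubernetes.NewForConfigOrDie(cfg)

	    	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	    	if err != nil {
	    		panic(err)
	    	}
	    	for _, n := range nodes.Items {
	    		for _, c := range n.Status.Conditions {
	    			// The Ready condition mirrors the kubectl describe table above.
	    			if c.Type == corev1.NodeReady && c.Status != corev1.ConditionTrue {
	    				fmt.Printf("%s Ready=%s reason=%s: %s\n",
	    					n.Name, c.Status, c.Reason, c.Message)
	    			}
	    		}
	    	}
	    }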
	I0210 12:23:06.412778    5644 logs.go:123] Gathering logs for coredns [c5b854dbb912] ...
	I0210 12:23:06.412778    5644 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5b854dbb912"
	I0210 12:23:06.444858    5644 command_runner.go:130] > .:53
	I0210 12:23:06.445192    5644 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = fef4eccd4948b7d76fb3b866fead119cfbf3f792b566bc1c23dd6fceadf676c5da93bdd173179090b2c74bc875a74fc76ef5e51dcb6a910d6a8b189470a4fe6b
	I0210 12:23:06.445192    5644 command_runner.go:130] > CoreDNS-1.11.3
	I0210 12:23:06.445192    5644 command_runner.go:130] > linux/amd64, go1.21.11, a6338e9
	I0210 12:23:06.445254    5644 command_runner.go:130] > [INFO] 127.0.0.1:57159 - 43532 "HINFO IN 6094843902663837130.722983224060727812. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.056926603s
	I0210 12:23:06.445254    5644 command_runner.go:130] > [INFO] 10.244.1.2:54851 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000385004s
	I0210 12:23:06.445254    5644 command_runner.go:130] > [INFO] 10.244.1.2:36917 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.071166415s
	I0210 12:23:06.445254    5644 command_runner.go:130] > [INFO] 10.244.1.2:35134 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.03235507s
	I0210 12:23:06.445317    5644 command_runner.go:130] > [INFO] 10.244.1.2:37507 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.161129695s
	I0210 12:23:06.445317    5644 command_runner.go:130] > [INFO] 10.244.0.3:55555 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000265804s
	I0210 12:23:06.445317    5644 command_runner.go:130] > [INFO] 10.244.0.3:44984 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000263303s
	I0210 12:23:06.445356    5644 command_runner.go:130] > [INFO] 10.244.0.3:33618 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.000192703s
	I0210 12:23:06.445356    5644 command_runner.go:130] > [INFO] 10.244.0.3:33701 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.000137201s
	I0210 12:23:06.445356    5644 command_runner.go:130] > [INFO] 10.244.1.2:48882 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000140601s
	I0210 12:23:06.445356    5644 command_runner.go:130] > [INFO] 10.244.1.2:59416 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.037067822s
	I0210 12:23:06.445411    5644 command_runner.go:130] > [INFO] 10.244.1.2:37164 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000261703s
	I0210 12:23:06.445411    5644 command_runner.go:130] > [INFO] 10.244.1.2:47541 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000172402s
	I0210 12:23:06.445554    5644 command_runner.go:130] > [INFO] 10.244.1.2:46192 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.033005976s
	I0210 12:23:06.445592    5644 command_runner.go:130] > [INFO] 10.244.1.2:33821 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000127301s
	I0210 12:23:06.445634    5644 command_runner.go:130] > [INFO] 10.244.1.2:35703 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000116001s
	I0210 12:23:06.445675    5644 command_runner.go:130] > [INFO] 10.244.1.2:37192 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000173702s
	I0210 12:23:06.445675    5644 command_runner.go:130] > [INFO] 10.244.0.3:49175 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000188802s
	I0210 12:23:06.445675    5644 command_runner.go:130] > [INFO] 10.244.0.3:55342 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000074701s
	I0210 12:23:06.445752    5644 command_runner.go:130] > [INFO] 10.244.0.3:52814 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000156002s
	I0210 12:23:06.445786    5644 command_runner.go:130] > [INFO] 10.244.0.3:36559 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000108801s
	I0210 12:23:06.445786    5644 command_runner.go:130] > [INFO] 10.244.0.3:35829 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0000551s
	I0210 12:23:06.445817    5644 command_runner.go:130] > [INFO] 10.244.0.3:38348 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000180702s
	I0210 12:23:06.445817    5644 command_runner.go:130] > [INFO] 10.244.0.3:39722 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000109501s
	I0210 12:23:06.445817    5644 command_runner.go:130] > [INFO] 10.244.0.3:40924 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000201202s
	I0210 12:23:06.445817    5644 command_runner.go:130] > [INFO] 10.244.1.2:34735 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000125801s
	I0210 12:23:06.445817    5644 command_runner.go:130] > [INFO] 10.244.1.2:36088 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000233303s
	I0210 12:23:06.445817    5644 command_runner.go:130] > [INFO] 10.244.1.2:55464 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000190403s
	I0210 12:23:06.445817    5644 command_runner.go:130] > [INFO] 10.244.1.2:57911 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000176202s
	I0210 12:23:06.445817    5644 command_runner.go:130] > [INFO] 10.244.0.3:34977 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000276903s
	I0210 12:23:06.445817    5644 command_runner.go:130] > [INFO] 10.244.0.3:60203 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000181802s
	I0210 12:23:06.445817    5644 command_runner.go:130] > [INFO] 10.244.0.3:40189 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000167902s
	I0210 12:23:06.445817    5644 command_runner.go:130] > [INFO] 10.244.0.3:59008 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000133001s
	I0210 12:23:06.445817    5644 command_runner.go:130] > [INFO] 10.244.1.2:56936 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000191602s
	I0210 12:23:06.445817    5644 command_runner.go:130] > [INFO] 10.244.1.2:36536 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000153201s
	I0210 12:23:06.445817    5644 command_runner.go:130] > [INFO] 10.244.1.2:38856 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000169502s
	I0210 12:23:06.445817    5644 command_runner.go:130] > [INFO] 10.244.1.2:53005 - 5 "PTR IN 1.128.29.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000186102s
	I0210 12:23:06.445817    5644 command_runner.go:130] > [INFO] 10.244.0.3:43643 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00092191s
	I0210 12:23:06.445817    5644 command_runner.go:130] > [INFO] 10.244.0.3:44109 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000341503s
	I0210 12:23:06.445817    5644 command_runner.go:130] > [INFO] 10.244.0.3:37196 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000095701s
	I0210 12:23:06.445817    5644 command_runner.go:130] > [INFO] 10.244.0.3:33917 - 5 "PTR IN 1.128.29.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000152102s
	I0210 12:23:06.445817    5644 command_runner.go:130] > [INFO] SIGTERM: Shutting down servers then terminating
	I0210 12:23:06.445817    5644 command_runner.go:130] > [INFO] plugin/health: Going into lameduck mode for 5s
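	The CoreDNS query log above shows the usual ndots search-path expansion: kubernetes.default is first tried as kubernetes.default.default.svc.cluster.local and kubernetes.default.svc.cluster.local, so the interleaved NXDOMAIN answers are expected rather than failures. A small triage sketch — illustrative only — that tallies response codes from a captured log on stdin, so the expected NXDOMAINs can be separated from anything worth investigating:

	    // logtriage.go — illustrative sketch, not part of the minikube test
	    // suite. Counts rcodes in CoreDNS query-log lines read from stdin.
	    package main

	    import (
	    	"bufio"
	    	"fmt"
	    	"os"
	    	"strings"
	    )

	    func main() {
	    	counts := map[string]int{}
	    	sc := bufio.NewScanner(os.Stdin)
	    	for sc.Scan() {
	    		line := sc.Text()
	    		// A query-log line carries its rcode between spaces, e.g.
	    		// `... udp 41 false 512" NOERROR qr,aa,rd 116 0.000385004s`.
	    		for _, rc := range []string{"NOERROR", "NXDOMAIN", "SERVFAIL", "REFUSED"} {
	    			if strings.Contains(line, " "+rc+" ") {
	    				counts[rc]++
	    				break
	    			}
	    		}
	    	}
	    	if err := sc.Err(); err != nil {
	    		fmt.Fprintln(os.Stderr, err)
	    		os.Exit(1)
	    	}
	    	fmt.Println(counts)
	    }

	Fed with the same command the harness runs (docker logs --tail 400 c5b854dbb912), this would report mostly NOERROR plus the search-path NXDOMAINs seen above.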
	I0210 12:23:06.449513    5644 logs.go:123] Gathering logs for kube-proxy [148309413de8] ...
	I0210 12:23:06.449513    5644 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 148309413de8"
	I0210 12:23:06.484521    5644 command_runner.go:130] ! I0210 11:59:18.625067       1 server_linux.go:66] "Using iptables proxy"
	I0210 12:23:06.484571    5644 command_runner.go:130] ! E0210 11:59:18.658116       1 proxier.go:733] "Error cleaning up nftables rules" err=<
	I0210 12:23:06.484627    5644 command_runner.go:130] ! 	could not run nftables command: /dev/stdin:1:1-24: Error: Could not process rule: Operation not supported
	I0210 12:23:06.484627    5644 command_runner.go:130] ! 	add table ip kube-proxy
	I0210 12:23:06.484660    5644 command_runner.go:130] ! 	^^^^^^^^^^^^^^^^^^^^^^^^
	I0210 12:23:06.484660    5644 command_runner.go:130] !  >
	I0210 12:23:06.484660    5644 command_runner.go:130] ! E0210 11:59:18.694686       1 proxier.go:733] "Error cleaning up nftables rules" err=<
	I0210 12:23:06.484660    5644 command_runner.go:130] ! 	could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
	I0210 12:23:06.484711    5644 command_runner.go:130] ! 	add table ip6 kube-proxy
	I0210 12:23:06.484743    5644 command_runner.go:130] ! 	^^^^^^^^^^^^^^^^^^^^^^^^^
	I0210 12:23:06.484743    5644 command_runner.go:130] !  >
	I0210 12:23:06.484808    5644 command_runner.go:130] ! I0210 11:59:19.251769       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["172.29.136.201"]
	I0210 12:23:06.484808    5644 command_runner.go:130] ! E0210 11:59:19.252111       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0210 12:23:06.484847    5644 command_runner.go:130] ! I0210 11:59:19.312427       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0210 12:23:06.484921    5644 command_runner.go:130] ! I0210 11:59:19.312556       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0210 12:23:06.484921    5644 command_runner.go:130] ! I0210 11:59:19.312585       1 server_linux.go:170] "Using iptables Proxier"
	I0210 12:23:06.484962    5644 command_runner.go:130] ! I0210 11:59:19.317423       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0210 12:23:06.485002    5644 command_runner.go:130] ! I0210 11:59:19.318586       1 server.go:497] "Version info" version="v1.32.1"
	I0210 12:23:06.485002    5644 command_runner.go:130] ! I0210 11:59:19.318681       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0210 12:23:06.485042    5644 command_runner.go:130] ! I0210 11:59:19.321084       1 config.go:199] "Starting service config controller"
	I0210 12:23:06.485042    5644 command_runner.go:130] ! I0210 11:59:19.321134       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0210 12:23:06.485081    5644 command_runner.go:130] ! I0210 11:59:19.321326       1 config.go:105] "Starting endpoint slice config controller"
	I0210 12:23:06.485121    5644 command_runner.go:130] ! I0210 11:59:19.321339       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0210 12:23:06.485161    5644 command_runner.go:130] ! I0210 11:59:19.322061       1 config.go:329] "Starting node config controller"
	I0210 12:23:06.485161    5644 command_runner.go:130] ! I0210 11:59:19.322091       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0210 12:23:06.485161    5644 command_runner.go:130] ! I0210 11:59:19.421972       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0210 12:23:06.485161    5644 command_runner.go:130] ! I0210 11:59:19.422030       1 shared_informer.go:320] Caches are synced for service config
	I0210 12:23:06.485161    5644 command_runner.go:130] ! I0210 11:59:19.423298       1 shared_informer.go:320] Caches are synced for node config
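	The kube-proxy log above is healthy despite the two nftables errors: the best-effort cleanup of stale nftables rules fails because the Buildroot guest kernel rejects them ("Operation not supported"), and kube-proxy proceeds in single-stack IPv4 iptables mode. The "Waiting for caches to sync" / "Caches are synced" pairs are the standard client-go shared-informer startup handshake; a generic sketch of that pattern follows, with illustrative names and resync period not taken from kube-proxy itself:

	    // cachesync.go — a generic client-go sketch of the informer startup
	    // handshake logged by kube-proxy above. Assumes a kubeconfig at the
	    // default path; the 30s resync period is illustrative.
	    package main

	    import (
	    	"context"
	    	"fmt"
	    	"path/filepath"
	    	"time"

	    	"k8s.io/client-go/informers"
	    	"k8s.io/client-go/kubernetes"
	    	"k8s.io/client-go/tools/cache"
	    	"k8s.io/client-go/tools/clientcmd"
	    	"k8s.io/client-go/util/homedir"
	    )

	    func main() {
	    	cfg, err := clientcmd.BuildConfigFromFlags("",
	    		filepath.Join(homedir.HomeDir(), ".kube", "config"))
	    	if err != nil {
	    		panic(err)
	    	}
	    	client := kubernetes.NewForConfigOrDie(cfg)

	    	factory := informers.NewSharedInformerFactory(client, 30*time.Second)
	    	svc := factory.Core().V1().Services().Informer()

	    	ctx, cancel := context.WithCancel(context.Background())
	    	defer cancel()

	    	factory.Start(ctx.Done()) // start the informers' watch loops

	    	// Block until the local cache mirrors the API server — the same
	    	// transition the kube-proxy log records before it begins proxying.
	    	if !cache.WaitForCacheSync(ctx.Done(), svc.HasSynced) {
	    		panic("caches never synced")
	    	}
	    	fmt.Println("caches are synced for service config")
	    }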
	I0210 12:23:06.486869    5644 logs.go:123] Gathering logs for kindnet [4439940fa5f4] ...
	I0210 12:23:06.486901    5644 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4439940fa5f4"
	I0210 12:23:06.526784    5644 command_runner.go:130] ! I0210 12:08:30.445716       1 main.go:301] handling current node
	I0210 12:23:06.526784    5644 command_runner.go:130] ! I0210 12:08:30.445736       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.526881    5644 command_runner.go:130] ! I0210 12:08:30.445743       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.526881    5644 command_runner.go:130] ! I0210 12:08:30.446276       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:06.526881    5644 command_runner.go:130] ! I0210 12:08:30.446402       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:06.526881    5644 command_runner.go:130] ! I0210 12:08:40.446484       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:06.526881    5644 command_runner.go:130] ! I0210 12:08:40.446649       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:06.526881    5644 command_runner.go:130] ! I0210 12:08:40.447051       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.526881    5644 command_runner.go:130] ! I0210 12:08:40.447089       1 main.go:301] handling current node
	I0210 12:23:06.526971    5644 command_runner.go:130] ! I0210 12:08:40.447173       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.526971    5644 command_runner.go:130] ! I0210 12:08:40.447202       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.526971    5644 command_runner.go:130] ! I0210 12:08:50.445921       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.527022    5644 command_runner.go:130] ! I0210 12:08:50.445988       1 main.go:301] handling current node
	I0210 12:23:06.527022    5644 command_runner.go:130] ! I0210 12:08:50.446008       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.527022    5644 command_runner.go:130] ! I0210 12:08:50.446015       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.527022    5644 command_runner.go:130] ! I0210 12:08:50.446206       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:06.527022    5644 command_runner.go:130] ! I0210 12:08:50.446217       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:06.527022    5644 command_runner.go:130] ! I0210 12:09:00.446480       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.527022    5644 command_runner.go:130] ! I0210 12:09:00.446617       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.527022    5644 command_runner.go:130] ! I0210 12:09:00.446931       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:06.527022    5644 command_runner.go:130] ! I0210 12:09:00.446947       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:06.527022    5644 command_runner.go:130] ! I0210 12:09:00.447078       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.527022    5644 command_runner.go:130] ! I0210 12:09:00.447087       1 main.go:301] handling current node
	I0210 12:23:06.527022    5644 command_runner.go:130] ! I0210 12:09:10.445597       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.527022    5644 command_runner.go:130] ! I0210 12:09:10.445645       1 main.go:301] handling current node
	I0210 12:23:06.527022    5644 command_runner.go:130] ! I0210 12:09:10.445665       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.527022    5644 command_runner.go:130] ! I0210 12:09:10.445671       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.527022    5644 command_runner.go:130] ! I0210 12:09:10.446612       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:06.527022    5644 command_runner.go:130] ! I0210 12:09:10.447083       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:06.527022    5644 command_runner.go:130] ! I0210 12:09:20.451891       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.527022    5644 command_runner.go:130] ! I0210 12:09:20.451928       1 main.go:301] handling current node
	I0210 12:23:06.527022    5644 command_runner.go:130] ! I0210 12:09:20.452043       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.527022    5644 command_runner.go:130] ! I0210 12:09:20.452054       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.527022    5644 command_runner.go:130] ! I0210 12:09:20.452219       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:06.527022    5644 command_runner.go:130] ! I0210 12:09:20.452226       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:06.527022    5644 command_runner.go:130] ! I0210 12:09:30.445685       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.527022    5644 command_runner.go:130] ! I0210 12:09:30.445780       1 main.go:301] handling current node
	I0210 12:23:06.527022    5644 command_runner.go:130] ! I0210 12:09:30.445924       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.527022    5644 command_runner.go:130] ! I0210 12:09:30.445945       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.527022    5644 command_runner.go:130] ! I0210 12:09:30.446110       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:06.527022    5644 command_runner.go:130] ! I0210 12:09:30.446136       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:06.527022    5644 command_runner.go:130] ! I0210 12:09:40.446044       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.527022    5644 command_runner.go:130] ! I0210 12:09:40.446146       1 main.go:301] handling current node
	I0210 12:23:06.527022    5644 command_runner.go:130] ! I0210 12:09:40.446259       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.527022    5644 command_runner.go:130] ! I0210 12:09:40.446288       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.527546    5644 command_runner.go:130] ! I0210 12:09:40.446677       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:06.527546    5644 command_runner.go:130] ! I0210 12:09:40.446692       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:06.527617    5644 command_runner.go:130] ! I0210 12:09:50.449867       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.527617    5644 command_runner.go:130] ! I0210 12:09:50.449979       1 main.go:301] handling current node
	I0210 12:23:06.527656    5644 command_runner.go:130] ! I0210 12:09:50.450078       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.527656    5644 command_runner.go:130] ! I0210 12:09:50.450121       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.527656    5644 command_runner.go:130] ! I0210 12:09:50.450322       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:06.527656    5644 command_runner.go:130] ! I0210 12:09:50.450372       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:06.527656    5644 command_runner.go:130] ! I0210 12:10:00.446642       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:06.527656    5644 command_runner.go:130] ! I0210 12:10:00.446769       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:06.527739    5644 command_runner.go:130] ! I0210 12:10:00.447234       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.527739    5644 command_runner.go:130] ! I0210 12:10:00.447254       1 main.go:301] handling current node
	I0210 12:23:06.527739    5644 command_runner.go:130] ! I0210 12:10:00.447269       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.527739    5644 command_runner.go:130] ! I0210 12:10:00.447275       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.527739    5644 command_runner.go:130] ! I0210 12:10:10.445515       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.527828    5644 command_runner.go:130] ! I0210 12:10:10.445682       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.527828    5644 command_runner.go:130] ! I0210 12:10:10.446184       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:06.527828    5644 command_runner.go:130] ! I0210 12:10:10.446223       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:06.527828    5644 command_runner.go:130] ! I0210 12:10:10.446709       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.527828    5644 command_runner.go:130] ! I0210 12:10:10.447034       1 main.go:301] handling current node
	I0210 12:23:06.527916    5644 command_runner.go:130] ! I0210 12:10:20.446409       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.527916    5644 command_runner.go:130] ! I0210 12:10:20.446529       1 main.go:301] handling current node
	I0210 12:23:06.527916    5644 command_runner.go:130] ! I0210 12:10:20.446553       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.527916    5644 command_runner.go:130] ! I0210 12:10:20.446563       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.527993    5644 command_runner.go:130] ! I0210 12:10:20.446763       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:06.527993    5644 command_runner.go:130] ! I0210 12:10:20.446790       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:06.527993    5644 command_runner.go:130] ! I0210 12:10:30.446373       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.528070    5644 command_runner.go:130] ! I0210 12:10:30.446482       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.528070    5644 command_runner.go:130] ! I0210 12:10:30.446672       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:06.528070    5644 command_runner.go:130] ! I0210 12:10:30.446700       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:06.528070    5644 command_runner.go:130] ! I0210 12:10:30.446792       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.528148    5644 command_runner.go:130] ! I0210 12:10:30.447014       1 main.go:301] handling current node
	I0210 12:23:06.528148    5644 command_runner.go:130] ! I0210 12:10:40.454509       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.528148    5644 command_runner.go:130] ! I0210 12:10:40.454636       1 main.go:301] handling current node
	I0210 12:23:06.528148    5644 command_runner.go:130] ! I0210 12:10:40.454674       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.528148    5644 command_runner.go:130] ! I0210 12:10:40.454863       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.528225    5644 command_runner.go:130] ! I0210 12:10:40.455160       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:06.528225    5644 command_runner.go:130] ! I0210 12:10:40.455261       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:06.528225    5644 command_runner.go:130] ! I0210 12:10:50.449160       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.528225    5644 command_runner.go:130] ! I0210 12:10:50.449355       1 main.go:301] handling current node
	I0210 12:23:06.528302    5644 command_runner.go:130] ! I0210 12:10:50.449395       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.528302    5644 command_runner.go:130] ! I0210 12:10:50.449538       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.528302    5644 command_runner.go:130] ! I0210 12:10:50.450354       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:06.528302    5644 command_runner.go:130] ! I0210 12:10:50.450448       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:06.528379    5644 command_runner.go:130] ! I0210 12:11:00.445904       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:06.528379    5644 command_runner.go:130] ! I0210 12:11:00.446062       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:06.528379    5644 command_runner.go:130] ! I0210 12:11:00.446602       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.528379    5644 command_runner.go:130] ! I0210 12:11:00.446700       1 main.go:301] handling current node
	I0210 12:23:06.528379    5644 command_runner.go:130] ! I0210 12:11:00.446821       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.528456    5644 command_runner.go:130] ! I0210 12:11:00.446837       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.528532    5644 command_runner.go:130] ! I0210 12:11:10.453595       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.528532    5644 command_runner.go:130] ! I0210 12:11:10.453634       1 main.go:301] handling current node
	I0210 12:23:06.528532    5644 command_runner.go:130] ! I0210 12:11:10.453652       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.528532    5644 command_runner.go:130] ! I0210 12:11:10.453660       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.528609    5644 command_runner.go:130] ! I0210 12:11:10.454135       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:06.528609    5644 command_runner.go:130] ! I0210 12:11:10.454241       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:06.528609    5644 command_runner.go:130] ! I0210 12:11:20.446533       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:06.528686    5644 command_runner.go:130] ! I0210 12:11:20.446903       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:06.528686    5644 command_runner.go:130] ! I0210 12:11:20.447462       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.528686    5644 command_runner.go:130] ! I0210 12:11:20.447548       1 main.go:301] handling current node
	I0210 12:23:06.528686    5644 command_runner.go:130] ! I0210 12:11:20.447565       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.528686    5644 command_runner.go:130] ! I0210 12:11:20.447572       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.528762    5644 command_runner.go:130] ! I0210 12:11:30.445620       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.528762    5644 command_runner.go:130] ! I0210 12:11:30.445748       1 main.go:301] handling current node
	I0210 12:23:06.528762    5644 command_runner.go:130] ! I0210 12:11:30.445870       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.528762    5644 command_runner.go:130] ! I0210 12:11:30.445907       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.528839    5644 command_runner.go:130] ! I0210 12:11:30.446320       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:06.528839    5644 command_runner.go:130] ! I0210 12:11:30.446414       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:06.528839    5644 command_runner.go:130] ! I0210 12:11:40.446346       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.528839    5644 command_runner.go:130] ! I0210 12:11:40.446417       1 main.go:301] handling current node
	I0210 12:23:06.528916    5644 command_runner.go:130] ! I0210 12:11:40.446436       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.528916    5644 command_runner.go:130] ! I0210 12:11:40.446443       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.528916    5644 command_runner.go:130] ! I0210 12:11:40.446780       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:06.528916    5644 command_runner.go:130] ! I0210 12:11:40.446846       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:06.528992    5644 command_runner.go:130] ! I0210 12:11:50.447155       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:06.528992    5644 command_runner.go:130] ! I0210 12:11:50.447207       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:06.528992    5644 command_runner.go:130] ! I0210 12:11:50.447630       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.528992    5644 command_runner.go:130] ! I0210 12:11:50.447699       1 main.go:301] handling current node
	I0210 12:23:06.528992    5644 command_runner.go:130] ! I0210 12:11:50.447842       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.529069    5644 command_runner.go:130] ! I0210 12:11:50.447929       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.529069    5644 command_runner.go:130] ! I0210 12:12:00.449885       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.529069    5644 command_runner.go:130] ! I0210 12:12:00.450002       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.529069    5644 command_runner.go:130] ! I0210 12:12:00.450294       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:06.529145    5644 command_runner.go:130] ! I0210 12:12:00.450490       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:06.529145    5644 command_runner.go:130] ! I0210 12:12:00.450618       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.529145    5644 command_runner.go:130] ! I0210 12:12:00.450627       1 main.go:301] handling current node
	I0210 12:23:06.529145    5644 command_runner.go:130] ! I0210 12:12:10.449160       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.529223    5644 command_runner.go:130] ! I0210 12:12:10.449228       1 main.go:301] handling current node
	I0210 12:23:06.529223    5644 command_runner.go:130] ! I0210 12:12:10.449260       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.529223    5644 command_runner.go:130] ! I0210 12:12:10.449282       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.529299    5644 command_runner.go:130] ! I0210 12:12:10.449463       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:06.529299    5644 command_runner.go:130] ! I0210 12:12:10.449474       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:06.529299    5644 command_runner.go:130] ! I0210 12:12:20.447518       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.529299    5644 command_runner.go:130] ! I0210 12:12:20.447655       1 main.go:301] handling current node
	I0210 12:23:06.529299    5644 command_runner.go:130] ! I0210 12:12:20.447676       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.529377    5644 command_runner.go:130] ! I0210 12:12:20.447684       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.529377    5644 command_runner.go:130] ! I0210 12:12:20.448046       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:06.529377    5644 command_runner.go:130] ! I0210 12:12:20.448157       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:06.529377    5644 command_runner.go:130] ! I0210 12:12:30.446585       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.529453    5644 command_runner.go:130] ! I0210 12:12:30.446758       1 main.go:301] handling current node
	I0210 12:23:06.529453    5644 command_runner.go:130] ! I0210 12:12:30.446779       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.529453    5644 command_runner.go:130] ! I0210 12:12:30.446786       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.529530    5644 command_runner.go:130] ! I0210 12:12:30.447218       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:06.529530    5644 command_runner.go:130] ! I0210 12:12:30.447298       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:06.529530    5644 command_runner.go:130] ! I0210 12:12:40.445769       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.529530    5644 command_runner.go:130] ! I0210 12:12:40.445848       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.529607    5644 command_runner.go:130] ! I0210 12:12:40.446043       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:06.529607    5644 command_runner.go:130] ! I0210 12:12:40.446125       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:06.529607    5644 command_runner.go:130] ! I0210 12:12:40.446266       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.529607    5644 command_runner.go:130] ! I0210 12:12:40.446279       1 main.go:301] handling current node
	I0210 12:23:06.529607    5644 command_runner.go:130] ! I0210 12:12:50.446416       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.529684    5644 command_runner.go:130] ! I0210 12:12:50.446515       1 main.go:301] handling current node
	I0210 12:23:06.529684    5644 command_runner.go:130] ! I0210 12:12:50.446540       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.529684    5644 command_runner.go:130] ! I0210 12:12:50.446549       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.529684    5644 command_runner.go:130] ! I0210 12:12:50.447110       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:06.529761    5644 command_runner.go:130] ! I0210 12:12:50.447222       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:06.529761    5644 command_runner.go:130] ! I0210 12:13:00.445595       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.529761    5644 command_runner.go:130] ! I0210 12:13:00.445741       1 main.go:301] handling current node
	I0210 12:23:06.529761    5644 command_runner.go:130] ! I0210 12:13:00.445762       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.529761    5644 command_runner.go:130] ! I0210 12:13:00.445770       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.529846    5644 command_runner.go:130] ! I0210 12:13:00.446069       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:06.529846    5644 command_runner.go:130] ! I0210 12:13:00.446101       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:06.529846    5644 command_runner.go:130] ! I0210 12:13:10.454457       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.529846    5644 command_runner.go:130] ! I0210 12:13:10.454577       1 main.go:301] handling current node
	I0210 12:23:06.529925    5644 command_runner.go:130] ! I0210 12:13:10.454598       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.529925    5644 command_runner.go:130] ! I0210 12:13:10.454605       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.529925    5644 command_runner.go:130] ! I0210 12:13:10.455246       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:06.529925    5644 command_runner.go:130] ! I0210 12:13:10.455360       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:06.529925    5644 command_runner.go:130] ! I0210 12:13:20.446944       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.530010    5644 command_runner.go:130] ! I0210 12:13:20.447287       1 main.go:301] handling current node
	I0210 12:23:06.530010    5644 command_runner.go:130] ! I0210 12:13:20.447395       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.530010    5644 command_runner.go:130] ! I0210 12:13:20.447410       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.530010    5644 command_runner.go:130] ! I0210 12:13:20.447940       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:06.530088    5644 command_runner.go:130] ! I0210 12:13:20.448031       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:06.530088    5644 command_runner.go:130] ! I0210 12:13:30.446279       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.530088    5644 command_runner.go:130] ! I0210 12:13:30.446594       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.530088    5644 command_runner.go:130] ! I0210 12:13:30.446926       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:06.530088    5644 command_runner.go:130] ! I0210 12:13:30.447035       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:06.530088    5644 command_runner.go:130] ! I0210 12:13:30.447189       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.530166    5644 command_runner.go:130] ! I0210 12:13:30.447310       1 main.go:301] handling current node
	I0210 12:23:06.530166    5644 command_runner.go:130] ! I0210 12:13:40.446967       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.530166    5644 command_runner.go:130] ! I0210 12:13:40.447352       1 main.go:301] handling current node
	I0210 12:23:06.530166    5644 command_runner.go:130] ! I0210 12:13:40.447404       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.530250    5644 command_runner.go:130] ! I0210 12:13:40.447743       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.530250    5644 command_runner.go:130] ! I0210 12:13:40.448142       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:06.530338    5644 command_runner.go:130] ! I0210 12:13:40.448255       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:06.530338    5644 command_runner.go:130] ! I0210 12:13:50.446777       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.530338    5644 command_runner.go:130] ! I0210 12:13:50.446915       1 main.go:301] handling current node
	I0210 12:23:06.530338    5644 command_runner.go:130] ! I0210 12:13:50.446936       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.530338    5644 command_runner.go:130] ! I0210 12:13:50.447424       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.530338    5644 command_runner.go:130] ! I0210 12:13:50.447787       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:06.530476    5644 command_runner.go:130] ! I0210 12:13:50.447846       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:06.530476    5644 command_runner.go:130] ! I0210 12:14:00.446345       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.530476    5644 command_runner.go:130] ! I0210 12:14:00.446447       1 main.go:301] handling current node
	I0210 12:23:06.530541    5644 command_runner.go:130] ! I0210 12:14:00.446468       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.530541    5644 command_runner.go:130] ! I0210 12:14:00.446475       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.530541    5644 command_runner.go:130] ! I0210 12:14:00.447158       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:06.530612    5644 command_runner.go:130] ! I0210 12:14:00.447251       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:06.530612    5644 command_runner.go:130] ! I0210 12:14:10.454046       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.530612    5644 command_runner.go:130] ! I0210 12:14:10.454150       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.530612    5644 command_runner.go:130] ! I0210 12:14:10.454908       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:06.530612    5644 command_runner.go:130] ! I0210 12:14:10.454981       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:06.530612    5644 command_runner.go:130] ! I0210 12:14:10.455630       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.530612    5644 command_runner.go:130] ! I0210 12:14:10.455665       1 main.go:301] handling current node
	I0210 12:23:06.530702    5644 command_runner.go:130] ! I0210 12:14:20.447582       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.530702    5644 command_runner.go:130] ! I0210 12:14:20.447632       1 main.go:301] handling current node
	I0210 12:23:06.530702    5644 command_runner.go:130] ! I0210 12:14:20.447652       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.530702    5644 command_runner.go:130] ! I0210 12:14:20.447660       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.530702    5644 command_runner.go:130] ! I0210 12:14:20.447892       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:06.530702    5644 command_runner.go:130] ! I0210 12:14:20.447961       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:06.530797    5644 command_runner.go:130] ! I0210 12:14:30.445562       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.530797    5644 command_runner.go:130] ! I0210 12:14:30.445636       1 main.go:301] handling current node
	I0210 12:23:06.530820    5644 command_runner.go:130] ! I0210 12:14:30.445655       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.530820    5644 command_runner.go:130] ! I0210 12:14:30.445665       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.530879    5644 command_runner.go:130] ! I0210 12:14:30.446340       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:06.530879    5644 command_runner.go:130] ! I0210 12:14:30.446436       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:06.530879    5644 command_runner.go:130] ! I0210 12:14:40.445894       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.530879    5644 command_runner.go:130] ! I0210 12:14:40.445963       1 main.go:301] handling current node
	I0210 12:23:06.530879    5644 command_runner.go:130] ! I0210 12:14:40.446050       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.530951    5644 command_runner.go:130] ! I0210 12:14:40.446062       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.530951    5644 command_runner.go:130] ! I0210 12:14:40.446274       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:06.530951    5644 command_runner.go:130] ! I0210 12:14:40.446298       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:06.530951    5644 command_runner.go:130] ! I0210 12:14:50.446519       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.531045    5644 command_runner.go:130] ! I0210 12:14:50.446627       1 main.go:301] handling current node
	I0210 12:23:06.531045    5644 command_runner.go:130] ! I0210 12:14:50.446648       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.531045    5644 command_runner.go:130] ! I0210 12:14:50.446655       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.531045    5644 command_runner.go:130] ! I0210 12:14:50.447165       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:06.531135    5644 command_runner.go:130] ! I0210 12:14:50.447285       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:06.531135    5644 command_runner.go:130] ! I0210 12:15:00.452587       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.531163    5644 command_runner.go:130] ! I0210 12:15:00.452709       1 main.go:301] handling current node
	I0210 12:23:06.531163    5644 command_runner.go:130] ! I0210 12:15:00.452728       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.531163    5644 command_runner.go:130] ! I0210 12:15:00.452735       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.531163    5644 command_runner.go:130] ! I0210 12:15:00.452961       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:06.531163    5644 command_runner.go:130] ! I0210 12:15:00.452989       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:06.531163    5644 command_runner.go:130] ! I0210 12:15:10.453753       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.531163    5644 command_runner.go:130] ! I0210 12:15:10.453980       1 main.go:301] handling current node
	I0210 12:23:06.531163    5644 command_runner.go:130] ! I0210 12:15:10.455477       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.531163    5644 command_runner.go:130] ! I0210 12:15:10.455590       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.531163    5644 command_runner.go:130] ! I0210 12:15:10.456459       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:06.531163    5644 command_runner.go:130] ! I0210 12:15:10.456484       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:06.531163    5644 command_runner.go:130] ! I0210 12:15:20.445894       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.531163    5644 command_runner.go:130] ! I0210 12:15:20.446019       1 main.go:301] handling current node
	I0210 12:23:06.531163    5644 command_runner.go:130] ! I0210 12:15:20.446055       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.531163    5644 command_runner.go:130] ! I0210 12:15:20.446076       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.531163    5644 command_runner.go:130] ! I0210 12:15:20.446274       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:06.531163    5644 command_runner.go:130] ! I0210 12:15:20.446363       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:06.531163    5644 command_runner.go:130] ! I0210 12:15:30.446394       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.531163    5644 command_runner.go:130] ! I0210 12:15:30.446444       1 main.go:301] handling current node
	I0210 12:23:06.531163    5644 command_runner.go:130] ! I0210 12:15:30.446463       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.531163    5644 command_runner.go:130] ! I0210 12:15:30.446470       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.531163    5644 command_runner.go:130] ! I0210 12:15:30.446861       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:06.531163    5644 command_runner.go:130] ! I0210 12:15:30.446930       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:06.531163    5644 command_runner.go:130] ! I0210 12:15:40.453869       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.531163    5644 command_runner.go:130] ! I0210 12:15:40.454189       1 main.go:301] handling current node
	I0210 12:23:06.531163    5644 command_runner.go:130] ! I0210 12:15:40.454382       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.531163    5644 command_runner.go:130] ! I0210 12:15:40.454457       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.531163    5644 command_runner.go:130] ! I0210 12:15:40.454869       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:06.531163    5644 command_runner.go:130] ! I0210 12:15:40.454895       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:06.531163    5644 command_runner.go:130] ! I0210 12:15:50.446531       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.531163    5644 command_runner.go:130] ! I0210 12:15:50.446662       1 main.go:301] handling current node
	I0210 12:23:06.531163    5644 command_runner.go:130] ! I0210 12:15:50.446685       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.531163    5644 command_runner.go:130] ! I0210 12:15:50.446693       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.531163    5644 command_runner.go:130] ! I0210 12:15:50.447023       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:06.531163    5644 command_runner.go:130] ! I0210 12:15:50.447095       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:06.531163    5644 command_runner.go:130] ! I0210 12:16:00.446838       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.531163    5644 command_runner.go:130] ! I0210 12:16:00.447006       1 main.go:301] handling current node
	I0210 12:23:06.531163    5644 command_runner.go:130] ! I0210 12:16:00.447108       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.531163    5644 command_runner.go:130] ! I0210 12:16:00.447566       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.531163    5644 command_runner.go:130] ! I0210 12:16:00.448114       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:06.531163    5644 command_runner.go:130] ! I0210 12:16:00.448216       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:06.531163    5644 command_runner.go:130] ! I0210 12:16:10.445857       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.531163    5644 command_runner.go:130] ! I0210 12:16:10.445967       1 main.go:301] handling current node
	I0210 12:23:06.531689    5644 command_runner.go:130] ! I0210 12:16:10.445988       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.531689    5644 command_runner.go:130] ! I0210 12:16:10.445996       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.531689    5644 command_runner.go:130] ! I0210 12:16:10.446184       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:06.531689    5644 command_runner.go:130] ! I0210 12:16:10.446207       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:06.531689    5644 command_runner.go:130] ! I0210 12:16:20.453730       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.531689    5644 command_runner.go:130] ! I0210 12:16:20.453928       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.531772    5644 command_runner.go:130] ! I0210 12:16:20.454430       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:06.531772    5644 command_runner.go:130] ! I0210 12:16:20.454520       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:06.531772    5644 command_runner.go:130] ! I0210 12:16:20.454929       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.531772    5644 command_runner.go:130] ! I0210 12:16:20.454975       1 main.go:301] handling current node
	I0210 12:23:06.531772    5644 command_runner.go:130] ! I0210 12:16:30.445927       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.531772    5644 command_runner.go:130] ! I0210 12:16:30.446036       1 main.go:301] handling current node
	I0210 12:23:06.531851    5644 command_runner.go:130] ! I0210 12:16:30.446057       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.531851    5644 command_runner.go:130] ! I0210 12:16:30.446065       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.531851    5644 command_runner.go:130] ! I0210 12:16:30.446315       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:06.531851    5644 command_runner.go:130] ! I0210 12:16:30.446373       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:06.531851    5644 command_runner.go:130] ! I0210 12:16:40.446863       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:06.531851    5644 command_runner.go:130] ! I0210 12:16:40.446966       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:06.531928    5644 command_runner.go:130] ! I0210 12:16:40.447288       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.531961    5644 command_runner.go:130] ! I0210 12:16:40.447365       1 main.go:301] handling current node
	I0210 12:23:06.531961    5644 command_runner.go:130] ! I0210 12:16:40.447383       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.531991    5644 command_runner.go:130] ! I0210 12:16:40.447389       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.531991    5644 command_runner.go:130] ! I0210 12:16:50.447339       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.531991    5644 command_runner.go:130] ! I0210 12:16:50.447453       1 main.go:301] handling current node
	I0210 12:23:06.531991    5644 command_runner.go:130] ! I0210 12:16:50.447476       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.531991    5644 command_runner.go:130] ! I0210 12:16:50.447484       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.531991    5644 command_runner.go:130] ! I0210 12:16:50.448045       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:06.531991    5644 command_runner.go:130] ! I0210 12:16:50.448138       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:06.531991    5644 command_runner.go:130] ! I0210 12:17:00.447665       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.531991    5644 command_runner.go:130] ! I0210 12:17:00.447898       1 main.go:301] handling current node
	I0210 12:23:06.531991    5644 command_runner.go:130] ! I0210 12:17:00.447937       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.531991    5644 command_runner.go:130] ! I0210 12:17:00.448013       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.531991    5644 command_runner.go:130] ! I0210 12:17:00.448741       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:06.531991    5644 command_runner.go:130] ! I0210 12:17:00.448921       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:06.531991    5644 command_runner.go:130] ! I0210 12:17:10.453664       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.531991    5644 command_runner.go:130] ! I0210 12:17:10.453771       1 main.go:301] handling current node
	I0210 12:23:06.531991    5644 command_runner.go:130] ! I0210 12:17:10.453792       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.531991    5644 command_runner.go:130] ! I0210 12:17:10.453831       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.531991    5644 command_runner.go:130] ! I0210 12:17:10.454596       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:06.531991    5644 command_runner.go:130] ! I0210 12:17:10.454619       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:06.531991    5644 command_runner.go:130] ! I0210 12:17:20.453960       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.531991    5644 command_runner.go:130] ! I0210 12:17:20.454001       1 main.go:301] handling current node
	I0210 12:23:06.531991    5644 command_runner.go:130] ! I0210 12:17:20.454018       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.531991    5644 command_runner.go:130] ! I0210 12:17:20.454024       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.531991    5644 command_runner.go:130] ! I0210 12:17:20.454198       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:06.531991    5644 command_runner.go:130] ! I0210 12:17:20.454208       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:06.531991    5644 command_runner.go:130] ! I0210 12:17:30.445717       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.531991    5644 command_runner.go:130] ! I0210 12:17:30.445917       1 main.go:301] handling current node
	I0210 12:23:06.531991    5644 command_runner.go:130] ! I0210 12:17:30.445940       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.531991    5644 command_runner.go:130] ! I0210 12:17:30.445949       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.531991    5644 command_runner.go:130] ! I0210 12:17:40.452548       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.531991    5644 command_runner.go:130] ! I0210 12:17:40.452740       1 main.go:301] handling current node
	I0210 12:23:06.531991    5644 command_runner.go:130] ! I0210 12:17:40.452774       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.531991    5644 command_runner.go:130] ! I0210 12:17:40.452843       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.531991    5644 command_runner.go:130] ! I0210 12:17:40.453042       1 main.go:297] Handling node with IPs: map[172.29.129.10:{}]
	I0210 12:23:06.531991    5644 command_runner.go:130] ! I0210 12:17:40.453135       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.4.0/24] 
	I0210 12:23:06.531991    5644 command_runner.go:130] ! I0210 12:17:40.453247       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.4.0/24 Src: <nil> Gw: 172.29.129.10 Flags: [] Table: 0 Realm: 0} 
	I0210 12:23:06.531991    5644 command_runner.go:130] ! I0210 12:17:50.446275       1 main.go:297] Handling node with IPs: map[172.29.129.10:{}]
	I0210 12:23:06.531991    5644 command_runner.go:130] ! I0210 12:17:50.446319       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.4.0/24] 
	I0210 12:23:06.531991    5644 command_runner.go:130] ! I0210 12:17:50.447189       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.531991    5644 command_runner.go:130] ! I0210 12:17:50.447219       1 main.go:301] handling current node
	I0210 12:23:06.531991    5644 command_runner.go:130] ! I0210 12:17:50.447234       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.531991    5644 command_runner.go:130] ! I0210 12:17:50.447365       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.532517    5644 command_runner.go:130] ! I0210 12:18:00.449743       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.532517    5644 command_runner.go:130] ! I0210 12:18:00.449961       1 main.go:301] handling current node
	I0210 12:23:06.532517    5644 command_runner.go:130] ! I0210 12:18:00.449983       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.532517    5644 command_runner.go:130] ! I0210 12:18:00.449993       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.532517    5644 command_runner.go:130] ! I0210 12:18:00.450437       1 main.go:297] Handling node with IPs: map[172.29.129.10:{}]
	I0210 12:23:06.532598    5644 command_runner.go:130] ! I0210 12:18:00.450512       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.4.0/24] 
	I0210 12:23:06.532633    5644 command_runner.go:130] ! I0210 12:18:10.454513       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.532633    5644 command_runner.go:130] ! I0210 12:18:10.455074       1 main.go:301] handling current node
	I0210 12:23:06.532665    5644 command_runner.go:130] ! I0210 12:18:10.455189       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.532665    5644 command_runner.go:130] ! I0210 12:18:10.455203       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.532665    5644 command_runner.go:130] ! I0210 12:18:10.455514       1 main.go:297] Handling node with IPs: map[172.29.129.10:{}]
	I0210 12:23:06.532665    5644 command_runner.go:130] ! I0210 12:18:10.455628       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.4.0/24] 
	I0210 12:23:06.532665    5644 command_runner.go:130] ! I0210 12:18:20.446904       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.532665    5644 command_runner.go:130] ! I0210 12:18:20.446944       1 main.go:301] handling current node
	I0210 12:23:06.532665    5644 command_runner.go:130] ! I0210 12:18:20.446964       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.532665    5644 command_runner.go:130] ! I0210 12:18:20.446971       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.532665    5644 command_runner.go:130] ! I0210 12:18:20.447447       1 main.go:297] Handling node with IPs: map[172.29.129.10:{}]
	I0210 12:23:06.532665    5644 command_runner.go:130] ! I0210 12:18:20.447539       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.4.0/24] 
	I0210 12:23:06.532665    5644 command_runner.go:130] ! I0210 12:18:30.445669       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.532665    5644 command_runner.go:130] ! I0210 12:18:30.445724       1 main.go:301] handling current node
	I0210 12:23:06.532665    5644 command_runner.go:130] ! I0210 12:18:30.445744       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.532665    5644 command_runner.go:130] ! I0210 12:18:30.445752       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.532665    5644 command_runner.go:130] ! I0210 12:18:30.446236       1 main.go:297] Handling node with IPs: map[172.29.129.10:{}]
	I0210 12:23:06.532665    5644 command_runner.go:130] ! I0210 12:18:30.446332       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.4.0/24] 
	I0210 12:23:06.532665    5644 command_runner.go:130] ! I0210 12:18:40.449074       1 main.go:297] Handling node with IPs: map[172.29.129.10:{}]
	I0210 12:23:06.532665    5644 command_runner.go:130] ! I0210 12:18:40.449128       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.4.0/24] 
	I0210 12:23:06.532665    5644 command_runner.go:130] ! I0210 12:18:40.449535       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.532665    5644 command_runner.go:130] ! I0210 12:18:40.449551       1 main.go:301] handling current node
	I0210 12:23:06.532665    5644 command_runner.go:130] ! I0210 12:18:40.449565       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.532665    5644 command_runner.go:130] ! I0210 12:18:40.449570       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.532665    5644 command_runner.go:130] ! I0210 12:18:50.446047       1 main.go:297] Handling node with IPs: map[172.29.129.10:{}]
	I0210 12:23:06.532665    5644 command_runner.go:130] ! I0210 12:18:50.446175       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.4.0/24] 
	I0210 12:23:06.532665    5644 command_runner.go:130] ! I0210 12:18:50.446614       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.532665    5644 command_runner.go:130] ! I0210 12:18:50.446823       1 main.go:301] handling current node
	I0210 12:23:06.532665    5644 command_runner.go:130] ! I0210 12:18:50.446915       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.532665    5644 command_runner.go:130] ! I0210 12:18:50.447008       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.532665    5644 command_runner.go:130] ! I0210 12:19:00.448621       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.532665    5644 command_runner.go:130] ! I0210 12:19:00.448734       1 main.go:301] handling current node
	I0210 12:23:06.532665    5644 command_runner.go:130] ! I0210 12:19:00.448755       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.532665    5644 command_runner.go:130] ! I0210 12:19:00.448763       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.532665    5644 command_runner.go:130] ! I0210 12:19:00.449245       1 main.go:297] Handling node with IPs: map[172.29.129.10:{}]
	I0210 12:23:06.532665    5644 command_runner.go:130] ! I0210 12:19:00.449348       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.4.0/24] 
	I0210 12:23:06.533191    5644 command_runner.go:130] ! I0210 12:19:10.454264       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.533191    5644 command_runner.go:130] ! I0210 12:19:10.454382       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.533191    5644 command_runner.go:130] ! I0210 12:19:10.454633       1 main.go:297] Handling node with IPs: map[172.29.129.10:{}]
	I0210 12:23:06.533191    5644 command_runner.go:130] ! I0210 12:19:10.454659       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.4.0/24] 
	I0210 12:23:06.533191    5644 command_runner.go:130] ! I0210 12:19:10.454747       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.533191    5644 command_runner.go:130] ! I0210 12:19:10.454768       1 main.go:301] handling current node
	I0210 12:23:06.533268    5644 command_runner.go:130] ! I0210 12:19:20.454254       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.533268    5644 command_runner.go:130] ! I0210 12:19:20.454380       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.533268    5644 command_runner.go:130] ! I0210 12:19:20.454728       1 main.go:297] Handling node with IPs: map[172.29.129.10:{}]
	I0210 12:23:06.533301    5644 command_runner.go:130] ! I0210 12:19:20.454888       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.4.0/24] 
	I0210 12:23:06.533301    5644 command_runner.go:130] ! I0210 12:19:20.455123       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.533301    5644 command_runner.go:130] ! I0210 12:19:20.455136       1 main.go:301] handling current node
	I0210 12:23:06.533301    5644 command_runner.go:130] ! I0210 12:19:30.445655       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:06.533301    5644 command_runner.go:130] ! I0210 12:19:30.445708       1 main.go:301] handling current node
	I0210 12:23:06.533301    5644 command_runner.go:130] ! I0210 12:19:30.445727       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.533301    5644 command_runner.go:130] ! I0210 12:19:30.445734       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.533301    5644 command_runner.go:130] ! I0210 12:19:30.446565       1 main.go:297] Handling node with IPs: map[172.29.129.10:{}]
	I0210 12:23:06.533301    5644 command_runner.go:130] ! I0210 12:19:30.446658       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.4.0/24] 
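
The kindnet block above is its steady-state reconciliation loop: roughly every 10 seconds it walks the node list, logs "handling current node" for the node it runs on, and ensures a route to every remote node's pod CIDR via that node's IP. The 12:17:40 entries also capture multinode-032400-m03 rejoining with a new IP and CIDR (10.244.4.0/24 via 172.29.129.10), at which point a new route is installed. The "Adding route {Ifindex: 0 Dst: ... Gw: ...}" line is a printed netlink route struct, so a minimal Go sketch of the same pattern can use github.com/vishvananda/netlink; the node type and values below are illustrative (taken from the log), not kindnet's actual code, and it needs Linux plus CAP_NET_ADMIN to run:

    package main

    import (
    	"fmt"
    	"net"

    	"github.com/vishvananda/netlink"
    )

    // node is a simplified stand-in for what kindnet extracts from the
    // Kubernetes Node objects: a reachable node IP and its pod CIDR.
    type node struct {
    	name    string
    	ip      net.IP
    	podCIDR string
    	current bool // true for the node this agent runs on
    }

    func reconcileRoutes(nodes []node) error {
    	for _, n := range nodes {
    		if n.current {
    			// Matches "handling current node": local pod traffic
    			// needs no extra route.
    			continue
    		}
    		_, dst, err := net.ParseCIDR(n.podCIDR)
    		if err != nil {
    			return fmt.Errorf("bad CIDR for %s: %w", n.name, err)
    		}
    		// Matches the logged struct: Dst is the remote pod CIDR,
    		// Gw is the remote node IP. RouteReplace is idempotent,
    		// so re-running the loop every 10s is safe.
    		route := &netlink.Route{Dst: dst, Gw: n.ip}
    		if err := netlink.RouteReplace(route); err != nil {
    			return fmt.Errorf("route to %s: %w", n.name, err)
    		}
    	}
    	return nil
    }

    func main() {
    	nodes := []node{
    		{name: "multinode-032400", ip: net.ParseIP("172.29.136.201"), podCIDR: "10.244.0.0/24", current: true},
    		{name: "multinode-032400-m02", ip: net.ParseIP("172.29.143.51"), podCIDR: "10.244.1.0/24"},
    		{name: "multinode-032400-m03", ip: net.ParseIP("172.29.129.10"), podCIDR: "10.244.4.0/24"},
    	}
    	if err := reconcileRoutes(nodes); err != nil {
    		fmt.Println(err)
    	}
    }
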
	I0210 12:23:06.550014    5644 logs.go:123] Gathering logs for Docker ...
	I0210 12:23:06.550014    5644 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0210 12:23:06.576472    5644 command_runner.go:130] > Feb 10 12:20:33 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0210 12:23:06.576472    5644 command_runner.go:130] > Feb 10 12:20:33 minikube cri-dockerd[223]: time="2025-02-10T12:20:33Z" level=info msg="Starting cri-dockerd 0.3.15 (c1c566e)"
	I0210 12:23:06.576472    5644 command_runner.go:130] > Feb 10 12:20:33 minikube cri-dockerd[223]: time="2025-02-10T12:20:33Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0210 12:23:06.576472    5644 command_runner.go:130] > Feb 10 12:20:33 minikube cri-dockerd[223]: time="2025-02-10T12:20:33Z" level=info msg="Start docker client with request timeout 0s"
	I0210 12:23:06.576472    5644 command_runner.go:130] > Feb 10 12:20:33 minikube cri-dockerd[223]: time="2025-02-10T12:20:33Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0210 12:23:06.576472    5644 command_runner.go:130] > Feb 10 12:20:33 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0210 12:23:06.576472    5644 command_runner.go:130] > Feb 10 12:20:33 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0210 12:23:06.576472    5644 command_runner.go:130] > Feb 10 12:20:33 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0210 12:23:06.576699    5644 command_runner.go:130] > Feb 10 12:20:36 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 1.
	I0210 12:23:06.576699    5644 command_runner.go:130] > Feb 10 12:20:36 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0210 12:23:06.576733    5644 command_runner.go:130] > Feb 10 12:20:36 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0210 12:23:06.576733    5644 command_runner.go:130] > Feb 10 12:20:36 minikube cri-dockerd[417]: time="2025-02-10T12:20:36Z" level=info msg="Starting cri-dockerd 0.3.15 (c1c566e)"
	I0210 12:23:06.576778    5644 command_runner.go:130] > Feb 10 12:20:36 minikube cri-dockerd[417]: time="2025-02-10T12:20:36Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0210 12:23:06.576811    5644 command_runner.go:130] > Feb 10 12:20:36 minikube cri-dockerd[417]: time="2025-02-10T12:20:36Z" level=info msg="Start docker client with request timeout 0s"
	I0210 12:23:06.576811    5644 command_runner.go:130] > Feb 10 12:20:36 minikube cri-dockerd[417]: time="2025-02-10T12:20:36Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0210 12:23:06.576811    5644 command_runner.go:130] > Feb 10 12:20:36 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0210 12:23:06.576811    5644 command_runner.go:130] > Feb 10 12:20:36 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0210 12:23:06.576811    5644 command_runner.go:130] > Feb 10 12:20:36 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0210 12:23:06.576811    5644 command_runner.go:130] > Feb 10 12:20:38 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 2.
	I0210 12:23:06.576811    5644 command_runner.go:130] > Feb 10 12:20:38 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0210 12:23:06.576811    5644 command_runner.go:130] > Feb 10 12:20:38 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0210 12:23:06.576811    5644 command_runner.go:130] > Feb 10 12:20:38 minikube cri-dockerd[425]: time="2025-02-10T12:20:38Z" level=info msg="Starting cri-dockerd 0.3.15 (c1c566e)"
	I0210 12:23:06.576811    5644 command_runner.go:130] > Feb 10 12:20:38 minikube cri-dockerd[425]: time="2025-02-10T12:20:38Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0210 12:23:06.576811    5644 command_runner.go:130] > Feb 10 12:20:38 minikube cri-dockerd[425]: time="2025-02-10T12:20:38Z" level=info msg="Start docker client with request timeout 0s"
	I0210 12:23:06.576811    5644 command_runner.go:130] > Feb 10 12:20:38 minikube cri-dockerd[425]: time="2025-02-10T12:20:38Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0210 12:23:06.576811    5644 command_runner.go:130] > Feb 10 12:20:38 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0210 12:23:06.576811    5644 command_runner.go:130] > Feb 10 12:20:38 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0210 12:23:06.576811    5644 command_runner.go:130] > Feb 10 12:20:38 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0210 12:23:06.576811    5644 command_runner.go:130] > Feb 10 12:20:40 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 3.
	I0210 12:23:06.576811    5644 command_runner.go:130] > Feb 10 12:20:40 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0210 12:23:06.576811    5644 command_runner.go:130] > Feb 10 12:20:40 minikube systemd[1]: cri-docker.service: Start request repeated too quickly.
	I0210 12:23:06.576811    5644 command_runner.go:130] > Feb 10 12:20:40 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0210 12:23:06.576811    5644 command_runner.go:130] > Feb 10 12:20:40 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
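
The journal excerpt above is a startup-ordering race, not a cri-dockerd defect: cri-dockerd comes up before dockerd, its "get docker version" probe against unix:///var/run/docker.sock fails, and after three rapid exits systemd's start rate limit kicks in ("Start request repeated too quickly") and parks the unit until it is started again once dockerd is running (as it is by 12:21). A hedged sketch of that readiness probe using the official Go client, github.com/docker/docker/client, whose /version call is the one failing in the fatal log line; the retry loop and timings here are illustrative, not cri-dockerd's actual code:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	"github.com/docker/docker/client"
    )

    // waitForDockerd polls dockerd's version endpoint over the unix
    // socket until it answers, i.e. the opposite of the fail-fast
    // behavior in the journal above.
    func waitForDockerd(ctx context.Context) error {
    	cli, err := client.NewClientWithOpts(
    		client.WithHost("unix:///var/run/docker.sock"),
    		client.WithAPIVersionNegotiation(),
    	)
    	if err != nil {
    		return err
    	}
    	defer cli.Close()

    	for {
    		v, err := cli.ServerVersion(ctx)
    		if err == nil {
    			fmt.Printf("dockerd is up: version %s\n", v.Version)
    			return nil
    		}
    		select {
    		case <-ctx.Done():
    			return fmt.Errorf("dockerd never became ready: %w", err)
    		case <-time.After(2 * time.Second):
    		}
    	}
    }

    func main() {
    	ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
    	defer cancel()
    	if err := waitForDockerd(ctx); err != nil {
    		fmt.Println(err)
    	}
    }
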
	I0210 12:23:06.576811    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 systemd[1]: Starting Docker Application Container Engine...
	I0210 12:23:06.576811    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[652]: time="2025-02-10T12:21:19.226981799Z" level=info msg="Starting up"
	I0210 12:23:06.576811    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[652]: time="2025-02-10T12:21:19.228905904Z" level=info msg="containerd not running, starting managed containerd"
	I0210 12:23:06.576811    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[652]: time="2025-02-10T12:21:19.229983406Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=658
	I0210 12:23:06.576811    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.261668386Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	I0210 12:23:06.576811    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.289760856Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0210 12:23:06.576811    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.289873057Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0210 12:23:06.576811    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.289938357Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0210 12:23:06.577344    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.289955257Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0210 12:23:06.577344    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.290688059Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0210 12:23:06.577344    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.290855359Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0210 12:23:06.577344    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.291046360Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0210 12:23:06.577452    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.291150260Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0210 12:23:06.577452    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.291171360Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0210 12:23:06.577452    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.291182560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0210 12:23:06.577452    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.291676861Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0210 12:23:06.577554    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.292369263Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0210 12:23:06.577554    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.300517383Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0210 12:23:06.577554    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.300550484Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0210 12:23:06.577645    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.300790784Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0210 12:23:06.577645    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.300846284Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0210 12:23:06.577645    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.301486786Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0210 12:23:06.577645    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.301530786Z" level=info msg="metadata content store policy set" policy=shared
	I0210 12:23:06.577645    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.306800699Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0210 12:23:06.577782    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.306938800Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0210 12:23:06.577782    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.306962400Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0210 12:23:06.577782    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.306982400Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0210 12:23:06.577782    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.306998000Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0210 12:23:06.577872    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.307070900Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0210 12:23:06.577872    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.307354201Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0210 12:23:06.577872    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.307779102Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0210 12:23:06.577964    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.307803302Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0210 12:23:06.577964    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.307819902Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0210 12:23:06.577964    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.307835502Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0210 12:23:06.577964    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.307854902Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0210 12:23:06.577964    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.307868302Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0210 12:23:06.578053    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.307886902Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0210 12:23:06.578053    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.307903802Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0210 12:23:06.578053    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.307918302Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0210 12:23:06.578053    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.307933302Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0210 12:23:06.578053    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.307946902Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0210 12:23:06.578164    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.307973202Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0210 12:23:06.578164    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.307988502Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0210 12:23:06.578164    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308002102Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0210 12:23:06.578164    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308018302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0210 12:23:06.578164    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308031702Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0210 12:23:06.578294    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308046102Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0210 12:23:06.578294    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308058902Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0210 12:23:06.578294    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308073102Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0210 12:23:06.578380    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308088402Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0210 12:23:06.578380    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308111803Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0210 12:23:06.578380    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308139203Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0210 12:23:06.578380    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308154703Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0210 12:23:06.578465    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308168203Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0210 12:23:06.578465    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308185103Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0210 12:23:06.578465    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308206703Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0210 12:23:06.578543    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308220903Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0210 12:23:06.578543    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308233503Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0210 12:23:06.578543    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308287903Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0210 12:23:06.578620    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308326803Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0210 12:23:06.578620    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308340203Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0210 12:23:06.578697    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308354603Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0210 12:23:06.581922    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308366403Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0210 12:23:06.582455    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308381203Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0210 12:23:06.582506    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308392603Z" level=info msg="NRI interface is disabled by configuration."
	I0210 12:23:06.582548    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308672504Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0210 12:23:06.582583    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308811104Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0210 12:23:06.582583    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308872804Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0210 12:23:06.582583    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308911105Z" level=info msg="containerd successfully booted in 0.050730s"
	I0210 12:23:06.582583    5644 command_runner.go:130] > Feb 10 12:21:20 multinode-032400 dockerd[652]: time="2025-02-10T12:21:20.282476810Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0210 12:23:06.582583    5644 command_runner.go:130] > Feb 10 12:21:20 multinode-032400 dockerd[652]: time="2025-02-10T12:21:20.530993194Z" level=info msg="Loading containers: start."
	I0210 12:23:06.582583    5644 command_runner.go:130] > Feb 10 12:21:20 multinode-032400 dockerd[652]: time="2025-02-10T12:21:20.796529619Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0210 12:23:06.582583    5644 command_runner.go:130] > Feb 10 12:21:20 multinode-032400 dockerd[652]: time="2025-02-10T12:21:20.946848197Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0210 12:23:06.582583    5644 command_runner.go:130] > Feb 10 12:21:21 multinode-032400 dockerd[652]: time="2025-02-10T12:21:21.063713732Z" level=info msg="Loading containers: done."
	I0210 12:23:06.582583    5644 command_runner.go:130] > Feb 10 12:21:21 multinode-032400 dockerd[652]: time="2025-02-10T12:21:21.090121636Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	I0210 12:23:06.582583    5644 command_runner.go:130] > Feb 10 12:21:21 multinode-032400 dockerd[652]: time="2025-02-10T12:21:21.090236272Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	I0210 12:23:06.582583    5644 command_runner.go:130] > Feb 10 12:21:21 multinode-032400 dockerd[652]: time="2025-02-10T12:21:21.090266381Z" level=info msg="Docker daemon" commit=92a8393 containerd-snapshotter=false storage-driver=overlay2 version=27.4.0
	I0210 12:23:06.582583    5644 command_runner.go:130] > Feb 10 12:21:21 multinode-032400 dockerd[652]: time="2025-02-10T12:21:21.090811448Z" level=info msg="Daemon has completed initialization"
	I0210 12:23:06.582583    5644 command_runner.go:130] > Feb 10 12:21:21 multinode-032400 dockerd[652]: time="2025-02-10T12:21:21.131876651Z" level=info msg="API listen on /var/run/docker.sock"
	I0210 12:23:06.582583    5644 command_runner.go:130] > Feb 10 12:21:21 multinode-032400 dockerd[652]: time="2025-02-10T12:21:21.132103020Z" level=info msg="API listen on [::]:2376"
	I0210 12:23:06.582583    5644 command_runner.go:130] > Feb 10 12:21:21 multinode-032400 systemd[1]: Started Docker Application Container Engine.
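
From 12:21:19 onward the journal shows dockerd's normal boot path in this VM: finding no containerd, it launches a managed one on /var/run/docker/containerd/containerd.sock, that containerd loads its plugins (skipping btrfs, zfs, aufs, and devmapper, which this ext4 guest cannot use) and boots in ~50ms, after which dockerd finishes initialization and listens on /var/run/docker.sock. A small Go sketch, assuming the containerd client package github.com/containerd/containerd and that managed socket path from the log, of how one could confirm the embedded containerd is serving; this is an illustration, not anything minikube itself runs:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	"github.com/containerd/containerd"
    )

    func main() {
    	// Socket path taken from the "started new containerd process"
    	// log line above; dockerd owns this containerd instance.
    	client, err := containerd.New("/var/run/docker/containerd/containerd.sock")
    	if err != nil {
    		fmt.Println("connect:", err)
    		return
    	}
    	defer client.Close()

    	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    	defer cancel()

    	// Version round-trips a gRPC call, proving the managed
    	// containerd that dockerd launched is actually serving.
    	v, err := client.Version(ctx)
    	if err != nil {
    		fmt.Println("version:", err)
    		return
    	}
    	fmt.Printf("containerd %s (revision %s)\n", v.Version, v.Revision)
    }
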
	I0210 12:23:06.582583    5644 command_runner.go:130] > Feb 10 12:21:45 multinode-032400 dockerd[652]: time="2025-02-10T12:21:45.024556788Z" level=info msg="Processing signal 'terminated'"
	I0210 12:23:06.582583    5644 command_runner.go:130] > Feb 10 12:21:45 multinode-032400 dockerd[652]: time="2025-02-10T12:21:45.027219616Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0210 12:23:06.582583    5644 command_runner.go:130] > Feb 10 12:21:45 multinode-032400 systemd[1]: Stopping Docker Application Container Engine...
	I0210 12:23:06.582583    5644 command_runner.go:130] > Feb 10 12:21:45 multinode-032400 dockerd[652]: time="2025-02-10T12:21:45.028493777Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0210 12:23:06.582583    5644 command_runner.go:130] > Feb 10 12:21:45 multinode-032400 dockerd[652]: time="2025-02-10T12:21:45.028923098Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0210 12:23:06.582583    5644 command_runner.go:130] > Feb 10 12:21:45 multinode-032400 dockerd[652]: time="2025-02-10T12:21:45.029499825Z" level=info msg="Daemon shutdown complete"
	I0210 12:23:06.582583    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 systemd[1]: docker.service: Deactivated successfully.
	I0210 12:23:06.582583    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 systemd[1]: Stopped Docker Application Container Engine.
	I0210 12:23:06.582583    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 systemd[1]: Starting Docker Application Container Engine...
	I0210 12:23:06.582583    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1101]: time="2025-02-10T12:21:46.084081094Z" level=info msg="Starting up"
	I0210 12:23:06.582583    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1101]: time="2025-02-10T12:21:46.084976538Z" level=info msg="containerd not running, starting managed containerd"
	I0210 12:23:06.582583    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1101]: time="2025-02-10T12:21:46.085890382Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1108
	I0210 12:23:06.582583    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.115367801Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	I0210 12:23:06.583125    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.141577962Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0210 12:23:06.583175    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.141694568Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0210 12:23:06.583216    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.141841575Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0210 12:23:06.583216    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.141861576Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0210 12:23:06.583251    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.141895578Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0210 12:23:06.583251    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.141908978Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0210 12:23:06.583251    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.142072686Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0210 12:23:06.583251    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.142222293Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0210 12:23:06.583251    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.142244195Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0210 12:23:06.583251    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.142261595Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0210 12:23:06.583251    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.142290097Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0210 12:23:06.583251    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.142407302Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0210 12:23:06.583251    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.145701161Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0210 12:23:06.583251    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.145822967Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0210 12:23:06.583251    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.145984775Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0210 12:23:06.583251    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.146081579Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0210 12:23:06.583251    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.146115481Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0210 12:23:06.583251    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.146134282Z" level=info msg="metadata content store policy set" policy=shared
	I0210 12:23:06.583251    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.146552002Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0210 12:23:06.583251    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.146601004Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0210 12:23:06.583251    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.146617705Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0210 12:23:06.583251    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.146633006Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0210 12:23:06.583251    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.146647807Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0210 12:23:06.583251    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.146697109Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0210 12:23:06.583251    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147110429Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0210 12:23:06.583251    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147324539Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0210 12:23:06.583802    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147423444Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0210 12:23:06.583802    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147441845Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0210 12:23:06.583845    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147456345Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0210 12:23:06.583912    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147470646Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0210 12:23:06.583912    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147499048Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0210 12:23:06.583912    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147516448Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0210 12:23:06.583912    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147532049Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0210 12:23:06.583912    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147546750Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0210 12:23:06.583912    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147559350Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0210 12:23:06.583912    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147573151Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0210 12:23:06.583912    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147593252Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0210 12:23:06.583912    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147608153Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0210 12:23:06.583912    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147634954Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0210 12:23:06.583912    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147654755Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0210 12:23:06.583912    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147668856Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0210 12:23:06.583912    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147683556Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0210 12:23:06.583912    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147697257Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0210 12:23:06.583912    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147710658Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0210 12:23:06.583912    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147724858Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0210 12:23:06.583912    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147802262Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0210 12:23:06.583912    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147821763Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0210 12:23:06.583912    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147834964Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0210 12:23:06.583912    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147859465Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0210 12:23:06.583912    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147878466Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0210 12:23:06.583912    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147900267Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0210 12:23:06.583912    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147914067Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0210 12:23:06.583912    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147927668Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0210 12:23:06.583912    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.148050374Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0210 12:23:06.583912    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.148087376Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0210 12:23:06.583912    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.148100476Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0210 12:23:06.583912    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.148113477Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0210 12:23:06.584447    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.148124578Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0210 12:23:06.584447    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.148138778Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0210 12:23:06.584447    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.148151679Z" level=info msg="NRI interface is disabled by configuration."
	I0210 12:23:06.584499    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.148991719Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0210 12:23:06.584531    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.149071923Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0210 12:23:06.584531    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.149146027Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0210 12:23:06.584531    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.149657651Z" level=info msg="containerd successfully booted in 0.035320s"
	I0210 12:23:06.584531    5644 command_runner.go:130] > Feb 10 12:21:47 multinode-032400 dockerd[1101]: time="2025-02-10T12:21:47.124814897Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0210 12:23:06.584604    5644 command_runner.go:130] > Feb 10 12:21:47 multinode-032400 dockerd[1101]: time="2025-02-10T12:21:47.155572178Z" level=info msg="Loading containers: start."
	I0210 12:23:06.584604    5644 command_runner.go:130] > Feb 10 12:21:47 multinode-032400 dockerd[1101]: time="2025-02-10T12:21:47.380096187Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0210 12:23:06.584604    5644 command_runner.go:130] > Feb 10 12:21:47 multinode-032400 dockerd[1101]: time="2025-02-10T12:21:47.494116276Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0210 12:23:06.584604    5644 command_runner.go:130] > Feb 10 12:21:47 multinode-032400 dockerd[1101]: time="2025-02-10T12:21:47.609502830Z" level=info msg="Loading containers: done."
	I0210 12:23:06.584604    5644 command_runner.go:130] > Feb 10 12:21:47 multinode-032400 dockerd[1101]: time="2025-02-10T12:21:47.634336526Z" level=info msg="Docker daemon" commit=92a8393 containerd-snapshotter=false storage-driver=overlay2 version=27.4.0
	I0210 12:23:06.584604    5644 command_runner.go:130] > Feb 10 12:21:47 multinode-032400 dockerd[1101]: time="2025-02-10T12:21:47.634493434Z" level=info msg="Daemon has completed initialization"
	I0210 12:23:06.584604    5644 command_runner.go:130] > Feb 10 12:21:47 multinode-032400 dockerd[1101]: time="2025-02-10T12:21:47.668508371Z" level=info msg="API listen on /var/run/docker.sock"
	I0210 12:23:06.584604    5644 command_runner.go:130] > Feb 10 12:21:47 multinode-032400 dockerd[1101]: time="2025-02-10T12:21:47.668715581Z" level=info msg="API listen on [::]:2376"
	I0210 12:23:06.584604    5644 command_runner.go:130] > Feb 10 12:21:47 multinode-032400 systemd[1]: Started Docker Application Container Engine.
	I0210 12:23:06.584604    5644 command_runner.go:130] > Feb 10 12:21:48 multinode-032400 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0210 12:23:06.584604    5644 command_runner.go:130] > Feb 10 12:21:48 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:21:48Z" level=info msg="Starting cri-dockerd 0.3.15 (c1c566e)"
	I0210 12:23:06.584604    5644 command_runner.go:130] > Feb 10 12:21:48 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:21:48Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0210 12:23:06.584604    5644 command_runner.go:130] > Feb 10 12:21:48 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:21:48Z" level=info msg="Start docker client with request timeout 0s"
	I0210 12:23:06.584604    5644 command_runner.go:130] > Feb 10 12:21:48 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:21:48Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I0210 12:23:06.584604    5644 command_runner.go:130] > Feb 10 12:21:48 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:21:48Z" level=info msg="Loaded network plugin cni"
	I0210 12:23:06.584604    5644 command_runner.go:130] > Feb 10 12:21:48 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:21:48Z" level=info msg="Docker cri networking managed by network plugin cni"
	I0210 12:23:06.584604    5644 command_runner.go:130] > Feb 10 12:21:48 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:21:48Z" level=info msg="Setting cgroupDriver cgroupfs"
	I0210 12:23:06.584604    5644 command_runner.go:130] > Feb 10 12:21:48 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:21:48Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I0210 12:23:06.584604    5644 command_runner.go:130] > Feb 10 12:21:48 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:21:48Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I0210 12:23:06.584604    5644 command_runner.go:130] > Feb 10 12:21:48 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:21:48Z" level=info msg="Start cri-dockerd grpc backend"
	I0210 12:23:06.584604    5644 command_runner.go:130] > Feb 10 12:21:48 multinode-032400 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I0210 12:23:06.584604    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:21:53Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-58667487b6-8shfg_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"ed034f1578c961d966543fd6901ac487893b2d9c55235293f852b6eba2ffef59\""
	I0210 12:23:06.584604    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:21:53Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-668d6bf9bc-w8rr9_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"794995bca6b5b46f3dd1b76ae2e6fa45046bd172772508f61b870824d72e297b\""
	I0210 12:23:06.584604    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:53.688319673Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0210 12:23:06.584604    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:53.688604987Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0210 12:23:06.584604    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:53.688649189Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:06.584604    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:53.689336722Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:06.585249    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:53.785048930Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0210 12:23:06.585292    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:53.785211338Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0210 12:23:06.585292    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:53.785249040Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:06.585333    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:53.787201934Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:06.585362    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:21:53Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8059b20f65945591b4ecc2d3aa8b6e119909c5a5c01922ce471ced5e88f22c37/resolv.conf as [nameserver 172.29.128.1]"
	I0210 12:23:06.585412    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:53.859964137Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0210 12:23:06.585412    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:53.860819978Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0210 12:23:06.585412    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:53.861045089Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:06.585412    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:53.861827326Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:06.585483    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:53.866236838Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0210 12:23:06.585483    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:53.866716362Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0210 12:23:06.585548    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:53.867048178Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:06.585548    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:53.870617949Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:06.585614    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:21:53Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/016ad4d720680495a67c18e1390ee8683611cb3b95ee6ded4cb744a3ca3655d5/resolv.conf as [nameserver 172.29.128.1]"
	I0210 12:23:06.585614    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:21:54Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ae5696c38864ac99a03d829d566b6a832f69523032ff0af02300ad95789380ce/resolv.conf as [nameserver 172.29.128.1]"
	I0210 12:23:06.585675    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:21:54Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/9c3e574a334980f77de3f0fd8bd1af8a3597c32a3c5f9d94fec925b6f3c76d4e/resolv.conf as [nameserver 172.29.128.1]"
	I0210 12:23:06.585675    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:54.054858919Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0210 12:23:06.585675    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:54.055041728Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0210 12:23:06.585675    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:54.055266639Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:06.585675    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:54.055571653Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:06.585780    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:54.351555902Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0210 12:23:06.585780    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:54.351618605Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0210 12:23:06.585780    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:54.351631706Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:06.585847    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:54.351796314Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:06.585912    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:54.356626447Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0210 12:23:06.585912    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:54.356728951Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0210 12:23:06.585912    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:54.356756153Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:06.585912    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:54.357270278Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:06.585995    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:54.400696468Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0210 12:23:06.585995    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:54.400993282Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0210 12:23:06.585995    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:54.401148890Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:06.586063    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:54.401585911Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:06.586063    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:21:58Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	I0210 12:23:06.586063    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:59.586724531Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0210 12:23:06.586063    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:59.586851637Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0210 12:23:06.586063    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:59.586897839Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:06.586063    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:59.587096549Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:06.586063    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:59.622779367Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0210 12:23:06.586063    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:59.622857870Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0210 12:23:06.586063    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:59.622884072Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:06.586063    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:59.623098482Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:06.586063    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:59.638867841Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0210 12:23:06.586063    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:59.639329463Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0210 12:23:06.586063    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:59.639489271Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:06.586063    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:59.639867989Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:06.586063    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:21:59Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b9afdceca416df5c16e84b3e0c78f25ca1fa77413c28fe48e1fe1aceabb91c44/resolv.conf as [nameserver 172.29.128.1]"
	I0210 12:23:06.586063    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:21:59Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/bd998e6ebeb39ea743489a0e4d48c282d6e8da289cd8341c4c5099d9836e6f73/resolv.conf as [nameserver 172.29.128.1]"
	I0210 12:23:06.586063    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:59.937150501Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0210 12:23:06.586063    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:59.937256006Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0210 12:23:06.586063    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:59.937275107Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:06.586063    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:59.937381912Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:06.586063    5644 command_runner.go:130] > Feb 10 12:22:00 multinode-032400 dockerd[1108]: time="2025-02-10T12:22:00.025525655Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0210 12:23:06.586063    5644 command_runner.go:130] > Feb 10 12:22:00 multinode-032400 dockerd[1108]: time="2025-02-10T12:22:00.025702264Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0210 12:23:06.586063    5644 command_runner.go:130] > Feb 10 12:22:00 multinode-032400 dockerd[1108]: time="2025-02-10T12:22:00.025767267Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:06.586063    5644 command_runner.go:130] > Feb 10 12:22:00 multinode-032400 dockerd[1108]: time="2025-02-10T12:22:00.026050381Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:06.586063    5644 command_runner.go:130] > Feb 10 12:22:00 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:22:00Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e5b54589cf0f1effd7254987c6ce12359f402596b5494fa5c8f0bf296c219b89/resolv.conf as [nameserver 172.29.128.1]"
	I0210 12:23:06.586063    5644 command_runner.go:130] > Feb 10 12:22:00 multinode-032400 dockerd[1108]: time="2025-02-10T12:22:00.385763898Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0210 12:23:06.586063    5644 command_runner.go:130] > Feb 10 12:22:00 multinode-032400 dockerd[1108]: time="2025-02-10T12:22:00.385836401Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0210 12:23:06.586063    5644 command_runner.go:130] > Feb 10 12:22:00 multinode-032400 dockerd[1108]: time="2025-02-10T12:22:00.385859502Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:06.586063    5644 command_runner.go:130] > Feb 10 12:22:00 multinode-032400 dockerd[1108]: time="2025-02-10T12:22:00.385961307Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:06.586063    5644 command_runner.go:130] > Feb 10 12:22:30 multinode-032400 dockerd[1101]: time="2025-02-10T12:22:30.686630853Z" level=info msg="ignoring event" container=e57ea4c7f300b864e801bda292b196e3a270d6bd3eb9cfac6bf5f66a858f9f7c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0210 12:23:06.586063    5644 command_runner.go:130] > Feb 10 12:22:30 multinode-032400 dockerd[1108]: time="2025-02-10T12:22:30.687686798Z" level=info msg="shim disconnected" id=e57ea4c7f300b864e801bda292b196e3a270d6bd3eb9cfac6bf5f66a858f9f7c namespace=moby
	I0210 12:23:06.586063    5644 command_runner.go:130] > Feb 10 12:22:30 multinode-032400 dockerd[1108]: time="2025-02-10T12:22:30.687806803Z" level=warning msg="cleaning up after shim disconnected" id=e57ea4c7f300b864e801bda292b196e3a270d6bd3eb9cfac6bf5f66a858f9f7c namespace=moby
	I0210 12:23:06.586063    5644 command_runner.go:130] > Feb 10 12:22:30 multinode-032400 dockerd[1108]: time="2025-02-10T12:22:30.687819404Z" level=info msg="cleaning up dead shim" namespace=moby
	I0210 12:23:06.586063    5644 command_runner.go:130] > Feb 10 12:22:44 multinode-032400 dockerd[1108]: time="2025-02-10T12:22:44.147917374Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0210 12:23:06.586063    5644 command_runner.go:130] > Feb 10 12:22:44 multinode-032400 dockerd[1108]: time="2025-02-10T12:22:44.148327693Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0210 12:23:06.586063    5644 command_runner.go:130] > Feb 10 12:22:44 multinode-032400 dockerd[1108]: time="2025-02-10T12:22:44.148398496Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:06.586063    5644 command_runner.go:130] > Feb 10 12:22:44 multinode-032400 dockerd[1108]: time="2025-02-10T12:22:44.150441890Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:06.586063    5644 command_runner.go:130] > Feb 10 12:23:03 multinode-032400 dockerd[1108]: time="2025-02-10T12:23:03.123856000Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0210 12:23:06.586063    5644 command_runner.go:130] > Feb 10 12:23:03 multinode-032400 dockerd[1108]: time="2025-02-10T12:23:03.127018550Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0210 12:23:06.586063    5644 command_runner.go:130] > Feb 10 12:23:03 multinode-032400 dockerd[1108]: time="2025-02-10T12:23:03.127103354Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:06.586828    5644 command_runner.go:130] > Feb 10 12:23:03 multinode-032400 dockerd[1108]: time="2025-02-10T12:23:03.127439770Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:06.586828    5644 command_runner.go:130] > Feb 10 12:23:03 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:23:03Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e58006549b603113870cd02660dd94559d10ecb0913a1bdd2c81910c3d60a706/resolv.conf as [nameserver 172.29.128.1]"
	I0210 12:23:06.586828    5644 command_runner.go:130] > Feb 10 12:23:03 multinode-032400 dockerd[1108]: time="2025-02-10T12:23:03.402609649Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0210 12:23:06.586828    5644 command_runner.go:130] > Feb 10 12:23:03 multinode-032400 dockerd[1108]: time="2025-02-10T12:23:03.402766755Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0210 12:23:06.586828    5644 command_runner.go:130] > Feb 10 12:23:03 multinode-032400 dockerd[1108]: time="2025-02-10T12:23:03.402783356Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:06.586828    5644 command_runner.go:130] > Feb 10 12:23:03 multinode-032400 dockerd[1108]: time="2025-02-10T12:23:03.402879760Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:06.586960    5644 command_runner.go:130] > Feb 10 12:23:03 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:23:03Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0f8bbdd2fed29af0f92968c554c574d411a6dcf8a8d801926012379ff2a258af/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	I0210 12:23:06.586960    5644 command_runner.go:130] > Feb 10 12:23:03 multinode-032400 dockerd[1108]: time="2025-02-10T12:23:03.617663803Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0210 12:23:06.587022    5644 command_runner.go:130] > Feb 10 12:23:03 multinode-032400 dockerd[1108]: time="2025-02-10T12:23:03.618733746Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0210 12:23:06.587022    5644 command_runner.go:130] > Feb 10 12:23:03 multinode-032400 dockerd[1108]: time="2025-02-10T12:23:03.619050158Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:06.587022    5644 command_runner.go:130] > Feb 10 12:23:03 multinode-032400 dockerd[1108]: time="2025-02-10T12:23:03.619308269Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:06.587094    5644 command_runner.go:130] > Feb 10 12:23:03 multinode-032400 dockerd[1108]: time="2025-02-10T12:23:03.840758177Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0210 12:23:06.587094    5644 command_runner.go:130] > Feb 10 12:23:03 multinode-032400 dockerd[1108]: time="2025-02-10T12:23:03.840943685Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0210 12:23:06.587094    5644 command_runner.go:130] > Feb 10 12:23:03 multinode-032400 dockerd[1108]: time="2025-02-10T12:23:03.840973286Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:06.587165    5644 command_runner.go:130] > Feb 10 12:23:03 multinode-032400 dockerd[1108]: time="2025-02-10T12:23:03.848920902Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:06.615728    5644 logs.go:123] Gathering logs for kube-apiserver [f368bd876774] ...
	I0210 12:23:06.615728    5644 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f368bd876774"
	I0210 12:23:06.646088    5644 command_runner.go:130] ! W0210 12:21:55.142359       1 registry.go:256] calling componentGlobalsRegistry.AddFlags more than once, the registry will be set by the latest flags
	I0210 12:23:06.646911    5644 command_runner.go:130] ! I0210 12:21:55.145301       1 options.go:238] external host was not specified, using 172.29.129.181
	I0210 12:23:06.646911    5644 command_runner.go:130] ! I0210 12:21:55.152669       1 server.go:143] Version: v1.32.1
	I0210 12:23:06.646958    5644 command_runner.go:130] ! I0210 12:21:55.155205       1 server.go:145] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0210 12:23:06.646958    5644 command_runner.go:130] ! I0210 12:21:56.105409       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0210 12:23:06.647003    5644 command_runner.go:130] ! I0210 12:21:56.132590       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0210 12:23:06.647036    5644 command_runner.go:130] ! I0210 12:21:56.143671       1 plugins.go:157] Loaded 13 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I0210 12:23:06.647036    5644 command_runner.go:130] ! I0210 12:21:56.143842       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0210 12:23:06.647089    5644 command_runner.go:130] ! I0210 12:21:56.149478       1 instance.go:233] Using reconciler: lease
	I0210 12:23:06.647294    5644 command_runner.go:130] ! I0210 12:21:56.242968       1 handler.go:286] Adding GroupVersion apiextensions.k8s.io v1 to ResourceManager
	I0210 12:23:06.647294    5644 command_runner.go:130] ! W0210 12:21:56.243233       1 genericapiserver.go:767] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
	I0210 12:23:06.647294    5644 command_runner.go:130] ! I0210 12:21:56.576352       1 handler.go:286] Adding GroupVersion  v1 to ResourceManager
	I0210 12:23:06.647294    5644 command_runner.go:130] ! I0210 12:21:56.576865       1 apis.go:106] API group "internal.apiserver.k8s.io" is not enabled, skipping.
	I0210 12:23:06.647294    5644 command_runner.go:130] ! I0210 12:21:56.980973       1 apis.go:106] API group "storagemigration.k8s.io" is not enabled, skipping.
	I0210 12:23:06.647294    5644 command_runner.go:130] ! I0210 12:21:57.288861       1 apis.go:106] API group "resource.k8s.io" is not enabled, skipping.
	I0210 12:23:06.647294    5644 command_runner.go:130] ! I0210 12:21:57.344145       1 handler.go:286] Adding GroupVersion authentication.k8s.io v1 to ResourceManager
	I0210 12:23:06.647294    5644 command_runner.go:130] ! W0210 12:21:57.344213       1 genericapiserver.go:767] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
	I0210 12:23:06.647294    5644 command_runner.go:130] ! W0210 12:21:57.344222       1 genericapiserver.go:767] Skipping API authentication.k8s.io/v1alpha1 because it has no resources.
	I0210 12:23:06.647294    5644 command_runner.go:130] ! I0210 12:21:57.345004       1 handler.go:286] Adding GroupVersion authorization.k8s.io v1 to ResourceManager
	I0210 12:23:06.647294    5644 command_runner.go:130] ! W0210 12:21:57.345107       1 genericapiserver.go:767] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
	I0210 12:23:06.647294    5644 command_runner.go:130] ! I0210 12:21:57.346842       1 handler.go:286] Adding GroupVersion autoscaling v2 to ResourceManager
	I0210 12:23:06.647294    5644 command_runner.go:130] ! I0210 12:21:57.348477       1 handler.go:286] Adding GroupVersion autoscaling v1 to ResourceManager
	I0210 12:23:06.647294    5644 command_runner.go:130] ! W0210 12:21:57.349989       1 genericapiserver.go:767] Skipping API autoscaling/v2beta1 because it has no resources.
	I0210 12:23:06.647294    5644 command_runner.go:130] ! W0210 12:21:57.349999       1 genericapiserver.go:767] Skipping API autoscaling/v2beta2 because it has no resources.
	I0210 12:23:06.647294    5644 command_runner.go:130] ! I0210 12:21:57.351719       1 handler.go:286] Adding GroupVersion batch v1 to ResourceManager
	I0210 12:23:06.647294    5644 command_runner.go:130] ! W0210 12:21:57.351750       1 genericapiserver.go:767] Skipping API batch/v1beta1 because it has no resources.
	I0210 12:23:06.647294    5644 command_runner.go:130] ! I0210 12:21:57.352799       1 handler.go:286] Adding GroupVersion certificates.k8s.io v1 to ResourceManager
	I0210 12:23:06.647294    5644 command_runner.go:130] ! W0210 12:21:57.352837       1 genericapiserver.go:767] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
	I0210 12:23:06.647294    5644 command_runner.go:130] ! W0210 12:21:57.352843       1 genericapiserver.go:767] Skipping API certificates.k8s.io/v1alpha1 because it has no resources.
	I0210 12:23:06.647294    5644 command_runner.go:130] ! I0210 12:21:57.353578       1 handler.go:286] Adding GroupVersion coordination.k8s.io v1 to ResourceManager
	I0210 12:23:06.647294    5644 command_runner.go:130] ! W0210 12:21:57.353613       1 genericapiserver.go:767] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
	I0210 12:23:06.647294    5644 command_runner.go:130] ! W0210 12:21:57.353620       1 genericapiserver.go:767] Skipping API coordination.k8s.io/v1alpha2 because it has no resources.
	I0210 12:23:06.647294    5644 command_runner.go:130] ! I0210 12:21:57.354314       1 handler.go:286] Adding GroupVersion discovery.k8s.io v1 to ResourceManager
	I0210 12:23:06.647294    5644 command_runner.go:130] ! W0210 12:21:57.354346       1 genericapiserver.go:767] Skipping API discovery.k8s.io/v1beta1 because it has no resources.
	I0210 12:23:06.647294    5644 command_runner.go:130] ! I0210 12:21:57.356000       1 handler.go:286] Adding GroupVersion networking.k8s.io v1 to ResourceManager
	I0210 12:23:06.647294    5644 command_runner.go:130] ! W0210 12:21:57.356105       1 genericapiserver.go:767] Skipping API networking.k8s.io/v1beta1 because it has no resources.
	I0210 12:23:06.647294    5644 command_runner.go:130] ! W0210 12:21:57.356115       1 genericapiserver.go:767] Skipping API networking.k8s.io/v1alpha1 because it has no resources.
	I0210 12:23:06.647992    5644 command_runner.go:130] ! I0210 12:21:57.356604       1 handler.go:286] Adding GroupVersion node.k8s.io v1 to ResourceManager
	I0210 12:23:06.648026    5644 command_runner.go:130] ! W0210 12:21:57.356637       1 genericapiserver.go:767] Skipping API node.k8s.io/v1beta1 because it has no resources.
	I0210 12:23:06.648150    5644 command_runner.go:130] ! W0210 12:21:57.356644       1 genericapiserver.go:767] Skipping API node.k8s.io/v1alpha1 because it has no resources.
	I0210 12:23:06.648150    5644 command_runner.go:130] ! I0210 12:21:57.357607       1 handler.go:286] Adding GroupVersion policy v1 to ResourceManager
	I0210 12:23:06.648150    5644 command_runner.go:130] ! W0210 12:21:57.357643       1 genericapiserver.go:767] Skipping API policy/v1beta1 because it has no resources.
	I0210 12:23:06.648150    5644 command_runner.go:130] ! I0210 12:21:57.359912       1 handler.go:286] Adding GroupVersion rbac.authorization.k8s.io v1 to ResourceManager
	I0210 12:23:06.648150    5644 command_runner.go:130] ! W0210 12:21:57.359944       1 genericapiserver.go:767] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
	I0210 12:23:06.648150    5644 command_runner.go:130] ! W0210 12:21:57.359952       1 genericapiserver.go:767] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
	I0210 12:23:06.648150    5644 command_runner.go:130] ! I0210 12:21:57.360554       1 handler.go:286] Adding GroupVersion scheduling.k8s.io v1 to ResourceManager
	I0210 12:23:06.648150    5644 command_runner.go:130] ! W0210 12:21:57.360628       1 genericapiserver.go:767] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
	I0210 12:23:06.648150    5644 command_runner.go:130] ! W0210 12:21:57.360635       1 genericapiserver.go:767] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
	I0210 12:23:06.648150    5644 command_runner.go:130] ! I0210 12:21:57.363612       1 handler.go:286] Adding GroupVersion storage.k8s.io v1 to ResourceManager
	I0210 12:23:06.648150    5644 command_runner.go:130] ! W0210 12:21:57.363646       1 genericapiserver.go:767] Skipping API storage.k8s.io/v1beta1 because it has no resources.
	I0210 12:23:06.648150    5644 command_runner.go:130] ! W0210 12:21:57.363653       1 genericapiserver.go:767] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
	I0210 12:23:06.648150    5644 command_runner.go:130] ! I0210 12:21:57.365567       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1 to ResourceManager
	I0210 12:23:06.648150    5644 command_runner.go:130] ! W0210 12:21:57.365626       1 genericapiserver.go:767] Skipping API flowcontrol.apiserver.k8s.io/v1beta3 because it has no resources.
	I0210 12:23:06.648150    5644 command_runner.go:130] ! W0210 12:21:57.365637       1 genericapiserver.go:767] Skipping API flowcontrol.apiserver.k8s.io/v1beta2 because it has no resources.
	I0210 12:23:06.648150    5644 command_runner.go:130] ! W0210 12:21:57.365642       1 genericapiserver.go:767] Skipping API flowcontrol.apiserver.k8s.io/v1beta1 because it has no resources.
	I0210 12:23:06.648150    5644 command_runner.go:130] ! I0210 12:21:57.371693       1 handler.go:286] Adding GroupVersion apps v1 to ResourceManager
	I0210 12:23:06.648150    5644 command_runner.go:130] ! W0210 12:21:57.371726       1 genericapiserver.go:767] Skipping API apps/v1beta2 because it has no resources.
	I0210 12:23:06.648150    5644 command_runner.go:130] ! W0210 12:21:57.371732       1 genericapiserver.go:767] Skipping API apps/v1beta1 because it has no resources.
	I0210 12:23:06.648150    5644 command_runner.go:130] ! I0210 12:21:57.374238       1 handler.go:286] Adding GroupVersion admissionregistration.k8s.io v1 to ResourceManager
	I0210 12:23:06.648150    5644 command_runner.go:130] ! W0210 12:21:57.374275       1 genericapiserver.go:767] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
	I0210 12:23:06.648150    5644 command_runner.go:130] ! W0210 12:21:57.374303       1 genericapiserver.go:767] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
	I0210 12:23:06.648150    5644 command_runner.go:130] ! I0210 12:21:57.375143       1 handler.go:286] Adding GroupVersion events.k8s.io v1 to ResourceManager
	I0210 12:23:06.648150    5644 command_runner.go:130] ! W0210 12:21:57.375210       1 genericapiserver.go:767] Skipping API events.k8s.io/v1beta1 because it has no resources.
	I0210 12:23:06.648150    5644 command_runner.go:130] ! I0210 12:21:57.389235       1 handler.go:286] Adding GroupVersion apiregistration.k8s.io v1 to ResourceManager
	I0210 12:23:06.648150    5644 command_runner.go:130] ! W0210 12:21:57.389296       1 genericapiserver.go:767] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
	I0210 12:23:06.648827    5644 command_runner.go:130] ! I0210 12:21:58.039635       1 secure_serving.go:213] Serving securely on [::]:8443
	I0210 12:23:06.648827    5644 command_runner.go:130] ! I0210 12:21:58.039773       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0210 12:23:06.648827    5644 command_runner.go:130] ! I0210 12:21:58.040121       1 dynamic_serving_content.go:135] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0210 12:23:06.648827    5644 command_runner.go:130] ! I0210 12:21:58.040710       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0210 12:23:06.648827    5644 command_runner.go:130] ! I0210 12:21:58.048362       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0210 12:23:06.648827    5644 command_runner.go:130] ! I0210 12:21:58.048918       1 customresource_discovery_controller.go:292] Starting DiscoveryController
	I0210 12:23:06.648827    5644 command_runner.go:130] ! I0210 12:21:58.049825       1 cluster_authentication_trust_controller.go:462] Starting cluster_authentication_trust_controller controller
	I0210 12:23:06.648827    5644 command_runner.go:130] ! I0210 12:21:58.049971       1 shared_informer.go:313] Waiting for caches to sync for cluster_authentication_trust_controller
	I0210 12:23:06.648827    5644 command_runner.go:130] ! I0210 12:21:58.052014       1 local_available_controller.go:156] Starting LocalAvailability controller
	I0210 12:23:06.648827    5644 command_runner.go:130] ! I0210 12:21:58.052237       1 cache.go:32] Waiting for caches to sync for LocalAvailability controller
	I0210 12:23:06.648827    5644 command_runner.go:130] ! I0210 12:21:58.052355       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0210 12:23:06.648827    5644 command_runner.go:130] ! I0210 12:21:58.052595       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0210 12:23:06.648827    5644 command_runner.go:130] ! I0210 12:21:58.052911       1 controller.go:78] Starting OpenAPI AggregationController
	I0210 12:23:06.648827    5644 command_runner.go:130] ! I0210 12:21:58.053131       1 controller.go:119] Starting legacy_token_tracking_controller
	I0210 12:23:06.648827    5644 command_runner.go:130] ! I0210 12:21:58.053221       1 shared_informer.go:313] Waiting for caches to sync for configmaps
	I0210 12:23:06.648827    5644 command_runner.go:130] ! I0210 12:21:58.053335       1 apf_controller.go:377] Starting API Priority and Fairness config controller
	I0210 12:23:06.648827    5644 command_runner.go:130] ! I0210 12:21:58.053483       1 remote_available_controller.go:411] Starting RemoteAvailability controller
	I0210 12:23:06.648827    5644 command_runner.go:130] ! I0210 12:21:58.053515       1 cache.go:32] Waiting for caches to sync for RemoteAvailability controller
	I0210 12:23:06.648827    5644 command_runner.go:130] ! I0210 12:21:58.053696       1 dynamic_serving_content.go:135] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0210 12:23:06.648827    5644 command_runner.go:130] ! I0210 12:21:58.054087       1 system_namespaces_controller.go:66] Starting system namespaces controller
	I0210 12:23:06.648827    5644 command_runner.go:130] ! I0210 12:21:58.054528       1 apiservice_controller.go:100] Starting APIServiceRegistrationController
	I0210 12:23:06.648827    5644 command_runner.go:130] ! I0210 12:21:58.054570       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0210 12:23:06.648827    5644 command_runner.go:130] ! I0210 12:21:58.054742       1 aggregator.go:169] waiting for initial CRD sync...
	I0210 12:23:06.648827    5644 command_runner.go:130] ! I0210 12:21:58.055217       1 controller.go:142] Starting OpenAPI controller
	I0210 12:23:06.648827    5644 command_runner.go:130] ! I0210 12:21:58.055546       1 controller.go:90] Starting OpenAPI V3 controller
	I0210 12:23:06.648827    5644 command_runner.go:130] ! I0210 12:21:58.055757       1 naming_controller.go:294] Starting NamingConditionController
	I0210 12:23:06.648827    5644 command_runner.go:130] ! I0210 12:21:58.056074       1 establishing_controller.go:81] Starting EstablishingController
	I0210 12:23:06.648827    5644 command_runner.go:130] ! I0210 12:21:58.056264       1 nonstructuralschema_controller.go:195] Starting NonStructuralSchemaConditionController
	I0210 12:23:06.648827    5644 command_runner.go:130] ! I0210 12:21:58.056315       1 apiapproval_controller.go:189] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0210 12:23:06.648827    5644 command_runner.go:130] ! I0210 12:21:58.056330       1 crd_finalizer.go:269] Starting CRDFinalizer
	I0210 12:23:06.648827    5644 command_runner.go:130] ! I0210 12:21:58.056364       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0210 12:23:06.648827    5644 command_runner.go:130] ! I0210 12:21:58.056531       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0210 12:23:06.648827    5644 command_runner.go:130] ! I0210 12:21:58.082011       1 crdregistration_controller.go:114] Starting crd-autoregister controller
	I0210 12:23:06.648827    5644 command_runner.go:130] ! I0210 12:21:58.082050       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0210 12:23:06.648827    5644 command_runner.go:130] ! I0210 12:21:58.191638       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0210 12:23:06.648827    5644 command_runner.go:130] ! I0210 12:21:58.191858       1 policy_source.go:240] refreshing policies
	I0210 12:23:06.649606    5644 command_runner.go:130] ! I0210 12:21:58.208845       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0210 12:23:06.649606    5644 command_runner.go:130] ! I0210 12:21:58.215564       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0210 12:23:06.649606    5644 command_runner.go:130] ! I0210 12:21:58.251691       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0210 12:23:06.649606    5644 command_runner.go:130] ! I0210 12:21:58.252469       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0210 12:23:06.649606    5644 command_runner.go:130] ! I0210 12:21:58.253733       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0210 12:23:06.649606    5644 command_runner.go:130] ! I0210 12:21:58.254978       1 shared_informer.go:320] Caches are synced for configmaps
	I0210 12:23:06.649606    5644 command_runner.go:130] ! I0210 12:21:58.255116       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0210 12:23:06.649606    5644 command_runner.go:130] ! I0210 12:21:58.255193       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0210 12:23:06.649606    5644 command_runner.go:130] ! I0210 12:21:58.258943       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0210 12:23:06.649606    5644 command_runner.go:130] ! I0210 12:21:58.265934       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0210 12:23:06.649606    5644 command_runner.go:130] ! I0210 12:21:58.282090       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0210 12:23:06.649606    5644 command_runner.go:130] ! I0210 12:21:58.282373       1 aggregator.go:171] initial CRD sync complete...
	I0210 12:23:06.649606    5644 command_runner.go:130] ! I0210 12:21:58.282566       1 autoregister_controller.go:144] Starting autoregister controller
	I0210 12:23:06.649606    5644 command_runner.go:130] ! I0210 12:21:58.282687       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0210 12:23:06.649606    5644 command_runner.go:130] ! I0210 12:21:58.282810       1 cache.go:39] Caches are synced for autoregister controller
	I0210 12:23:06.649606    5644 command_runner.go:130] ! I0210 12:21:59.040333       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0210 12:23:06.649606    5644 command_runner.go:130] ! I0210 12:21:59.110768       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0210 12:23:06.649606    5644 command_runner.go:130] ! W0210 12:21:59.681925       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.29.129.181]
	I0210 12:23:06.649606    5644 command_runner.go:130] ! I0210 12:21:59.683607       1 controller.go:615] quota admission added evaluator for: endpoints
	I0210 12:23:06.649606    5644 command_runner.go:130] ! I0210 12:21:59.696506       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0210 12:23:06.649606    5644 command_runner.go:130] ! I0210 12:22:01.024466       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0210 12:23:06.649606    5644 command_runner.go:130] ! I0210 12:22:01.275329       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0210 12:23:06.649606    5644 command_runner.go:130] ! I0210 12:22:01.581954       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0210 12:23:06.649606    5644 command_runner.go:130] ! I0210 12:22:01.624349       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0210 12:23:06.649606    5644 command_runner.go:130] ! I0210 12:22:01.647929       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0210 12:23:06.658348    5644 logs.go:123] Gathering logs for etcd [2c0b97381825] ...
	I0210 12:23:06.658420    5644 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c0b97381825"
	I0210 12:23:06.686206    5644 command_runner.go:130] ! {"level":"warn","ts":"2025-02-10T12:21:54.704341Z","caller":"embed/config.go:689","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0210 12:23:06.686508    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.704447Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://172.29.129.181:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://172.29.129.181:2380","--initial-cluster=multinode-032400=https://172.29.129.181:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://172.29.129.181:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://172.29.129.181:2380","--name=multinode-032400","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	I0210 12:23:06.686545    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.704520Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I0210 12:23:06.686598    5644 command_runner.go:130] ! {"level":"warn","ts":"2025-02-10T12:21:54.704892Z","caller":"embed/config.go:689","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0210 12:23:06.686659    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.704933Z","caller":"embed/etcd.go:128","msg":"configuring peer listeners","listen-peer-urls":["https://172.29.129.181:2380"]}
	I0210 12:23:06.686659    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.704972Z","caller":"embed/etcd.go:497","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0210 12:23:06.686659    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.708617Z","caller":"embed/etcd.go:136","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://172.29.129.181:2379"]}
	I0210 12:23:06.686659    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.709796Z","caller":"embed/etcd.go:311","msg":"starting an etcd server","etcd-version":"3.5.16","git-sha":"f20bbad","go-version":"go1.22.7","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"multinode-032400","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://172.29.129.181:2380"],"listen-peer-urls":["https://172.29.129.181:2380"],"advertise-client-urls":["https://172.29.129.181:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.29.129.181:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	I0210 12:23:06.686659    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.729354Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"16.974017ms"}
	I0210 12:23:06.686659    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.755049Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	I0210 12:23:06.686659    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.785036Z","caller":"etcdserver/raft.go:540","msg":"restarting local member","cluster-id":"939f48ba2d0ec869","local-member-id":"9ecc865dcee1fe8f","commit-index":2031}
	I0210 12:23:06.686659    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.786308Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9ecc865dcee1fe8f switched to configuration voters=()"}
	I0210 12:23:06.686659    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.786538Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9ecc865dcee1fe8f became follower at term 2"}
	I0210 12:23:06.686659    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.786684Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 9ecc865dcee1fe8f [peers: [], term: 2, commit: 2031, applied: 0, lastindex: 2031, lastterm: 2]"}
	I0210 12:23:06.686659    5644 command_runner.go:130] ! {"level":"warn","ts":"2025-02-10T12:21:54.799505Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	I0210 12:23:06.686659    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.805220Z","caller":"mvcc/kvstore.go:346","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":1385}
	I0210 12:23:06.686659    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.819723Z","caller":"mvcc/kvstore.go:423","msg":"kvstore restored","current-rev":1757}
	I0210 12:23:06.686659    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.831867Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I0210 12:23:06.686659    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.839898Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"9ecc865dcee1fe8f","timeout":"7s"}
	I0210 12:23:06.686659    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.841678Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"9ecc865dcee1fe8f"}
	I0210 12:23:06.686659    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.841933Z","caller":"etcdserver/server.go:873","msg":"starting etcd server","local-member-id":"9ecc865dcee1fe8f","local-server-version":"3.5.16","cluster-version":"to_be_decided"}
	I0210 12:23:06.686659    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.842749Z","caller":"etcdserver/server.go:773","msg":"starting initial election tick advance","election-ticks":10}
	I0210 12:23:06.687205    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.844230Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	I0210 12:23:06.687246    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.846545Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0210 12:23:06.687297    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.847568Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"9ecc865dcee1fe8f","initial-advertise-peer-urls":["https://172.29.129.181:2380"],"listen-peer-urls":["https://172.29.129.181:2380"],"advertise-client-urls":["https://172.29.129.181:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.29.129.181:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I0210 12:23:06.687329    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.848293Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I0210 12:23:06.687358    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.848857Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I0210 12:23:06.687358    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.848872Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I0210 12:23:06.687358    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.849451Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I0210 12:23:06.687358    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.848623Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9ecc865dcee1fe8f switched to configuration voters=(11442668490702585487)"}
	I0210 12:23:06.687358    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.849568Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"939f48ba2d0ec869","local-member-id":"9ecc865dcee1fe8f","added-peer-id":"9ecc865dcee1fe8f","added-peer-peer-urls":["https://172.29.136.201:2380"]}
	I0210 12:23:06.687358    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.849668Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"939f48ba2d0ec869","local-member-id":"9ecc865dcee1fe8f","cluster-version":"3.5"}
	I0210 12:23:06.687358    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.849697Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	I0210 12:23:06.687358    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.848659Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"172.29.129.181:2380"}
	I0210 12:23:06.687358    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.850838Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"172.29.129.181:2380"}
	I0210 12:23:06.687358    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:56.088224Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9ecc865dcee1fe8f is starting a new election at term 2"}
	I0210 12:23:06.687358    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:56.088502Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9ecc865dcee1fe8f became pre-candidate at term 2"}
	I0210 12:23:06.687358    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:56.088614Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9ecc865dcee1fe8f received MsgPreVoteResp from 9ecc865dcee1fe8f at term 2"}
	I0210 12:23:06.687358    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:56.088706Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9ecc865dcee1fe8f became candidate at term 3"}
	I0210 12:23:06.687358    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:56.088790Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9ecc865dcee1fe8f received MsgVoteResp from 9ecc865dcee1fe8f at term 3"}
	I0210 12:23:06.687358    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:56.088924Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9ecc865dcee1fe8f became leader at term 3"}
	I0210 12:23:06.687358    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:56.089002Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9ecc865dcee1fe8f elected leader 9ecc865dcee1fe8f at term 3"}
	I0210 12:23:06.687358    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:56.095452Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"9ecc865dcee1fe8f","local-member-attributes":"{Name:multinode-032400 ClientURLs:[https://172.29.129.181:2379]}","request-path":"/0/members/9ecc865dcee1fe8f/attributes","cluster-id":"939f48ba2d0ec869","publish-timeout":"7s"}
	I0210 12:23:06.687358    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:56.095630Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0210 12:23:06.687358    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:56.098215Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I0210 12:23:06.687358    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:56.098453Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I0210 12:23:06.687358    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:56.095689Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0210 12:23:06.687911    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:56.109624Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	I0210 12:23:06.687911    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:56.115942Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	I0210 12:23:06.687970    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:56.120127Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	I0210 12:23:06.687970    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:56.126374Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.29.129.181:2379"}
	I0210 12:23:06.695037    5644 logs.go:123] Gathering logs for kube-controller-manager [9408ce83d7d3] ...
	I0210 12:23:06.695037    5644 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9408ce83d7d3"
	I0210 12:23:06.735927    5644 command_runner.go:130] ! I0210 11:58:59.087911       1 serving.go:386] Generated self-signed cert in-memory
	I0210 12:23:06.736011    5644 command_runner.go:130] ! I0210 11:59:00.079684       1 controllermanager.go:185] "Starting" version="v1.32.1"
	I0210 12:23:06.736011    5644 command_runner.go:130] ! I0210 11:59:00.079828       1 controllermanager.go:187] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0210 12:23:06.736011    5644 command_runner.go:130] ! I0210 11:59:00.082257       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0210 12:23:06.736011    5644 command_runner.go:130] ! I0210 11:59:00.082445       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0210 12:23:06.736011    5644 command_runner.go:130] ! I0210 11:59:00.082714       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0210 12:23:06.736011    5644 command_runner.go:130] ! I0210 11:59:00.083168       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0210 12:23:06.736011    5644 command_runner.go:130] ! I0210 11:59:07.525093       1 controllermanager.go:765] "Started controller" controller="serviceaccount-token-controller"
	I0210 12:23:06.736011    5644 command_runner.go:130] ! I0210 11:59:07.525455       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0210 12:23:06.736011    5644 command_runner.go:130] ! I0210 11:59:07.550577       1 controllermanager.go:765] "Started controller" controller="ttl-controller"
	I0210 12:23:06.736011    5644 command_runner.go:130] ! I0210 11:59:07.550894       1 ttl_controller.go:127] "Starting TTL controller" logger="ttl-controller"
	I0210 12:23:06.736011    5644 command_runner.go:130] ! I0210 11:59:07.550923       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0210 12:23:06.736011    5644 command_runner.go:130] ! I0210 11:59:07.575286       1 controllermanager.go:765] "Started controller" controller="token-cleaner-controller"
	I0210 12:23:06.736011    5644 command_runner.go:130] ! I0210 11:59:07.575386       1 tokencleaner.go:117] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0210 12:23:06.736011    5644 command_runner.go:130] ! I0210 11:59:07.575519       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0210 12:23:06.736011    5644 command_runner.go:130] ! I0210 11:59:07.575529       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0210 12:23:06.736011    5644 command_runner.go:130] ! I0210 11:59:07.608411       1 controllermanager.go:765] "Started controller" controller="ephemeral-volume-controller"
	I0210 12:23:06.736011    5644 command_runner.go:130] ! I0210 11:59:07.608435       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0210 12:23:06.736011    5644 command_runner.go:130] ! I0210 11:59:07.608574       1 controller.go:173] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0210 12:23:06.736011    5644 command_runner.go:130] ! I0210 11:59:07.608594       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0210 12:23:06.736011    5644 command_runner.go:130] ! I0210 11:59:07.626624       1 shared_informer.go:320] Caches are synced for tokens
	I0210 12:23:06.736011    5644 command_runner.go:130] ! I0210 11:59:07.632106       1 controllermanager.go:765] "Started controller" controller="job-controller"
	I0210 12:23:06.736011    5644 command_runner.go:130] ! I0210 11:59:07.632319       1 job_controller.go:243] "Starting job controller" logger="job-controller"
	I0210 12:23:06.736011    5644 command_runner.go:130] ! I0210 11:59:07.632332       1 shared_informer.go:313] Waiting for caches to sync for job
	I0210 12:23:06.736011    5644 command_runner.go:130] ! I0210 11:59:07.694202       1 controllermanager.go:765] "Started controller" controller="cronjob-controller"
	I0210 12:23:06.736011    5644 command_runner.go:130] ! I0210 11:59:07.694994       1 cronjob_controllerv2.go:145] "Starting cronjob controller v2" logger="cronjob-controller"
	I0210 12:23:06.736011    5644 command_runner.go:130] ! I0210 11:59:07.697650       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0210 12:23:06.736011    5644 command_runner.go:130] ! I0210 11:59:07.765406       1 controllermanager.go:765] "Started controller" controller="deployment-controller"
	I0210 12:23:06.736011    5644 command_runner.go:130] ! I0210 11:59:07.765979       1 deployment_controller.go:173] "Starting controller" logger="deployment-controller" controller="deployment"
	I0210 12:23:06.736011    5644 command_runner.go:130] ! I0210 11:59:07.765997       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0210 12:23:06.736011    5644 command_runner.go:130] ! I0210 11:59:07.782342       1 controllermanager.go:765] "Started controller" controller="replicaset-controller"
	I0210 12:23:06.736011    5644 command_runner.go:130] ! I0210 11:59:07.782670       1 replica_set.go:217] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0210 12:23:06.736011    5644 command_runner.go:130] ! I0210 11:59:07.782685       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0210 12:23:06.736011    5644 command_runner.go:130] ! I0210 11:59:07.850466       1 controllermanager.go:765] "Started controller" controller="disruption-controller"
	I0210 12:23:06.736011    5644 command_runner.go:130] ! I0210 11:59:07.850651       1 controllermanager.go:723] "Skipping a cloud provider controller" controller="node-route-controller"
	I0210 12:23:06.736011    5644 command_runner.go:130] ! I0210 11:59:07.850629       1 disruption.go:452] "Sending events to api server." logger="disruption-controller"
	I0210 12:23:06.736011    5644 command_runner.go:130] ! I0210 11:59:07.850833       1 disruption.go:463] "Starting disruption controller" logger="disruption-controller"
	I0210 12:23:06.736011    5644 command_runner.go:130] ! I0210 11:59:07.850844       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0210 12:23:06.736011    5644 command_runner.go:130] ! I0210 11:59:07.880892       1 controllermanager.go:765] "Started controller" controller="persistentvolume-binder-controller"
	I0210 12:23:06.736011    5644 command_runner.go:130] ! I0210 11:59:07.881116       1 pv_controller_base.go:308] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0210 12:23:06.736011    5644 command_runner.go:130] ! I0210 11:59:07.881129       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0210 12:23:06.736577    5644 command_runner.go:130] ! I0210 11:59:07.930262       1 controllermanager.go:765] "Started controller" controller="clusterrole-aggregation-controller"
	I0210 12:23:06.736577    5644 command_runner.go:130] ! I0210 11:59:07.930372       1 clusterroleaggregation_controller.go:194] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0210 12:23:06.736577    5644 command_runner.go:130] ! I0210 11:59:07.930897       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0210 12:23:06.736577    5644 command_runner.go:130] ! I0210 11:59:07.945659       1 controllermanager.go:765] "Started controller" controller="pod-garbage-collector-controller"
	I0210 12:23:06.736667    5644 command_runner.go:130] ! I0210 11:59:07.946579       1 gc_controller.go:99] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0210 12:23:06.736667    5644 command_runner.go:130] ! I0210 11:59:07.946751       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0210 12:23:06.736667    5644 command_runner.go:130] ! I0210 11:59:07.997690       1 controllermanager.go:765] "Started controller" controller="namespace-controller"
	I0210 12:23:06.736667    5644 command_runner.go:130] ! I0210 11:59:07.998189       1 controllermanager.go:723] "Skipping a cloud provider controller" controller="cloud-node-lifecycle-controller"
	I0210 12:23:06.736667    5644 command_runner.go:130] ! I0210 11:59:07.997759       1 namespace_controller.go:202] "Starting namespace controller" logger="namespace-controller"
	I0210 12:23:06.736667    5644 command_runner.go:130] ! I0210 11:59:07.998323       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0210 12:23:06.736745    5644 command_runner.go:130] ! I0210 11:59:08.135040       1 controllermanager.go:765] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0210 12:23:06.736745    5644 command_runner.go:130] ! I0210 11:59:08.135118       1 pvc_protection_controller.go:168] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0210 12:23:06.736745    5644 command_runner.go:130] ! I0210 11:59:08.135130       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0210 12:23:06.736745    5644 command_runner.go:130] ! I0210 11:59:08.290937       1 controllermanager.go:765] "Started controller" controller="persistentvolume-protection-controller"
	I0210 12:23:06.736822    5644 command_runner.go:130] ! I0210 11:59:08.291080       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="kube-apiserver-serving-clustertrustbundle-publisher-controller" requiredFeatureGates=["ClusterTrustBundle"]
	I0210 12:23:06.736822    5644 command_runner.go:130] ! I0210 11:59:08.293569       1 pv_protection_controller.go:81] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0210 12:23:06.736822    5644 command_runner.go:130] ! I0210 11:59:08.293594       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0210 12:23:06.736899    5644 command_runner.go:130] ! I0210 11:59:08.435030       1 controllermanager.go:765] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0210 12:23:06.736899    5644 command_runner.go:130] ! I0210 11:59:08.435146       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0210 12:23:06.736899    5644 command_runner.go:130] ! I0210 11:59:08.435984       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0210 12:23:06.736963    5644 command_runner.go:130] ! I0210 11:59:08.742172       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0210 12:23:06.736963    5644 command_runner.go:130] ! I0210 11:59:08.742257       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0210 12:23:06.736963    5644 command_runner.go:130] ! I0210 11:59:08.742274       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0210 12:23:06.737028    5644 command_runner.go:130] ! I0210 11:59:08.742293       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0210 12:23:06.737028    5644 command_runner.go:130] ! I0210 11:59:08.742308       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0210 12:23:06.737028    5644 command_runner.go:130] ! I0210 11:59:08.742326       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0210 12:23:06.737095    5644 command_runner.go:130] ! I0210 11:59:08.742346       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0210 12:23:06.737095    5644 command_runner.go:130] ! I0210 11:59:08.742463       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0210 12:23:06.737095    5644 command_runner.go:130] ! I0210 11:59:08.742481       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0210 12:23:06.737163    5644 command_runner.go:130] ! I0210 11:59:08.742527       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0210 12:23:06.737163    5644 command_runner.go:130] ! I0210 11:59:08.742551       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0210 12:23:06.737163    5644 command_runner.go:130] ! I0210 11:59:08.742568       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0210 12:23:06.737233    5644 command_runner.go:130] ! I0210 11:59:08.742584       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0210 12:23:06.737233    5644 command_runner.go:130] ! W0210 11:59:08.742597       1 shared_informer.go:597] resyncPeriod 20h8m15.80202588s is smaller than resyncCheckPeriod 22h17m54.981661418s and the informer has already started. Changing it to 22h17m54.981661418s
	I0210 12:23:06.737233    5644 command_runner.go:130] ! I0210 11:59:08.742631       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0210 12:23:06.737296    5644 command_runner.go:130] ! I0210 11:59:08.742652       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0210 12:23:06.737296    5644 command_runner.go:130] ! I0210 11:59:08.742674       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0210 12:23:06.737358    5644 command_runner.go:130] ! W0210 11:59:08.742683       1 shared_informer.go:597] resyncPeriod 18h34m58.865598394s is smaller than resyncCheckPeriod 22h17m54.981661418s and the informer has already started. Changing it to 22h17m54.981661418s
	I0210 12:23:06.737387    5644 command_runner.go:130] ! I0210 11:59:08.742710       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0210 12:23:06.737387    5644 command_runner.go:130] ! I0210 11:59:08.742733       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0210 12:23:06.737424    5644 command_runner.go:130] ! I0210 11:59:08.742757       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0210 12:23:06.737463    5644 command_runner.go:130] ! I0210 11:59:08.742786       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0210 12:23:06.737497    5644 command_runner.go:130] ! I0210 11:59:08.742950       1 controllermanager.go:765] "Started controller" controller="resourcequota-controller"
	I0210 12:23:06.737497    5644 command_runner.go:130] ! I0210 11:59:08.743011       1 resource_quota_controller.go:300] "Starting resource quota controller" logger="resourcequota-controller"
	I0210 12:23:06.737536    5644 command_runner.go:130] ! I0210 11:59:08.743022       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0210 12:23:06.737536    5644 command_runner.go:130] ! I0210 11:59:08.743050       1 resource_quota_monitor.go:308] "QuotaMonitor running" logger="resourcequota-controller"
	I0210 12:23:06.737570    5644 command_runner.go:130] ! I0210 11:59:08.897782       1 controllermanager.go:765] "Started controller" controller="daemonset-controller"
	I0210 12:23:06.737570    5644 command_runner.go:130] ! I0210 11:59:08.898567       1 daemon_controller.go:294] "Starting daemon sets controller" logger="daemonset-controller"
	I0210 12:23:06.737608    5644 command_runner.go:130] ! I0210 11:59:08.898750       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0210 12:23:06.737608    5644 command_runner.go:130] ! W0210 11:59:09.538965       1 type.go:183] The watchlist request for nodes ended with an error, falling back to the standard LIST semantics, err = nodes is forbidden: User "system:serviceaccount:kube-system:node-controller" cannot watch resource "nodes" in API group "" at the cluster scope
	I0210 12:23:06.737656    5644 command_runner.go:130] ! I0210 11:59:09.557948       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0210 12:23:06.737656    5644 command_runner.go:130] ! I0210 11:59:09.558013       1 controllermanager.go:765] "Started controller" controller="node-ipam-controller"
	I0210 12:23:06.737656    5644 command_runner.go:130] ! I0210 11:59:09.558024       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0210 12:23:06.737719    5644 command_runner.go:130] ! I0210 11:59:09.558263       1 node_ipam_controller.go:141] "Starting ipam controller" logger="node-ipam-controller"
	I0210 12:23:06.737719    5644 command_runner.go:130] ! I0210 11:59:09.558274       1 shared_informer.go:313] Waiting for caches to sync for node
	I0210 12:23:06.737719    5644 command_runner.go:130] ! I0210 11:59:09.587543       1 controllermanager.go:765] "Started controller" controller="serviceaccount-controller"
	I0210 12:23:06.737719    5644 command_runner.go:130] ! I0210 11:59:09.587843       1 serviceaccounts_controller.go:114] "Starting service account controller" logger="serviceaccount-controller"
	I0210 12:23:06.737792    5644 command_runner.go:130] ! I0210 11:59:09.587861       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0210 12:23:06.737792    5644 command_runner.go:130] ! I0210 11:59:09.635254       1 garbagecollector.go:144] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0210 12:23:06.737792    5644 command_runner.go:130] ! I0210 11:59:09.635299       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0210 12:23:06.737792    5644 command_runner.go:130] ! I0210 11:59:09.635329       1 graph_builder.go:351] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0210 12:23:06.737855    5644 command_runner.go:130] ! I0210 11:59:09.636160       1 controllermanager.go:765] "Started controller" controller="garbage-collector-controller"
	I0210 12:23:06.737855    5644 command_runner.go:130] ! I0210 11:59:09.814593       1 controllermanager.go:765] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0210 12:23:06.737855    5644 command_runner.go:130] ! I0210 11:59:09.814752       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0210 12:23:06.737855    5644 command_runner.go:130] ! I0210 11:59:09.814770       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0210 12:23:06.737926    5644 command_runner.go:130] ! I0210 11:59:09.817088       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0210 12:23:06.737926    5644 command_runner.go:130] ! I0210 11:59:09.817114       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0210 12:23:06.737926    5644 command_runner.go:130] ! I0210 11:59:09.817159       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0210 12:23:06.737985    5644 command_runner.go:130] ! I0210 11:59:09.817166       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0210 12:23:06.737985    5644 command_runner.go:130] ! I0210 11:59:09.817276       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0210 12:23:06.737985    5644 command_runner.go:130] ! I0210 11:59:09.817288       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0210 12:23:06.738048    5644 command_runner.go:130] ! I0210 11:59:09.817325       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0210 12:23:06.738048    5644 command_runner.go:130] ! I0210 11:59:09.817457       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0210 12:23:06.738048    5644 command_runner.go:130] ! I0210 11:59:09.817598       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0210 12:23:06.738115    5644 command_runner.go:130] ! I0210 11:59:09.817777       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0210 12:23:06.738115    5644 command_runner.go:130] ! I0210 11:59:09.873976       1 controllermanager.go:765] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0210 12:23:06.738115    5644 command_runner.go:130] ! I0210 11:59:09.874097       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0210 12:23:06.738115    5644 command_runner.go:130] ! I0210 11:59:09.874114       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0210 12:23:06.738182    5644 command_runner.go:130] ! I0210 11:59:10.010350       1 controllermanager.go:765] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0210 12:23:06.738182    5644 command_runner.go:130] ! I0210 11:59:10.010713       1 controllermanager.go:743] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0210 12:23:06.738246    5644 command_runner.go:130] ! I0210 11:59:10.010555       1 attach_detach_controller.go:338] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0210 12:23:06.738246    5644 command_runner.go:130] ! I0210 11:59:10.010999       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0210 12:23:06.738246    5644 command_runner.go:130] ! I0210 11:59:10.148245       1 controllermanager.go:765] "Started controller" controller="endpoints-controller"
	I0210 12:23:06.738312    5644 command_runner.go:130] ! I0210 11:59:10.148336       1 endpoints_controller.go:182] "Starting endpoint controller" logger="endpoints-controller"
	I0210 12:23:06.738312    5644 command_runner.go:130] ! I0210 11:59:10.148619       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0210 12:23:06.738312    5644 command_runner.go:130] ! I0210 11:59:10.294135       1 controllermanager.go:765] "Started controller" controller="statefulset-controller"
	I0210 12:23:06.738312    5644 command_runner.go:130] ! I0210 11:59:10.294378       1 stateful_set.go:166] "Starting stateful set controller" logger="statefulset-controller"
	I0210 12:23:06.738312    5644 command_runner.go:130] ! I0210 11:59:10.294395       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0210 12:23:06.738382    5644 command_runner.go:130] ! I0210 11:59:10.455757       1 controllermanager.go:765] "Started controller" controller="persistentvolume-expander-controller"
	I0210 12:23:06.738382    5644 command_runner.go:130] ! I0210 11:59:10.456357       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0210 12:23:06.738382    5644 command_runner.go:130] ! I0210 11:59:10.456388       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0210 12:23:06.738382    5644 command_runner.go:130] ! I0210 11:59:10.617918       1 controllermanager.go:765] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0210 12:23:06.738446    5644 command_runner.go:130] ! I0210 11:59:10.618004       1 publisher.go:107] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0210 12:23:06.738446    5644 command_runner.go:130] ! I0210 11:59:10.618017       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0210 12:23:06.738446    5644 command_runner.go:130] ! I0210 11:59:10.630001       1 controllermanager.go:765] "Started controller" controller="taint-eviction-controller"
	I0210 12:23:06.738512    5644 command_runner.go:130] ! I0210 11:59:10.630344       1 taint_eviction.go:281] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0210 12:23:06.738512    5644 command_runner.go:130] ! I0210 11:59:10.630739       1 taint_eviction.go:287] "Sending events to api server" logger="taint-eviction-controller"
	I0210 12:23:06.738512    5644 command_runner.go:130] ! I0210 11:59:10.630915       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0210 12:23:06.738512    5644 command_runner.go:130] ! I0210 11:59:10.683156       1 node_lifecycle_controller.go:432] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0210 12:23:06.738577    5644 command_runner.go:130] ! I0210 11:59:10.683264       1 controllermanager.go:765] "Started controller" controller="node-lifecycle-controller"
	I0210 12:23:06.738604    5644 command_runner.go:130] ! I0210 11:59:10.683357       1 node_lifecycle_controller.go:466] "Sending events to api server" logger="node-lifecycle-controller"
	I0210 12:23:06.738604    5644 command_runner.go:130] ! I0210 11:59:10.683709       1 node_lifecycle_controller.go:477] "Starting node controller" logger="node-lifecycle-controller"
	I0210 12:23:06.738604    5644 command_runner.go:130] ! I0210 11:59:10.683833       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0210 12:23:06.738604    5644 command_runner.go:130] ! I0210 11:59:10.764503       1 controllermanager.go:765] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0210 12:23:06.738673    5644 command_runner.go:130] ! I0210 11:59:10.764626       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0210 12:23:06.738673    5644 command_runner.go:130] ! I0210 11:59:10.893425       1 controllermanager.go:765] "Started controller" controller="bootstrap-signer-controller"
	I0210 12:23:06.738673    5644 command_runner.go:130] ! I0210 11:59:10.893535       1 controllermanager.go:723] "Skipping a cloud provider controller" controller="service-lb-controller"
	I0210 12:23:06.738673    5644 command_runner.go:130] ! I0210 11:59:10.893547       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="volumeattributesclass-protection-controller" requiredFeatureGates=["VolumeAttributesClass"]
	I0210 12:23:06.738742    5644 command_runner.go:130] ! I0210 11:59:10.893637       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0210 12:23:06.738742    5644 command_runner.go:130] ! I0210 11:59:11.207689       1 controllermanager.go:765] "Started controller" controller="ttl-after-finished-controller"
	I0210 12:23:06.738742    5644 command_runner.go:130] ! I0210 11:59:11.207720       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0210 12:23:06.738807    5644 command_runner.go:130] ! I0210 11:59:11.208285       1 ttlafterfinished_controller.go:112] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0210 12:23:06.738807    5644 command_runner.go:130] ! I0210 11:59:11.208325       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0210 12:23:06.738807    5644 command_runner.go:130] ! I0210 11:59:11.268236       1 controllermanager.go:765] "Started controller" controller="endpointslice-mirroring-controller"
	I0210 12:23:06.738874    5644 command_runner.go:130] ! I0210 11:59:11.268441       1 endpointslicemirroring_controller.go:227] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0210 12:23:06.738874    5644 command_runner.go:130] ! I0210 11:59:11.268458       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0210 12:23:06.738874    5644 command_runner.go:130] ! I0210 11:59:11.834451       1 controllermanager.go:765] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0210 12:23:06.738874    5644 command_runner.go:130] ! I0210 11:59:11.839072       1 horizontal.go:201] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0210 12:23:06.738874    5644 command_runner.go:130] ! I0210 11:59:11.839109       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0210 12:23:06.738941    5644 command_runner.go:130] ! I0210 11:59:11.954065       1 controllermanager.go:765] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0210 12:23:06.738941    5644 command_runner.go:130] ! I0210 11:59:11.954564       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="selinux-warning-controller" requiredFeatureGates=["SELinuxChangePolicy"]
	I0210 12:23:06.738941    5644 command_runner.go:130] ! I0210 11:59:11.954191       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0210 12:23:06.739009    5644 command_runner.go:130] ! I0210 11:59:11.971728       1 controllermanager.go:765] "Started controller" controller="endpointslice-controller"
	I0210 12:23:06.739009    5644 command_runner.go:130] ! I0210 11:59:11.972266       1 endpointslice_controller.go:281] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0210 12:23:06.739009    5644 command_runner.go:130] ! I0210 11:59:11.972442       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0210 12:23:06.739009    5644 command_runner.go:130] ! I0210 11:59:11.988553       1 controllermanager.go:765] "Started controller" controller="replicationcontroller-controller"
	I0210 12:23:06.739075    5644 command_runner.go:130] ! I0210 11:59:11.989935       1 replica_set.go:217] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0210 12:23:06.739075    5644 command_runner.go:130] ! I0210 11:59:11.990037       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0210 12:23:06.739075    5644 command_runner.go:130] ! I0210 11:59:12.002658       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0210 12:23:06.739141    5644 command_runner.go:130] ! I0210 11:59:12.026212       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0210 12:23:06.739141    5644 command_runner.go:130] ! I0210 11:59:12.053411       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-032400\" does not exist"
	I0210 12:23:06.739141    5644 command_runner.go:130] ! I0210 11:59:12.059575       1 shared_informer.go:320] Caches are synced for node
	I0210 12:23:06.739206    5644 command_runner.go:130] ! I0210 11:59:12.059677       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0210 12:23:06.739206    5644 command_runner.go:130] ! I0210 11:59:12.060669       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0210 12:23:06.739206    5644 command_runner.go:130] ! I0210 11:59:12.060694       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0210 12:23:06.739206    5644 command_runner.go:130] ! I0210 11:59:12.060736       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0210 12:23:06.739206    5644 command_runner.go:130] ! I0210 11:59:12.075788       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0210 12:23:06.739280    5644 command_runner.go:130] ! I0210 11:59:12.090277       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0210 12:23:06.739280    5644 command_runner.go:130] ! I0210 11:59:12.093866       1 shared_informer.go:320] Caches are synced for PV protection
	I0210 12:23:06.739280    5644 command_runner.go:130] ! I0210 11:59:12.094251       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-032400" podCIDRs=["10.244.0.0/24"]
	I0210 12:23:06.739280    5644 command_runner.go:130] ! I0210 11:59:12.094298       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400"
	I0210 12:23:06.739348    5644 command_runner.go:130] ! I0210 11:59:12.094445       1 shared_informer.go:320] Caches are synced for stateful set
	I0210 12:23:06.739348    5644 command_runner.go:130] ! I0210 11:59:12.094647       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400"
	I0210 12:23:06.739348    5644 command_runner.go:130] ! I0210 11:59:12.094787       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0210 12:23:06.739348    5644 command_runner.go:130] ! I0210 11:59:12.098777       1 shared_informer.go:320] Caches are synced for cronjob
	I0210 12:23:06.739423    5644 command_runner.go:130] ! I0210 11:59:12.099001       1 shared_informer.go:320] Caches are synced for namespace
	I0210 12:23:06.739423    5644 command_runner.go:130] ! I0210 11:59:12.099016       1 shared_informer.go:320] Caches are synced for daemon sets
	I0210 12:23:06.739423    5644 command_runner.go:130] ! I0210 11:59:12.103407       1 shared_informer.go:320] Caches are synced for resource quota
	I0210 12:23:06.739423    5644 command_runner.go:130] ! I0210 11:59:12.108852       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0210 12:23:06.739423    5644 command_runner.go:130] ! I0210 11:59:12.108917       1 shared_informer.go:320] Caches are synced for ephemeral
	I0210 12:23:06.739495    5644 command_runner.go:130] ! I0210 11:59:12.111199       1 shared_informer.go:320] Caches are synced for attach detach
	I0210 12:23:06.739495    5644 command_runner.go:130] ! I0210 11:59:12.115876       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0210 12:23:06.739495    5644 command_runner.go:130] ! I0210 11:59:12.117732       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0210 12:23:06.739495    5644 command_runner.go:130] ! I0210 11:59:12.117858       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0210 12:23:06.739569    5644 command_runner.go:130] ! I0210 11:59:12.117925       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0210 12:23:06.739602    5644 command_runner.go:130] ! I0210 11:59:12.118059       1 shared_informer.go:320] Caches are synced for crt configmap
	I0210 12:23:06.739602    5644 command_runner.go:130] ! I0210 11:59:12.127026       1 shared_informer.go:320] Caches are synced for garbage collector
	I0210 12:23:06.739639    5644 command_runner.go:130] ! I0210 11:59:12.132202       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0210 12:23:06.739639    5644 command_runner.go:130] ! I0210 11:59:12.132293       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0210 12:23:06.739670    5644 command_runner.go:130] ! I0210 11:59:12.132357       1 shared_informer.go:320] Caches are synced for job
	I0210 12:23:06.739670    5644 command_runner.go:130] ! I0210 11:59:12.136457       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0210 12:23:06.739698    5644 command_runner.go:130] ! I0210 11:59:12.136477       1 shared_informer.go:320] Caches are synced for PVC protection
	I0210 12:23:06.739733    5644 command_runner.go:130] ! I0210 11:59:12.136864       1 shared_informer.go:320] Caches are synced for garbage collector
	I0210 12:23:06.739786    5644 command_runner.go:130] ! I0210 11:59:12.137022       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0210 12:23:06.739786    5644 command_runner.go:130] ! I0210 11:59:12.137034       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0210 12:23:06.739822    5644 command_runner.go:130] ! I0210 11:59:12.140123       1 shared_informer.go:320] Caches are synced for HPA
	I0210 12:23:06.739861    5644 command_runner.go:130] ! I0210 11:59:12.143611       1 shared_informer.go:320] Caches are synced for resource quota
	I0210 12:23:06.739861    5644 command_runner.go:130] ! I0210 11:59:12.146959       1 shared_informer.go:320] Caches are synced for GC
	I0210 12:23:06.739896    5644 command_runner.go:130] ! I0210 11:59:12.149917       1 shared_informer.go:320] Caches are synced for endpoint
	I0210 12:23:06.739896    5644 command_runner.go:130] ! I0210 11:59:12.151583       1 shared_informer.go:320] Caches are synced for TTL
	I0210 12:23:06.739935    5644 command_runner.go:130] ! I0210 11:59:12.151756       1 shared_informer.go:320] Caches are synced for disruption
	I0210 12:23:06.739935    5644 command_runner.go:130] ! I0210 11:59:12.155408       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0210 12:23:06.739935    5644 command_runner.go:130] ! I0210 11:59:12.156838       1 shared_informer.go:320] Caches are synced for expand
	I0210 12:23:06.739969    5644 command_runner.go:130] ! I0210 11:59:12.166263       1 shared_informer.go:320] Caches are synced for deployment
	I0210 12:23:06.739969    5644 command_runner.go:130] ! I0210 11:59:12.169607       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0210 12:23:06.739969    5644 command_runner.go:130] ! I0210 11:59:12.173266       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0210 12:23:06.740009    5644 command_runner.go:130] ! I0210 11:59:12.183228       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0210 12:23:06.740044    5644 command_runner.go:130] ! I0210 11:59:12.183461       1 shared_informer.go:320] Caches are synced for persistent volume
	I0210 12:23:06.740083    5644 command_runner.go:130] ! I0210 11:59:12.184165       1 shared_informer.go:320] Caches are synced for taint
	I0210 12:23:06.740083    5644 command_runner.go:130] ! I0210 11:59:12.184514       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0210 12:23:06.740118    5644 command_runner.go:130] ! I0210 11:59:12.185265       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-032400"
	I0210 12:23:06.740118    5644 command_runner.go:130] ! I0210 11:59:12.186883       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I0210 12:23:06.740118    5644 command_runner.go:130] ! I0210 11:59:12.189882       1 shared_informer.go:320] Caches are synced for service account
	I0210 12:23:06.740118    5644 command_runner.go:130] ! I0210 11:59:12.964659       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400"
	I0210 12:23:06.740118    5644 command_runner.go:130] ! I0210 11:59:14.306836       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="1.342470129s"
	I0210 12:23:06.740118    5644 command_runner.go:130] ! I0210 11:59:14.421918       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="114.771421ms"
	I0210 12:23:06.740118    5644 command_runner.go:130] ! I0210 11:59:14.422243       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="109.5µs"
	I0210 12:23:06.740118    5644 command_runner.go:130] ! I0210 11:59:14.423300       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="45.7µs"
	I0210 12:23:06.740118    5644 command_runner.go:130] ! I0210 11:59:15.150166       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="328.244339ms"
	I0210 12:23:06.740118    5644 command_runner.go:130] ! I0210 11:59:15.175057       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="24.827249ms"
	I0210 12:23:06.740118    5644 command_runner.go:130] ! I0210 11:59:15.175285       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="52.7µs"
	I0210 12:23:06.740118    5644 command_runner.go:130] ! I0210 11:59:38.469109       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400"
	I0210 12:23:06.740118    5644 command_runner.go:130] ! I0210 11:59:41.029106       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400"
	I0210 12:23:06.740118    5644 command_runner.go:130] ! I0210 11:59:41.056002       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400"
	I0210 12:23:06.740118    5644 command_runner.go:130] ! I0210 11:59:41.223446       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="83.5µs"
	I0210 12:23:06.740118    5644 command_runner.go:130] ! I0210 11:59:42.192695       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0210 12:23:06.740118    5644 command_runner.go:130] ! I0210 11:59:43.176439       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="220.4µs"
	I0210 12:23:06.740118    5644 command_runner.go:130] ! I0210 11:59:45.142362       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="156.401µs"
	I0210 12:23:06.740118    5644 command_runner.go:130] ! I0210 11:59:46.978311       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="109.784549ms"
	I0210 12:23:06.740118    5644 command_runner.go:130] ! I0210 11:59:46.978923       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="160.001µs"
	I0210 12:23:06.740118    5644 command_runner.go:130] ! I0210 12:02:24.351602       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-032400-m02\" does not exist"
	I0210 12:23:06.740118    5644 command_runner.go:130] ! I0210 12:02:24.372872       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-032400-m02" podCIDRs=["10.244.1.0/24"]
	I0210 12:23:06.740118    5644 command_runner.go:130] ! I0210 12:02:24.372982       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:23:06.740118    5644 command_runner.go:130] ! I0210 12:02:24.373016       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:23:06.740118    5644 command_runner.go:130] ! I0210 12:02:24.386686       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:23:06.740118    5644 command_runner.go:130] ! I0210 12:02:24.500791       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:23:06.740118    5644 command_runner.go:130] ! I0210 12:02:25.042334       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:23:06.740118    5644 command_runner.go:130] ! I0210 12:02:27.223123       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-032400-m02"
	I0210 12:23:06.740118    5644 command_runner.go:130] ! I0210 12:02:27.269202       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:23:06.740118    5644 command_runner.go:130] ! I0210 12:02:34.686149       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:23:06.740648    5644 command_runner.go:130] ! I0210 12:02:55.584256       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:23:06.740648    5644 command_runner.go:130] ! I0210 12:02:58.900478       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-032400-m02"
	I0210 12:23:06.740648    5644 command_runner.go:130] ! I0210 12:02:58.901463       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:23:06.740648    5644 command_runner.go:130] ! I0210 12:02:58.923096       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:23:06.740648    5644 command_runner.go:130] ! I0210 12:03:02.254436       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:23:06.740742    5644 command_runner.go:130] ! I0210 12:03:23.965178       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="118.072754ms"
	I0210 12:23:06.740742    5644 command_runner.go:130] ! I0210 12:03:23.989777       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="24.546974ms"
	I0210 12:23:06.740742    5644 command_runner.go:130] ! I0210 12:03:23.990705       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="86.6µs"
	I0210 12:23:06.740742    5644 command_runner.go:130] ! I0210 12:03:23.998308       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="47.1µs"
	I0210 12:23:06.740830    5644 command_runner.go:130] ! I0210 12:03:26.707538       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="12.348343ms"
	I0210 12:23:06.740830    5644 command_runner.go:130] ! I0210 12:03:26.708146       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="79.501µs"
	I0210 12:23:06.740830    5644 command_runner.go:130] ! I0210 12:03:27.077137       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="9.878814ms"
	I0210 12:23:06.740830    5644 command_runner.go:130] ! I0210 12:03:27.077509       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="53.8µs"
	I0210 12:23:06.740830    5644 command_runner.go:130] ! I0210 12:03:43.387780       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400"
	I0210 12:23:06.740830    5644 command_runner.go:130] ! I0210 12:03:56.589176       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:23:06.740830    5644 command_runner.go:130] ! I0210 12:07:05.733007       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-032400-m03\" does not exist"
	I0210 12:23:06.740830    5644 command_runner.go:130] ! I0210 12:07:05.733621       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-032400-m02"
	I0210 12:23:06.740830    5644 command_runner.go:130] ! I0210 12:07:05.776872       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-032400-m03" podCIDRs=["10.244.2.0/24"]
	I0210 12:23:06.740830    5644 command_runner.go:130] ! I0210 12:07:05.777009       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:06.740830    5644 command_runner.go:130] ! E0210 12:07:05.833973       1 range_allocator.go:433] "Failed to update node PodCIDR after multiple attempts" err="failed to patch node CIDR: Node \"multinode-032400-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.3.0/24\", \"10.244.2.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="multinode-032400-m03" podCIDRs=["10.244.3.0/24"]
	I0210 12:23:06.740830    5644 command_runner.go:130] ! E0210 12:07:05.834115       1 range_allocator.go:439] "CIDR assignment for node failed. Releasing allocated CIDR" err="failed to patch node CIDR: Node \"multinode-032400-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.3.0/24\", \"10.244.2.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="multinode-032400-m03"
	I0210 12:23:06.740830    5644 command_runner.go:130] ! E0210 12:07:05.834184       1 range_allocator.go:252] "Unhandled Error" err="error syncing 'multinode-032400-m03': failed to patch node CIDR: Node \"multinode-032400-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.3.0/24\", \"10.244.2.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid], requeuing" logger="UnhandledError"
	I0210 12:23:06.740830    5644 command_runner.go:130] ! I0210 12:07:05.834211       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:06.740830    5644 command_runner.go:130] ! I0210 12:07:05.839673       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:06.740830    5644 command_runner.go:130] ! I0210 12:07:06.048438       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:06.740830    5644 command_runner.go:130] ! I0210 12:07:06.603626       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:06.740830    5644 command_runner.go:130] ! I0210 12:07:07.285160       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-032400-m03"
	I0210 12:23:06.740830    5644 command_runner.go:130] ! I0210 12:07:07.401415       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:06.740830    5644 command_runner.go:130] ! I0210 12:07:15.795765       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:06.740830    5644 command_runner.go:130] ! I0210 12:07:34.465645       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:06.740830    5644 command_runner.go:130] ! I0210 12:07:34.466343       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-032400-m02"
	I0210 12:23:06.740830    5644 command_runner.go:130] ! I0210 12:07:34.484609       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:06.740830    5644 command_runner.go:130] ! I0210 12:07:36.177851       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:06.740830    5644 command_runner.go:130] ! I0210 12:07:37.325936       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:06.740830    5644 command_runner.go:130] ! I0210 12:08:11.294432       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:23:06.740830    5644 command_runner.go:130] ! I0210 12:09:09.390735       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400"
	I0210 12:23:06.741364    5644 command_runner.go:130] ! I0210 12:10:40.526492       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:06.741364    5644 command_runner.go:130] ! I0210 12:13:17.755688       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:23:06.741406    5644 command_runner.go:130] ! I0210 12:14:15.383603       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400"
	I0210 12:23:06.741406    5644 command_runner.go:130] ! I0210 12:15:17.429501       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:06.741440    5644 command_runner.go:130] ! I0210 12:15:17.429973       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-032400-m02"
	I0210 12:23:06.741470    5644 command_runner.go:130] ! I0210 12:15:17.456736       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:06.741470    5644 command_runner.go:130] ! I0210 12:15:22.504479       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:06.741470    5644 command_runner.go:130] ! I0210 12:17:20.800246       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:06.741470    5644 command_runner.go:130] ! I0210 12:17:20.822743       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:06.741470    5644 command_runner.go:130] ! I0210 12:17:25.699359       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-032400-m02"
	I0210 12:23:06.741470    5644 command_runner.go:130] ! I0210 12:17:31.139874       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-032400-m03\" does not exist"
	I0210 12:23:06.741470    5644 command_runner.go:130] ! I0210 12:17:31.140321       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-032400-m02"
	I0210 12:23:06.741470    5644 command_runner.go:130] ! I0210 12:17:31.172716       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-032400-m03" podCIDRs=["10.244.4.0/24"]
	I0210 12:23:06.741470    5644 command_runner.go:130] ! I0210 12:17:31.172758       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:06.741470    5644 command_runner.go:130] ! I0210 12:17:31.172780       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:06.741470    5644 command_runner.go:130] ! I0210 12:17:31.210555       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:06.741470    5644 command_runner.go:130] ! I0210 12:17:31.721125       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:06.741470    5644 command_runner.go:130] ! I0210 12:17:32.574669       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:06.741470    5644 command_runner.go:130] ! I0210 12:17:41.246655       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:06.741470    5644 command_runner.go:130] ! I0210 12:17:46.096472       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-032400-m02"
	I0210 12:23:06.741470    5644 command_runner.go:130] ! I0210 12:17:46.097093       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:06.741470    5644 command_runner.go:130] ! I0210 12:17:46.112921       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:06.741470    5644 command_runner.go:130] ! I0210 12:17:47.511367       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:06.741470    5644 command_runner.go:130] ! I0210 12:18:23.730002       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:23:06.741470    5644 command_runner.go:130] ! I0210 12:19:22.675608       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400"
	I0210 12:23:06.741470    5644 command_runner.go:130] ! I0210 12:19:27.542457       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-032400-m02"
	I0210 12:23:06.741470    5644 command_runner.go:130] ! I0210 12:19:27.543090       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:06.741470    5644 command_runner.go:130] ! I0210 12:19:27.562043       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:06.741470    5644 command_runner.go:130] ! I0210 12:19:32.704197       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
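Note on the PodCIDR errors at 12:07:05 in the controller-manager log above: two allocations raced on multinode-032400-m03. The first patch set 10.244.2.0/24 successfully (12:07:05.776), and a second attempt to patch 10.244.3.0/24 onto the same node was rejected, since a node may carry only one CIDR per IP family and spec.podCIDR may only change from "" to a valid value. The controller released the duplicate CIDR and requeued, and after the node was re-registered (12:17:31) it was assigned 10.244.4.0/24 cleanly. A minimal way to inspect this from a workstation with kubectl access (node name taken from the log; the patch below is illustrative and is expected to fail with the same Forbidden error):

    kubectl get node multinode-032400-m03 -o jsonpath='{.spec.podCIDRs}'
    # Changing an already-set podCIDR reproduces the validation error seen in the log:
    kubectl patch node multinode-032400-m03 --type=merge -p '{"spec":{"podCIDRs":["10.244.9.0/24"]}}'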
	I0210 12:23:06.763529    5644 logs.go:123] Gathering logs for kube-scheduler [adf520f9b9d7] ...
	I0210 12:23:06.763529    5644 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adf520f9b9d7"
	I0210 12:23:06.796092    5644 command_runner.go:130] ! I0210 11:59:00.019140       1 serving.go:386] Generated self-signed cert in-memory
	I0210 12:23:06.797412    5644 command_runner.go:130] ! W0210 11:59:02.451878       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0210 12:23:06.797412    5644 command_runner.go:130] ! W0210 11:59:02.452178       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0210 12:23:06.797412    5644 command_runner.go:130] ! W0210 11:59:02.452350       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0210 12:23:06.797412    5644 command_runner.go:130] ! W0210 11:59:02.452478       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0210 12:23:06.797412    5644 command_runner.go:130] ! I0210 11:59:02.632458       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.1"
	I0210 12:23:06.797412    5644 command_runner.go:130] ! I0210 11:59:02.632517       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0210 12:23:06.797412    5644 command_runner.go:130] ! I0210 11:59:02.686485       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0210 12:23:06.797412    5644 command_runner.go:130] ! I0210 11:59:02.686744       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0210 12:23:06.797412    5644 command_runner.go:130] ! I0210 11:59:02.689142       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0210 12:23:06.797412    5644 command_runner.go:130] ! I0210 11:59:02.708240       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0210 12:23:06.797412    5644 command_runner.go:130] ! W0210 11:59:02.715958       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0210 12:23:06.797412    5644 command_runner.go:130] ! W0210 11:59:02.751571       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0210 12:23:06.797412    5644 command_runner.go:130] ! E0210 11:59:02.751658       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0210 12:23:06.797412    5644 command_runner.go:130] ! E0210 11:59:02.717894       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:06.797412    5644 command_runner.go:130] ! W0210 11:59:02.766153       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0210 12:23:06.798055    5644 command_runner.go:130] ! E0210 11:59:02.768039       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:06.798103    5644 command_runner.go:130] ! W0210 11:59:02.768257       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0210 12:23:06.798103    5644 command_runner.go:130] ! E0210 11:59:02.768346       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:06.798103    5644 command_runner.go:130] ! W0210 11:59:02.766789       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0210 12:23:06.798103    5644 command_runner.go:130] ! E0210 11:59:02.768584       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:06.798103    5644 command_runner.go:130] ! W0210 11:59:02.766885       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0210 12:23:06.798103    5644 command_runner.go:130] ! E0210 11:59:02.768838       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:06.798103    5644 command_runner.go:130] ! W0210 11:59:02.769507       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0210 12:23:06.798103    5644 command_runner.go:130] ! E0210 11:59:02.778960       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:06.798103    5644 command_runner.go:130] ! W0210 11:59:02.769773       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0210 12:23:06.798103    5644 command_runner.go:130] ! E0210 11:59:02.779013       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:06.798103    5644 command_runner.go:130] ! W0210 11:59:02.767082       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0210 12:23:06.798103    5644 command_runner.go:130] ! E0210 11:59:02.779037       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:06.798103    5644 command_runner.go:130] ! W0210 11:59:02.767143       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0210 12:23:06.798103    5644 command_runner.go:130] ! E0210 11:59:02.779057       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:06.798103    5644 command_runner.go:130] ! W0210 11:59:02.767174       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0210 12:23:06.798103    5644 command_runner.go:130] ! E0210 11:59:02.779079       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:06.798661    5644 command_runner.go:130] ! W0210 11:59:02.767205       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0210 12:23:06.798768    5644 command_runner.go:130] ! E0210 11:59:02.779095       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:06.798768    5644 command_runner.go:130] ! W0210 11:59:02.767318       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	I0210 12:23:06.798768    5644 command_runner.go:130] ! E0210 11:59:02.779525       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:06.798768    5644 command_runner.go:130] ! W0210 11:59:02.769947       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0210 12:23:06.798768    5644 command_runner.go:130] ! E0210 11:59:02.779843       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:06.798768    5644 command_runner.go:130] ! W0210 11:59:02.769992       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0210 12:23:06.798768    5644 command_runner.go:130] ! E0210 11:59:02.779885       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:06.798768    5644 command_runner.go:130] ! W0210 11:59:02.767047       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0210 12:23:06.798768    5644 command_runner.go:130] ! E0210 11:59:02.779962       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:06.798768    5644 command_runner.go:130] ! W0210 11:59:03.612263       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0210 12:23:06.798768    5644 command_runner.go:130] ! E0210 11:59:03.612405       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:06.798768    5644 command_runner.go:130] ! W0210 11:59:03.698062       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0210 12:23:06.798768    5644 command_runner.go:130] ! E0210 11:59:03.698491       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:06.798768    5644 command_runner.go:130] ! W0210 11:59:03.766764       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0210 12:23:06.798768    5644 command_runner.go:130] ! E0210 11:59:03.767296       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:06.798768    5644 command_runner.go:130] ! W0210 11:59:03.769299       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0210 12:23:06.798768    5644 command_runner.go:130] ! E0210 11:59:03.769340       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:06.798768    5644 command_runner.go:130] ! W0210 11:59:03.811212       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0210 12:23:06.798768    5644 command_runner.go:130] ! E0210 11:59:03.811686       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:06.799384    5644 command_runner.go:130] ! W0210 11:59:03.864096       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0210 12:23:06.799384    5644 command_runner.go:130] ! E0210 11:59:03.864216       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:06.799384    5644 command_runner.go:130] ! W0210 11:59:03.954246       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0210 12:23:06.799384    5644 command_runner.go:130] ! E0210 11:59:03.955266       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0210 12:23:06.799384    5644 command_runner.go:130] ! W0210 11:59:03.968978       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0210 12:23:06.799384    5644 command_runner.go:130] ! E0210 11:59:03.969083       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:06.799384    5644 command_runner.go:130] ! W0210 11:59:04.075142       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0210 12:23:06.799384    5644 command_runner.go:130] ! E0210 11:59:04.075319       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:06.799384    5644 command_runner.go:130] ! W0210 11:59:04.157608       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	I0210 12:23:06.799384    5644 command_runner.go:130] ! E0210 11:59:04.157748       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:06.799384    5644 command_runner.go:130] ! W0210 11:59:04.210028       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0210 12:23:06.799384    5644 command_runner.go:130] ! E0210 11:59:04.210075       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:06.799384    5644 command_runner.go:130] ! W0210 11:59:04.217612       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0210 12:23:06.799384    5644 command_runner.go:130] ! E0210 11:59:04.217750       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:06.799944    5644 command_runner.go:130] ! W0210 11:59:04.256700       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0210 12:23:06.800307    5644 command_runner.go:130] ! E0210 11:59:04.256904       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:06.800307    5644 command_runner.go:130] ! W0210 11:59:04.322264       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0210 12:23:06.800307    5644 command_runner.go:130] ! E0210 11:59:04.322526       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:06.800307    5644 command_runner.go:130] ! W0210 11:59:04.330202       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0210 12:23:06.800307    5644 command_runner.go:130] ! E0210 11:59:04.330584       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:06.800307    5644 command_runner.go:130] ! W0210 11:59:04.340076       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0210 12:23:06.800307    5644 command_runner.go:130] ! E0210 11:59:04.340159       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:06.800307    5644 command_runner.go:130] ! W0210 11:59:05.486363       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0210 12:23:06.800307    5644 command_runner.go:130] ! E0210 11:59:05.486509       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:06.800307    5644 command_runner.go:130] ! W0210 11:59:05.818248       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0210 12:23:06.800307    5644 command_runner.go:130] ! E0210 11:59:05.818617       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:06.800307    5644 command_runner.go:130] ! W0210 11:59:06.039223       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0210 12:23:06.800307    5644 command_runner.go:130] ! E0210 11:59:06.039458       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:06.800307    5644 command_runner.go:130] ! W0210 11:59:06.087664       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0210 12:23:06.800307    5644 command_runner.go:130] ! E0210 11:59:06.087837       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:06.800307    5644 command_runner.go:130] ! I0210 11:59:07.187201       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0210 12:23:06.800307    5644 command_runner.go:130] ! I0210 12:19:35.481531       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0210 12:23:06.800307    5644 command_runner.go:130] ! I0210 12:19:35.493085       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0210 12:23:06.800868    5644 command_runner.go:130] ! I0210 12:19:35.493424       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0210 12:23:06.800996    5644 command_runner.go:130] ! E0210 12:19:35.644426       1 run.go:72] "command failed" err="finished without leader elect"
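The block above covers the full life of the first scheduler container: the 11:59:02-11:59:06 list/watch denials are the usual startup race in which the scheduler's informers start before its RBAC bindings are readable, they stop once caches sync at 11:59:07, and the 12:19:35 lines record the shutdown during the node restart ("finished without leader elect" is the scheduler's generic exit message when its run loop stops without a completed leader election). If the denials persisted instead of clearing, one could probe the identity directly; the second command is the remedy the 11:59:02 warning itself prints for the extension-apiserver-authentication lookup (ROLEBINDING_NAME, YOUR_NS, and YOUR_SA are the log's own placeholders, not real names):

    kubectl auth can-i list pods --as=system:kube-scheduler
    kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA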
	I0210 12:23:06.814618    5644 logs.go:123] Gathering logs for container status ...
	I0210 12:23:06.814618    5644 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 12:23:06.875571    5644 command_runner.go:130] > CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	I0210 12:23:06.875571    5644 command_runner.go:130] > ab1277406daa9       8c811b4aec35f                                                                                         3 seconds ago        Running             busybox                   1                   0f8bbdd2fed29       busybox-58667487b6-8shfg
	I0210 12:23:06.875571    5644 command_runner.go:130] > 9240ce80f94ce       c69fa2e9cbf5f                                                                                         3 seconds ago        Running             coredns                   1                   e58006549b603       coredns-668d6bf9bc-w8rr9
	I0210 12:23:06.875571    5644 command_runner.go:130] > 59ace13383a7f       6e38f40d628db                                                                                         23 seconds ago       Running             storage-provisioner       2                   bd998e6ebeb39       storage-provisioner
	I0210 12:23:06.875571    5644 command_runner.go:130] > efc2d4164d811       d300845f67aeb                                                                                         About a minute ago   Running             kindnet-cni               1                   e5b54589cf0f1       kindnet-c2mb8
	I0210 12:23:06.875571    5644 command_runner.go:130] > e57ea4c7f300b       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   bd998e6ebeb39       storage-provisioner
	I0210 12:23:06.875571    5644 command_runner.go:130] > 6640b4e3d696c       e29f9c7391fd9                                                                                         About a minute ago   Running             kube-proxy                1                   b9afdceca416d       kube-proxy-rrh82
	I0210 12:23:06.875571    5644 command_runner.go:130] > bd1666238ae65       019ee182b58e2                                                                                         About a minute ago   Running             kube-controller-manager   1                   9c3e574a33498       kube-controller-manager-multinode-032400
	I0210 12:23:06.875571    5644 command_runner.go:130] > f368bd8767741       95c0bda56fc4d                                                                                         About a minute ago   Running             kube-apiserver            0                   ae5696c38864a       kube-apiserver-multinode-032400
	I0210 12:23:06.875571    5644 command_runner.go:130] > 2c0b973818252       a9e7e6b294baf                                                                                         About a minute ago   Running             etcd                      0                   016ad4d720680       etcd-multinode-032400
	I0210 12:23:06.875571    5644 command_runner.go:130] > 440b6adf4512a       2b0d6572d062c                                                                                         About a minute ago   Running             kube-scheduler            1                   8059b20f65945       kube-scheduler-multinode-032400
	I0210 12:23:06.875571    5644 command_runner.go:130] > 8d0c6584f2b12       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   19 minutes ago       Exited              busybox                   0                   ed034f1578c96       busybox-58667487b6-8shfg
	I0210 12:23:06.875571    5644 command_runner.go:130] > c5b854dbb9121       c69fa2e9cbf5f                                                                                         23 minutes ago       Exited              coredns                   0                   794995bca6b5b       coredns-668d6bf9bc-w8rr9
	I0210 12:23:06.875571    5644 command_runner.go:130] > 4439940fa5f42       kindest/kindnetd@sha256:56ea59f77258052c4506076525318ffa66817500f68e94a50fdf7d600a280d26              23 minutes ago       Exited              kindnet-cni               0                   26d9e119a02c5       kindnet-c2mb8
	I0210 12:23:06.876102    5644 command_runner.go:130] > 148309413de8d       e29f9c7391fd9                                                                                         23 minutes ago       Exited              kube-proxy                0                   a70f430921ec2       kube-proxy-rrh82
	I0210 12:23:06.876102    5644 command_runner.go:130] > adf520f9b9d78       2b0d6572d062c                                                                                         24 minutes ago       Exited              kube-scheduler            0                   d33433fbce480       kube-scheduler-multinode-032400
	I0210 12:23:06.876102    5644 command_runner.go:130] > 9408ce83d7d38       019ee182b58e2                                                                                         24 minutes ago       Exited              kube-controller-manager   0                   ee16b295f58db       kube-controller-manager-multinode-032400
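The listing above came from the fallback probe on the "container status" line: `which crictl || echo crictl` expands to the crictl path when the binary is present and to the bare name otherwise, so on a host without crictl the first sudo command fails harmlessly and the shell falls through to the Docker CLI. The same pattern from the log, restated with a comment:

    # Prefer the CRI-generic client; fall back to Docker if crictl is absent.
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a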
	I0210 12:23:06.881046    5644 logs.go:123] Gathering logs for kube-scheduler [440b6adf4512] ...
	I0210 12:23:06.881046    5644 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 440b6adf4512"
	I0210 12:23:06.909633    5644 command_runner.go:130] ! I0210 12:21:56.035084       1 serving.go:386] Generated self-signed cert in-memory
	I0210 12:23:06.910472    5644 command_runner.go:130] ! W0210 12:21:58.137751       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0210 12:23:06.910556    5644 command_runner.go:130] ! W0210 12:21:58.138031       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0210 12:23:06.910556    5644 command_runner.go:130] ! W0210 12:21:58.138229       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0210 12:23:06.910556    5644 command_runner.go:130] ! W0210 12:21:58.138350       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0210 12:23:06.910556    5644 command_runner.go:130] ! I0210 12:21:58.239766       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.1"
	I0210 12:23:06.910556    5644 command_runner.go:130] ! I0210 12:21:58.239885       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0210 12:23:06.910556    5644 command_runner.go:130] ! I0210 12:21:58.246917       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0210 12:23:06.910556    5644 command_runner.go:130] ! I0210 12:21:58.246962       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0210 12:23:06.910556    5644 command_runner.go:130] ! I0210 12:21:58.248143       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0210 12:23:06.910556    5644 command_runner.go:130] ! I0210 12:21:58.248624       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0210 12:23:06.910556    5644 command_runner.go:130] ! I0210 12:21:58.347443       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
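The scheduler's startup warnings are a known transient: until the apiserver is reachable under RBAC, it cannot read the extension-apiserver-authentication configmap, so it continues without authentication configuration; the final "Caches are synced" line shows it recovered. For the non-transient case, the log message itself spells out the fix. A sketch, with the placeholder names (ROLEBINDING_NAME, YOUR_NS:YOUR_SA) taken verbatim from that hint:

    # Placeholder names come straight from the log hint above:
    kubectl create rolebinding -n kube-system ROLEBINDING_NAME \
      --role=extension-apiserver-authentication-reader \
      --serviceaccount=YOUR_NS:YOUR_SA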
	I0210 12:23:06.913348    5644 logs.go:123] Gathering logs for kube-proxy [6640b4e3d696] ...
	I0210 12:23:06.913382    5644 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6640b4e3d696"
	I0210 12:23:06.946937    5644 command_runner.go:130] ! I0210 12:22:00.934266       1 server_linux.go:66] "Using iptables proxy"
	I0210 12:23:06.946937    5644 command_runner.go:130] ! E0210 12:22:01.015806       1 proxier.go:733] "Error cleaning up nftables rules" err=<
	I0210 12:23:06.946937    5644 command_runner.go:130] ! 	could not run nftables command: /dev/stdin:1:1-24: Error: Could not process rule: Operation not supported
	I0210 12:23:06.946937    5644 command_runner.go:130] ! 	add table ip kube-proxy
	I0210 12:23:06.946937    5644 command_runner.go:130] ! 	^^^^^^^^^^^^^^^^^^^^^^^^
	I0210 12:23:06.946937    5644 command_runner.go:130] !  >
	I0210 12:23:06.946937    5644 command_runner.go:130] ! E0210 12:22:01.068390       1 proxier.go:733] "Error cleaning up nftables rules" err=<
	I0210 12:23:06.947310    5644 command_runner.go:130] ! 	could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
	I0210 12:23:06.947310    5644 command_runner.go:130] ! 	add table ip6 kube-proxy
	I0210 12:23:06.947310    5644 command_runner.go:130] ! 	^^^^^^^^^^^^^^^^^^^^^^^^^
	I0210 12:23:06.947310    5644 command_runner.go:130] !  >
	I0210 12:23:06.947310    5644 command_runner.go:130] ! I0210 12:22:01.115566       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["172.29.129.181"]
	I0210 12:23:06.947359    5644 command_runner.go:130] ! E0210 12:22:01.115738       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0210 12:23:06.947359    5644 command_runner.go:130] ! I0210 12:22:01.217636       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0210 12:23:06.947403    5644 command_runner.go:130] ! I0210 12:22:01.217732       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0210 12:23:06.947403    5644 command_runner.go:130] ! I0210 12:22:01.217759       1 server_linux.go:170] "Using iptables Proxier"
	I0210 12:23:06.947527    5644 command_runner.go:130] ! I0210 12:22:01.222523       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0210 12:23:06.947603    5644 command_runner.go:130] ! I0210 12:22:01.224100       1 server.go:497] "Version info" version="v1.32.1"
	I0210 12:23:06.947626    5644 command_runner.go:130] ! I0210 12:22:01.224411       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0210 12:23:06.947626    5644 command_runner.go:130] ! I0210 12:22:01.230640       1 config.go:199] "Starting service config controller"
	I0210 12:23:06.947626    5644 command_runner.go:130] ! I0210 12:22:01.232948       1 config.go:105] "Starting endpoint slice config controller"
	I0210 12:23:06.947626    5644 command_runner.go:130] ! I0210 12:22:01.233483       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0210 12:23:06.947626    5644 command_runner.go:130] ! I0210 12:22:01.233538       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0210 12:23:06.947626    5644 command_runner.go:130] ! I0210 12:22:01.243389       1 config.go:329] "Starting node config controller"
	I0210 12:23:06.947626    5644 command_runner.go:130] ! I0210 12:22:01.243415       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0210 12:23:06.947626    5644 command_runner.go:130] ! I0210 12:22:01.335845       1 shared_informer.go:320] Caches are synced for service config
	I0210 12:23:06.947626    5644 command_runner.go:130] ! I0210 12:22:01.335895       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0210 12:23:06.947626    5644 command_runner.go:130] ! I0210 12:22:01.345506       1 shared_informer.go:320] Caches are synced for node config
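kube-proxy's two nftables-cleanup errors mean the guest kernel rejected `add table ip kube-proxy` and `add table ip6 kube-proxy` ("Operation not supported"); kube-proxy treats this as non-fatal, falls back to the iptables proxier in IPv4 single-stack mode (IPv6 iptables support is also absent), and the synced-caches lines confirm a normal start. The kernel limitation can be verified by rerunning the same input the proxier piped to nft; a diagnostic sketch, not part of the test run:

    # Expected to fail on this kernel, matching the error above:
    echo 'add table ip kube-proxy' | sudo nft -f /dev/stdin
    # Show whatever nftables state the kernel does accept:
    sudo nft list tables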
	I0210 12:23:06.950711    5644 logs.go:123] Gathering logs for kindnet [efc2d4164d81] ...
	I0210 12:23:06.950777    5644 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efc2d4164d81"
	I0210 12:23:06.981433    5644 command_runner.go:130] ! I0210 12:22:00.982083       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0210 12:23:06.982126    5644 command_runner.go:130] ! I0210 12:22:00.988632       1 main.go:139] hostIP = 172.29.129.181
	I0210 12:23:06.982126    5644 command_runner.go:130] ! podIP = 172.29.129.181
	I0210 12:23:06.982126    5644 command_runner.go:130] ! I0210 12:22:00.988765       1 main.go:148] setting mtu 1500 for CNI 
	I0210 12:23:06.982126    5644 command_runner.go:130] ! I0210 12:22:00.988782       1 main.go:178] kindnetd IP family: "ipv4"
	I0210 12:23:06.982126    5644 command_runner.go:130] ! I0210 12:22:00.988794       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I0210 12:23:06.982126    5644 command_runner.go:130] ! I0210 12:22:01.772362       1 main.go:239] Error creating network policy controller: could not run nftables command: /dev/stdin:1:1-40: Error: Could not process rule: Operation not supported
	I0210 12:23:06.982216    5644 command_runner.go:130] ! add table inet kindnet-network-policies
	I0210 12:23:06.982216    5644 command_runner.go:130] ! ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
	I0210 12:23:06.982216    5644 command_runner.go:130] ! , skipping network policies
	I0210 12:23:06.982216    5644 command_runner.go:130] ! W0210 12:22:31.784106       1 reflector.go:561] pkg/mod/k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0210 12:23:06.982216    5644 command_runner.go:130] ! E0210 12:22:31.784373       1 reflector.go:158] "Unhandled Error" err="pkg/mod/k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError"
	I0210 12:23:06.982302    5644 command_runner.go:130] ! I0210 12:22:41.780982       1 main.go:297] Handling node with IPs: map[172.29.129.181:{}]
	I0210 12:23:06.982302    5644 command_runner.go:130] ! I0210 12:22:41.781097       1 main.go:301] handling current node
	I0210 12:23:06.982302    5644 command_runner.go:130] ! I0210 12:22:41.782315       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.982302    5644 command_runner.go:130] ! I0210 12:22:41.782348       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.982302    5644 command_runner.go:130] ! I0210 12:22:41.782670       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 172.29.143.51 Flags: [] Table: 0 Realm: 0} 
	I0210 12:23:06.982389    5644 command_runner.go:130] ! I0210 12:22:41.783201       1 main.go:297] Handling node with IPs: map[172.29.129.10:{}]
	I0210 12:23:06.982389    5644 command_runner.go:130] ! I0210 12:22:41.783373       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.4.0/24] 
	I0210 12:23:06.982389    5644 command_runner.go:130] ! I0210 12:22:41.784331       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.4.0/24 Src: <nil> Gw: 172.29.129.10 Flags: [] Table: 0 Realm: 0} 
	I0210 12:23:06.982389    5644 command_runner.go:130] ! I0210 12:22:51.774234       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.982389    5644 command_runner.go:130] ! I0210 12:22:51.774354       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.982475    5644 command_runner.go:130] ! I0210 12:22:51.774813       1 main.go:297] Handling node with IPs: map[172.29.129.10:{}]
	I0210 12:23:06.982475    5644 command_runner.go:130] ! I0210 12:22:51.774839       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.4.0/24] 
	I0210 12:23:06.982475    5644 command_runner.go:130] ! I0210 12:22:51.775059       1 main.go:297] Handling node with IPs: map[172.29.129.181:{}]
	I0210 12:23:06.982475    5644 command_runner.go:130] ! I0210 12:22:51.775140       1 main.go:301] handling current node
	I0210 12:23:06.982475    5644 command_runner.go:130] ! I0210 12:23:01.774212       1 main.go:297] Handling node with IPs: map[172.29.129.181:{}]
	I0210 12:23:06.982475    5644 command_runner.go:130] ! I0210 12:23:01.774322       1 main.go:301] handling current node
	I0210 12:23:06.982555    5644 command_runner.go:130] ! I0210 12:23:01.774342       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:06.982555    5644 command_runner.go:130] ! I0210 12:23:01.774349       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:06.982555    5644 command_runner.go:130] ! I0210 12:23:01.774804       1 main.go:297] Handling node with IPs: map[172.29.129.10:{}]
	I0210 12:23:06.982555    5644 command_runner.go:130] ! I0210 12:23:01.774919       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.4.0/24] 
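kindnet hit the same nftables limitation (so network-policy enforcement is skipped) and one i/o timeout against the service VIP 10.96.0.1 before its node loop settled. In steady state it handles the local node and, for each remote node, installs a route sending that node's pod CIDR via the node's IP. The logged "Adding route" entries correspond to ordinary iproute2 routes; an illustrative equivalent using the addresses from the log:

    # What the logged route additions amount to (illustrative only):
    ip route replace 10.244.1.0/24 via 172.29.143.51   # multinode-032400-m02
    ip route replace 10.244.4.0/24 via 172.29.129.10   # multinode-032400-m03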
	I0210 12:23:06.986727    5644 logs.go:123] Gathering logs for kubelet ...
	I0210 12:23:06.986838    5644 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 12:23:07.018971    5644 command_runner.go:130] > Feb 10 12:21:49 multinode-032400 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0210 12:23:07.018971    5644 command_runner.go:130] > Feb 10 12:21:49 multinode-032400 kubelet[1505]: I0210 12:21:49.803865    1505 server.go:520] "Kubelet version" kubeletVersion="v1.32.1"
	I0210 12:23:07.019032    5644 command_runner.go:130] > Feb 10 12:21:49 multinode-032400 kubelet[1505]: I0210 12:21:49.804150    1505 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0210 12:23:07.019032    5644 command_runner.go:130] > Feb 10 12:21:49 multinode-032400 kubelet[1505]: I0210 12:21:49.806616    1505 server.go:954] "Client rotation is on, will bootstrap in background"
	I0210 12:23:07.019074    5644 command_runner.go:130] > Feb 10 12:21:49 multinode-032400 kubelet[1505]: E0210 12:21:49.806785    1505 run.go:72] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0210 12:23:07.019102    5644 command_runner.go:130] > Feb 10 12:21:49 multinode-032400 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0210 12:23:07.019102    5644 command_runner.go:130] > Feb 10 12:21:49 multinode-032400 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0210 12:23:07.019102    5644 command_runner.go:130] > Feb 10 12:21:50 multinode-032400 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	I0210 12:23:07.019102    5644 command_runner.go:130] > Feb 10 12:21:50 multinode-032400 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0210 12:23:07.019102    5644 command_runner.go:130] > Feb 10 12:21:50 multinode-032400 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0210 12:23:07.019102    5644 command_runner.go:130] > Feb 10 12:21:50 multinode-032400 kubelet[1561]: I0210 12:21:50.532407    1561 server.go:520] "Kubelet version" kubeletVersion="v1.32.1"
	I0210 12:23:07.019102    5644 command_runner.go:130] > Feb 10 12:21:50 multinode-032400 kubelet[1561]: I0210 12:21:50.532561    1561 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0210 12:23:07.019102    5644 command_runner.go:130] > Feb 10 12:21:50 multinode-032400 kubelet[1561]: I0210 12:21:50.532946    1561 server.go:954] "Client rotation is on, will bootstrap in background"
	I0210 12:23:07.019102    5644 command_runner.go:130] > Feb 10 12:21:50 multinode-032400 kubelet[1561]: E0210 12:21:50.533006    1561 run.go:72] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0210 12:23:07.019102    5644 command_runner.go:130] > Feb 10 12:21:50 multinode-032400 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0210 12:23:07.019102    5644 command_runner.go:130] > Feb 10 12:21:50 multinode-032400 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0210 12:23:07.019102    5644 command_runner.go:130] > Feb 10 12:21:50 multinode-032400 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0210 12:23:07.019102    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0210 12:23:07.019102    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.804000    1648 server.go:520] "Kubelet version" kubeletVersion="v1.32.1"
	I0210 12:23:07.019102    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.804091    1648 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0210 12:23:07.019102    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.807532    1648 server.go:954] "Client rotation is on, will bootstrap in background"
	I0210 12:23:07.019102    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.810518    1648 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	I0210 12:23:07.019102    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.831401    1648 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0210 12:23:07.019102    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: E0210 12:21:52.849603    1648 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
	I0210 12:23:07.019102    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.849766    1648 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
	I0210 12:23:07.019102    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.855712    1648 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
	I0210 12:23:07.019102    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.855847    1648 server.go:841] "NoSwap is set due to memorySwapBehavior not specified" memorySwapBehavior="" FailSwapOn=false
	I0210 12:23:07.019102    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.857145    1648 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	I0210 12:23:07.019630    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.857321    1648 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"multinode-032400","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
	I0210 12:23:07.019672    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.857850    1648 topology_manager.go:138] "Creating topology manager with none policy"
	I0210 12:23:07.019672    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.857944    1648 container_manager_linux.go:304] "Creating device plugin manager"
	I0210 12:23:07.019672    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.858196    1648 state_mem.go:36] "Initialized new in-memory state store"
	I0210 12:23:07.019709    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.860593    1648 kubelet.go:446] "Attempting to sync node with API server"
	I0210 12:23:07.019747    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.860751    1648 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
	I0210 12:23:07.019747    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.860860    1648 kubelet.go:352] "Adding apiserver pod source"
	I0210 12:23:07.019783    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.860954    1648 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	I0210 12:23:07.019783    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.866997    1648 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="docker" version="27.4.0" apiVersion="v1"
	I0210 12:23:07.019783    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: W0210 12:21:52.869638    1648 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-032400&limit=500&resourceVersion=0": dial tcp 172.29.129.181:8443: connect: connection refused
	I0210 12:23:07.019783    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: E0210 12:21:52.869825    1648 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-032400&limit=500&resourceVersion=0\": dial tcp 172.29.129.181:8443: connect: connection refused" logger="UnhandledError"
	I0210 12:23:07.019907    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.872904    1648 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
	I0210 12:23:07.019907    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: W0210 12:21:52.873510    1648 probe.go:272] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
	I0210 12:23:07.019971    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: W0210 12:21:52.885546    1648 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.29.129.181:8443: connect: connection refused
	I0210 12:23:07.019996    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: E0210 12:21:52.885641    1648 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.29.129.181:8443: connect: connection refused" logger="UnhandledError"
	I0210 12:23:07.020060    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.886839    1648 watchdog_linux.go:99] "Systemd watchdog is not enabled"
	I0210 12:23:07.020060    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.886957    1648 server.go:1287] "Started kubelet"
	I0210 12:23:07.020060    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.895251    1648 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
	I0210 12:23:07.020060    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.897245    1648 server.go:490] "Adding debug handlers to kubelet server"
	I0210 12:23:07.020136    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.899864    1648 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
	I0210 12:23:07.020136    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.900113    1648 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
	I0210 12:23:07.020136    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.900986    1648 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	I0210 12:23:07.020211    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.901519    1648 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
	I0210 12:23:07.020211    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: E0210 12:21:52.904529    1648 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 172.29.129.181:8443: connect: connection refused" event="&Event{ObjectMeta:{multinode-032400.1822d8316b7ef394  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:multinode-032400,UID:multinode-032400,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:multinode-032400,},FirstTimestamp:2025-02-10 12:21:52.886911892 +0000 UTC m=+0.168917533,LastTimestamp:2025-02-10 12:21:52.886911892 +0000 UTC m=+0.168917533,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:multinode-032400,}"
	I0210 12:23:07.020290    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.918528    1648 volume_manager.go:297] "Starting Kubelet Volume Manager"
	I0210 12:23:07.020290    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: E0210 12:21:52.918989    1648 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"multinode-032400\" not found"
	I0210 12:23:07.020290    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.920907    1648 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
	I0210 12:23:07.020290    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.932441    1648 reconciler.go:26] "Reconciler: start to sync state"
	I0210 12:23:07.020364    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: E0210 12:21:52.940004    1648 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-032400?timeout=10s\": dial tcp 172.29.129.181:8443: connect: connection refused" interval="200ms"
	I0210 12:23:07.020364    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.943065    1648 factory.go:221] Registration of the systemd container factory successfully
	I0210 12:23:07.020364    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.943251    1648 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
	I0210 12:23:07.020439    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.943289    1648 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
	I0210 12:23:07.020513    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: W0210 12:21:52.954939    1648 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.29.129.181:8443: connect: connection refused
	I0210 12:23:07.020513    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: E0210 12:21:52.956281    1648 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.29.129.181:8443: connect: connection refused" logger="UnhandledError"
	I0210 12:23:07.020513    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.962018    1648 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
	I0210 12:23:07.020587    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.981120    1648 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
	I0210 12:23:07.020587    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.981191    1648 status_manager.go:227] "Starting to sync pod status with apiserver"
	I0210 12:23:07.020587    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.981212    1648 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
	I0210 12:23:07.020662    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.981234    1648 kubelet.go:2388] "Starting kubelet main sync loop"
	I0210 12:23:07.020662    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: E0210 12:21:52.981274    1648 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
	I0210 12:23:07.020662    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: W0210 12:21:52.985240    1648 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.29.129.181:8443: connect: connection refused
	I0210 12:23:07.020738    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: E0210 12:21:52.985423    1648 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.29.129.181:8443: connect: connection refused" logger="UnhandledError"
	I0210 12:23:07.020738    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.986221    1648 cpu_manager.go:221] "Starting CPU manager" policy="none"
	I0210 12:23:07.020738    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.986328    1648 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
	I0210 12:23:07.020738    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.986418    1648 state_mem.go:36] "Initialized new in-memory state store"
	I0210 12:23:07.020812    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.988035    1648 state_mem.go:88] "Updated default CPUSet" cpuSet=""
	I0210 12:23:07.020812    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.988140    1648 state_mem.go:96] "Updated CPUSet assignments" assignments={}
	I0210 12:23:07.020812    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.988290    1648 policy_none.go:49] "None policy: Start"
	I0210 12:23:07.020812    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.988339    1648 memory_manager.go:186] "Starting memorymanager" policy="None"
	I0210 12:23:07.020887    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.988429    1648 state_mem.go:35] "Initializing new in-memory state store"
	I0210 12:23:07.020887    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.989333    1648 state_mem.go:75] "Updated machine memory state"
	I0210 12:23:07.020887    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.996399    1648 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
	I0210 12:23:07.020962    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.996729    1648 eviction_manager.go:189] "Eviction manager: starting control loop"
	I0210 12:23:07.020962    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.996761    1648 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
	I0210 12:23:07.021036    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.999441    1648 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
	I0210 12:23:07.021036    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: E0210 12:21:53.001480    1648 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
	I0210 12:23:07.021036    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: E0210 12:21:53.001594    1648 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"multinode-032400\" not found"
	I0210 12:23:07.021036    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: E0210 12:21:53.010100    1648 iptables.go:577] "Could not set up iptables canary" err=<
	I0210 12:23:07.021111    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0210 12:23:07.021111    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0210 12:23:07.021111    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0210 12:23:07.021111    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0210 12:23:07.021185    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.082130    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b2de8e426f22f9496390d2d8a09910a842da6580933349d6688cd4b1320ea550"
	I0210 12:23:07.021185    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.082209    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed034f1578c961d966543fd6901ac487893b2d9c55235293f852b6eba2ffef59"
	I0210 12:23:07.021185    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.082229    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26d9e119a02c5d37077ce2b8aaf0eaf39a16e310dfa75b55d4072355af0799f3"
	I0210 12:23:07.021260    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.085961    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a70f430921ec259ed18ded033aa4e0f2018d948e5ebeaaecbd04d96a1cf7a198"
	I0210 12:23:07.021260    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.092339    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d33433fbce4800c4588851f91b9c8bbf2f6cb1549a9a6e7003bd3ad9ab95e6c9"
	I0210 12:23:07.021260    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: E0210 12:21:53.095136    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-032400\" not found" node="multinode-032400"
	I0210 12:23:07.021333    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.097863    1648 kubelet_node_status.go:76] "Attempting to register node" node="multinode-032400"
	I0210 12:23:07.021333    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: E0210 12:21:53.099090    1648 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.29.129.181:8443: connect: connection refused" node="multinode-032400"
	I0210 12:23:07.021333    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.108335    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ccc0a4e7b5c734e34ed5ec3983417c1afdf297c58cd82400d7f9d24b8f82cac"
	I0210 12:23:07.021410    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.127358    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="794995bca6b5b46f3dd1b76ae2e6fa45046bd172772508f61b870824d72e297b"
	I0210 12:23:07.021410    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: E0210 12:21:53.141735    1648 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-032400?timeout=10s\": dial tcp 172.29.129.181:8443: connect: connection refused" interval="400ms"
	I0210 12:23:07.021410    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.142956    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c55184f16ccb79ec11ca696b1c88e9db9a9568bbeeccb401543d2aabab9daa4"
	I0210 12:23:07.021486    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.145714    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a8fa4178fdde9f0146fd2a294125bbe5-usr-share-ca-certificates\") pod \"kube-apiserver-multinode-032400\" (UID: \"a8fa4178fdde9f0146fd2a294125bbe5\") " pod="kube-system/kube-apiserver-multinode-032400"
	I0210 12:23:07.021486    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.145888    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0b073beca2a0e25ad8459b3107e863e7-flexvolume-dir\") pod \"kube-controller-manager-multinode-032400\" (UID: \"0b073beca2a0e25ad8459b3107e863e7\") " pod="kube-system/kube-controller-manager-multinode-032400"
	I0210 12:23:07.021561    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.145935    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b073beca2a0e25ad8459b3107e863e7-kubeconfig\") pod \"kube-controller-manager-multinode-032400\" (UID: \"0b073beca2a0e25ad8459b3107e863e7\") " pod="kube-system/kube-controller-manager-multinode-032400"
	I0210 12:23:07.021635    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.146017    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0b073beca2a0e25ad8459b3107e863e7-usr-share-ca-certificates\") pod \"kube-controller-manager-multinode-032400\" (UID: \"0b073beca2a0e25ad8459b3107e863e7\") " pod="kube-system/kube-controller-manager-multinode-032400"
	I0210 12:23:07.021635    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.146081    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/a56aca3861e4dd5038c600d32d99becd-etcd-certs\") pod \"etcd-multinode-032400\" (UID: \"a56aca3861e4dd5038c600d32d99becd\") " pod="kube-system/etcd-multinode-032400"
	I0210 12:23:07.021635    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.146213    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/a56aca3861e4dd5038c600d32d99becd-etcd-data\") pod \"etcd-multinode-032400\" (UID: \"a56aca3861e4dd5038c600d32d99becd\") " pod="kube-system/etcd-multinode-032400"
	I0210 12:23:07.021708    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.146299    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a8fa4178fdde9f0146fd2a294125bbe5-ca-certs\") pod \"kube-apiserver-multinode-032400\" (UID: \"a8fa4178fdde9f0146fd2a294125bbe5\") " pod="kube-system/kube-apiserver-multinode-032400"
	I0210 12:23:07.021708    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.146332    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a8fa4178fdde9f0146fd2a294125bbe5-k8s-certs\") pod \"kube-apiserver-multinode-032400\" (UID: \"a8fa4178fdde9f0146fd2a294125bbe5\") " pod="kube-system/kube-apiserver-multinode-032400"
	I0210 12:23:07.021782    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.146395    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e23fa9a4a53da4e595583d7b35b39311-kubeconfig\") pod \"kube-scheduler-multinode-032400\" (UID: \"e23fa9a4a53da4e595583d7b35b39311\") " pod="kube-system/kube-scheduler-multinode-032400"
	I0210 12:23:07.021856    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.146480    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0b073beca2a0e25ad8459b3107e863e7-ca-certs\") pod \"kube-controller-manager-multinode-032400\" (UID: \"0b073beca2a0e25ad8459b3107e863e7\") " pod="kube-system/kube-controller-manager-multinode-032400"
	I0210 12:23:07.021930    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.146687    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0b073beca2a0e25ad8459b3107e863e7-k8s-certs\") pod \"kube-controller-manager-multinode-032400\" (UID: \"0b073beca2a0e25ad8459b3107e863e7\") " pod="kube-system/kube-controller-manager-multinode-032400"
	I0210 12:23:07.021930    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.162937    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee16b295f58db486a506e81b42b011f8d6d50d2a52f1bea55481552cfb51c94e"
	I0210 12:23:07.022004    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: E0210 12:21:53.165529    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-032400\" not found" node="multinode-032400"
	I0210 12:23:07.022004    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: E0210 12:21:53.167432    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-032400\" not found" node="multinode-032400"
	I0210 12:23:07.022004    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: E0210 12:21:53.168502    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-032400\" not found" node="multinode-032400"
	I0210 12:23:07.022079    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.301329    1648 kubelet_node_status.go:76] "Attempting to register node" node="multinode-032400"
	I0210 12:23:07.022079    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: E0210 12:21:53.303037    1648 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.29.129.181:8443: connect: connection refused" node="multinode-032400"
	I0210 12:23:07.022079    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: E0210 12:21:53.544572    1648 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-032400?timeout=10s\": dial tcp 172.29.129.181:8443: connect: connection refused" interval="800ms"
	I0210 12:23:07.022079    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.704678    1648 kubelet_node_status.go:76] "Attempting to register node" node="multinode-032400"
	I0210 12:23:07.022079    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: E0210 12:21:53.705877    1648 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.29.129.181:8443: connect: connection refused" node="multinode-032400"
	I0210 12:23:07.022079    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: W0210 12:21:53.746812    1648 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.29.129.181:8443: connect: connection refused
	I0210 12:23:07.022241    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: E0210 12:21:53.747029    1648 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.29.129.181:8443: connect: connection refused" logger="UnhandledError"
	I0210 12:23:07.022241    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: W0210 12:21:53.867058    1648 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-032400&limit=500&resourceVersion=0": dial tcp 172.29.129.181:8443: connect: connection refused
	I0210 12:23:07.022320    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: E0210 12:21:53.867234    1648 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-032400&limit=500&resourceVersion=0\": dial tcp 172.29.129.181:8443: connect: connection refused" logger="UnhandledError"
	I0210 12:23:07.022320    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 kubelet[1648]: W0210 12:21:54.165583    1648 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.29.129.181:8443: connect: connection refused
	I0210 12:23:07.022396    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 kubelet[1648]: E0210 12:21:54.165709    1648 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.29.129.181:8443: connect: connection refused" logger="UnhandledError"
	I0210 12:23:07.022396    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 kubelet[1648]: E0210 12:21:54.346089    1648 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-032400?timeout=10s\": dial tcp 172.29.129.181:8443: connect: connection refused" interval="1.6s"
	I0210 12:23:07.022469    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 kubelet[1648]: I0210 12:21:54.507569    1648 kubelet_node_status.go:76] "Attempting to register node" node="multinode-032400"
	I0210 12:23:07.022469    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 kubelet[1648]: E0210 12:21:54.509216    1648 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.29.129.181:8443: connect: connection refused" node="multinode-032400"
	I0210 12:23:07.022469    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 kubelet[1648]: W0210 12:21:54.509373    1648 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.29.129.181:8443: connect: connection refused
	I0210 12:23:07.022542    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 kubelet[1648]: E0210 12:21:54.509471    1648 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.29.129.181:8443: connect: connection refused" logger="UnhandledError"
	I0210 12:23:07.022542    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 kubelet[1648]: E0210 12:21:54.618443    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-032400\" not found" node="multinode-032400"
	I0210 12:23:07.022616    5644 command_runner.go:130] > Feb 10 12:21:55 multinode-032400 kubelet[1648]: E0210 12:21:55.643834    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-032400\" not found" node="multinode-032400"
	I0210 12:23:07.022616    5644 command_runner.go:130] > Feb 10 12:21:55 multinode-032400 kubelet[1648]: E0210 12:21:55.653673    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-032400\" not found" node="multinode-032400"
	I0210 12:23:07.022690    5644 command_runner.go:130] > Feb 10 12:21:55 multinode-032400 kubelet[1648]: E0210 12:21:55.663228    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-032400\" not found" node="multinode-032400"
	I0210 12:23:07.022690    5644 command_runner.go:130] > Feb 10 12:21:55 multinode-032400 kubelet[1648]: E0210 12:21:55.676257    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-032400\" not found" node="multinode-032400"
	I0210 12:23:07.022764    5644 command_runner.go:130] > Feb 10 12:21:56 multinode-032400 kubelet[1648]: I0210 12:21:56.111234    1648 kubelet_node_status.go:76] "Attempting to register node" node="multinode-032400"
	I0210 12:23:07.022764    5644 command_runner.go:130] > Feb 10 12:21:56 multinode-032400 kubelet[1648]: E0210 12:21:56.686207    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-032400\" not found" node="multinode-032400"
	I0210 12:23:07.022838    5644 command_runner.go:130] > Feb 10 12:21:56 multinode-032400 kubelet[1648]: E0210 12:21:56.686620    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-032400\" not found" node="multinode-032400"
	I0210 12:23:07.022838    5644 command_runner.go:130] > Feb 10 12:21:56 multinode-032400 kubelet[1648]: E0210 12:21:56.689831    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-032400\" not found" node="multinode-032400"
	I0210 12:23:07.022838    5644 command_runner.go:130] > Feb 10 12:21:56 multinode-032400 kubelet[1648]: E0210 12:21:56.690227    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-032400\" not found" node="multinode-032400"
	I0210 12:23:07.022912    5644 command_runner.go:130] > Feb 10 12:21:57 multinode-032400 kubelet[1648]: E0210 12:21:57.703954    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-032400\" not found" node="multinode-032400"
	I0210 12:23:07.022912    5644 command_runner.go:130] > Feb 10 12:21:57 multinode-032400 kubelet[1648]: E0210 12:21:57.704934    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-032400\" not found" node="multinode-032400"
	I0210 12:23:07.022912    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: I0210 12:21:58.221288    1648 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-multinode-032400"
	I0210 12:23:07.022985    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: E0210 12:21:58.248691    1648 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-multinode-032400\" already exists" pod="kube-system/kube-scheduler-multinode-032400"
	I0210 12:23:07.022985    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: I0210 12:21:58.248734    1648 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-multinode-032400"
	I0210 12:23:07.022985    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: E0210 12:21:58.268853    1648 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"etcd-multinode-032400\" already exists" pod="kube-system/etcd-multinode-032400"
	I0210 12:23:07.023058    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: I0210 12:21:58.268905    1648 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-multinode-032400"
	I0210 12:23:07.023058    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: E0210 12:21:58.294680    1648 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-multinode-032400\" already exists" pod="kube-system/kube-apiserver-multinode-032400"
	I0210 12:23:07.023058    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: I0210 12:21:58.294713    1648 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-multinode-032400"
	I0210 12:23:07.023131    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: E0210 12:21:58.310526    1648 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-multinode-032400\" already exists" pod="kube-system/kube-controller-manager-multinode-032400"
	I0210 12:23:07.023131    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: I0210 12:21:58.310792    1648 kubelet_node_status.go:125] "Node was previously registered" node="multinode-032400"
	I0210 12:23:07.023131    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: I0210 12:21:58.310970    1648 kubelet_node_status.go:79] "Successfully registered node" node="multinode-032400"
	I0210 12:23:07.023131    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: I0210 12:21:58.311192    1648 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	I0210 12:23:07.023205    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: I0210 12:21:58.312560    1648 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	I0210 12:23:07.023205    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: I0210 12:21:58.314869    1648 setters.go:602] "Node became not ready" node="multinode-032400" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-02-10T12:21:58Z","lastTransitionTime":"2025-02-10T12:21:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"}
	I0210 12:23:07.023205    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: I0210 12:21:58.886082    1648 apiserver.go:52] "Watching apiserver"
	I0210 12:23:07.023282    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: I0210 12:21:58.891928    1648 kubelet.go:3183] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-032400" podUID="7a35472d-d7c0-4c7d-a5b1-e094370af1c2"
	I0210 12:23:07.023282    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: I0210 12:21:58.892432    1648 kubelet.go:3183] "Trying to delete pod" pod="kube-system/etcd-multinode-032400" podUID="34db146c-e09d-4959-8325-d4453dfcfd62"
	I0210 12:23:07.023355    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: E0210 12:21:58.894995    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:07.023387    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: E0210 12:21:58.896093    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:07.023409    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: I0210 12:21:58.922102    1648 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	I0210 12:23:07.023409    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: I0210 12:21:58.923504    1648 kubelet.go:3189] "Deleted mirror pod as it didn't match the static Pod" pod="kube-system/kube-apiserver-multinode-032400"
	I0210 12:23:07.023409    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: I0210 12:21:58.923547    1648 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-multinode-032400"
	I0210 12:23:07.023409    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: I0210 12:21:58.964092    1648 kubelet.go:3189] "Deleted mirror pod as it didn't match the static Pod" pod="kube-system/etcd-multinode-032400"
	I0210 12:23:07.023409    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: I0210 12:21:58.964319    1648 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-multinode-032400"
	I0210 12:23:07.023409    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: I0210 12:21:58.992108    1648 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d9460e1ac793566f90a359ec3476894" path="/var/lib/kubelet/pods/3d9460e1ac793566f90a359ec3476894/volumes"
	I0210 12:23:07.023409    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: I0210 12:21:58.994546    1648 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77dd7f51968a92a0d804d49c0a3127ad" path="/var/lib/kubelet/pods/77dd7f51968a92a0d804d49c0a3127ad/volumes"
	I0210 12:23:07.023409    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: I0210 12:21:59.015977    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/c5a7f602-d41a-4d9c-8fb3-5cf1bb41aca0-tmp\") pod \"storage-provisioner\" (UID: \"c5a7f602-d41a-4d9c-8fb3-5cf1bb41aca0\") " pod="kube-system/storage-provisioner"
	I0210 12:23:07.023409    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: I0210 12:21:59.016010    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9ad7f281-f022-4f3b-b206-39ce42713cf9-lib-modules\") pod \"kube-proxy-rrh82\" (UID: \"9ad7f281-f022-4f3b-b206-39ce42713cf9\") " pod="kube-system/kube-proxy-rrh82"
	I0210 12:23:07.023409    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: I0210 12:21:59.016032    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/09de881b-fbc4-4a8f-b8d7-c46dd3f010ad-cni-cfg\") pod \"kindnet-c2mb8\" (UID: \"09de881b-fbc4-4a8f-b8d7-c46dd3f010ad\") " pod="kube-system/kindnet-c2mb8"
	I0210 12:23:07.023409    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: I0210 12:21:59.016093    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9ad7f281-f022-4f3b-b206-39ce42713cf9-xtables-lock\") pod \"kube-proxy-rrh82\" (UID: \"9ad7f281-f022-4f3b-b206-39ce42713cf9\") " pod="kube-system/kube-proxy-rrh82"
	I0210 12:23:07.023409    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: I0210 12:21:59.016112    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/09de881b-fbc4-4a8f-b8d7-c46dd3f010ad-xtables-lock\") pod \"kindnet-c2mb8\" (UID: \"09de881b-fbc4-4a8f-b8d7-c46dd3f010ad\") " pod="kube-system/kindnet-c2mb8"
	I0210 12:23:07.023409    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: I0210 12:21:59.016275    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/09de881b-fbc4-4a8f-b8d7-c46dd3f010ad-lib-modules\") pod \"kindnet-c2mb8\" (UID: \"09de881b-fbc4-4a8f-b8d7-c46dd3f010ad\") " pod="kube-system/kindnet-c2mb8"
	I0210 12:23:07.023409    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: E0210 12:21:59.016537    1648 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0210 12:23:07.023409    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: E0210 12:21:59.016667    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e45a37bf-e7da-4129-bb7e-8be7dbe93e09-config-volume podName:e45a37bf-e7da-4129-bb7e-8be7dbe93e09 nodeName:}" failed. No retries permitted until 2025-02-10 12:21:59.516646386 +0000 UTC m=+6.798651927 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e45a37bf-e7da-4129-bb7e-8be7dbe93e09-config-volume") pod "coredns-668d6bf9bc-w8rr9" (UID: "e45a37bf-e7da-4129-bb7e-8be7dbe93e09") : object "kube-system"/"coredns" not registered
	I0210 12:23:07.023409    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: I0210 12:21:59.031609    1648 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-multinode-032400" podStartSLOduration=1.031591606 podStartE2EDuration="1.031591606s" podCreationTimestamp="2025-02-10 12:21:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-10 12:21:59.030067233 +0000 UTC m=+6.312072774" watchObservedRunningTime="2025-02-10 12:21:59.031591606 +0000 UTC m=+6.313597247"
	I0210 12:23:07.023409    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: I0210 12:21:59.032295    1648 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-multinode-032400" podStartSLOduration=1.032275839 podStartE2EDuration="1.032275839s" podCreationTimestamp="2025-02-10 12:21:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-10 12:21:59.012105568 +0000 UTC m=+6.294111109" watchObservedRunningTime="2025-02-10 12:21:59.032275839 +0000 UTC m=+6.314281380"
	I0210 12:23:07.023409    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: I0210 12:21:59.063318    1648 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	I0210 12:23:07.023936    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: E0210 12:21:59.095091    1648 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:07.023936    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: E0210 12:21:59.095402    1648 projected.go:194] Error preparing data for projected volume kube-api-access-76fn6 for pod default/busybox-58667487b6-8shfg: object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:07.024010    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: E0210 12:21:59.095525    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a3e86dc5-0523-4852-af77-3145d44eaa15-kube-api-access-76fn6 podName:a3e86dc5-0523-4852-af77-3145d44eaa15 nodeName:}" failed. No retries permitted until 2025-02-10 12:21:59.595504083 +0000 UTC m=+6.877509724 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-76fn6" (UniqueName: "kubernetes.io/projected/a3e86dc5-0523-4852-af77-3145d44eaa15-kube-api-access-76fn6") pod "busybox-58667487b6-8shfg" (UID: "a3e86dc5-0523-4852-af77-3145d44eaa15") : object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:07.024010    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: E0210 12:21:59.520926    1648 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0210 12:23:07.024043    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: E0210 12:21:59.521021    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e45a37bf-e7da-4129-bb7e-8be7dbe93e09-config-volume podName:e45a37bf-e7da-4129-bb7e-8be7dbe93e09 nodeName:}" failed. No retries permitted until 2025-02-10 12:22:00.521001667 +0000 UTC m=+7.803007208 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e45a37bf-e7da-4129-bb7e-8be7dbe93e09-config-volume") pod "coredns-668d6bf9bc-w8rr9" (UID: "e45a37bf-e7da-4129-bb7e-8be7dbe93e09") : object "kube-system"/"coredns" not registered
	I0210 12:23:07.024043    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: E0210 12:21:59.622412    1648 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:07.024043    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: E0210 12:21:59.622461    1648 projected.go:194] Error preparing data for projected volume kube-api-access-76fn6 for pod default/busybox-58667487b6-8shfg: object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:07.024043    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: E0210 12:21:59.622532    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a3e86dc5-0523-4852-af77-3145d44eaa15-kube-api-access-76fn6 podName:a3e86dc5-0523-4852-af77-3145d44eaa15 nodeName:}" failed. No retries permitted until 2025-02-10 12:22:00.622511154 +0000 UTC m=+7.904516695 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-76fn6" (UniqueName: "kubernetes.io/projected/a3e86dc5-0523-4852-af77-3145d44eaa15-kube-api-access-76fn6") pod "busybox-58667487b6-8shfg" (UID: "a3e86dc5-0523-4852-af77-3145d44eaa15") : object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:07.024043    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: I0210 12:21:59.790385    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd998e6ebeb39ea743489a0e4d48c282d6e8da289cd8341c4c5099d9836e6f73"
	I0210 12:23:07.025379    5644 command_runner.go:130] > Feb 10 12:22:00 multinode-032400 kubelet[1648]: I0210 12:22:00.168710    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e5b54589cf0f1effd7254987c6ce12359f402596b5494fa5c8f0bf296c219b89"
	I0210 12:23:07.025379    5644 command_runner.go:130] > Feb 10 12:22:00 multinode-032400 kubelet[1648]: I0210 12:22:00.246436    1648 kubelet.go:3183] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-032400" podUID="7a35472d-d7c0-4c7d-a5b1-e094370af1c2"
	I0210 12:23:07.025379    5644 command_runner.go:130] > Feb 10 12:22:00 multinode-032400 kubelet[1648]: I0210 12:22:00.246743    1648 kubelet.go:3183] "Trying to delete pod" pod="kube-system/etcd-multinode-032400" podUID="34db146c-e09d-4959-8325-d4453dfcfd62"
	I0210 12:23:07.025379    5644 command_runner.go:130] > Feb 10 12:22:00 multinode-032400 kubelet[1648]: E0210 12:22:00.528505    1648 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0210 12:23:07.025379    5644 command_runner.go:130] > Feb 10 12:22:00 multinode-032400 kubelet[1648]: E0210 12:22:00.528588    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e45a37bf-e7da-4129-bb7e-8be7dbe93e09-config-volume podName:e45a37bf-e7da-4129-bb7e-8be7dbe93e09 nodeName:}" failed. No retries permitted until 2025-02-10 12:22:02.528571773 +0000 UTC m=+9.810577314 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e45a37bf-e7da-4129-bb7e-8be7dbe93e09-config-volume") pod "coredns-668d6bf9bc-w8rr9" (UID: "e45a37bf-e7da-4129-bb7e-8be7dbe93e09") : object "kube-system"/"coredns" not registered
	I0210 12:23:07.025379    5644 command_runner.go:130] > Feb 10 12:22:00 multinode-032400 kubelet[1648]: E0210 12:22:00.629777    1648 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:07.025379    5644 command_runner.go:130] > Feb 10 12:22:00 multinode-032400 kubelet[1648]: E0210 12:22:00.629830    1648 projected.go:194] Error preparing data for projected volume kube-api-access-76fn6 for pod default/busybox-58667487b6-8shfg: object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:07.025903    5644 command_runner.go:130] > Feb 10 12:22:00 multinode-032400 kubelet[1648]: E0210 12:22:00.629883    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a3e86dc5-0523-4852-af77-3145d44eaa15-kube-api-access-76fn6 podName:a3e86dc5-0523-4852-af77-3145d44eaa15 nodeName:}" failed. No retries permitted until 2025-02-10 12:22:02.629867049 +0000 UTC m=+9.911872690 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-76fn6" (UniqueName: "kubernetes.io/projected/a3e86dc5-0523-4852-af77-3145d44eaa15-kube-api-access-76fn6") pod "busybox-58667487b6-8shfg" (UID: "a3e86dc5-0523-4852-af77-3145d44eaa15") : object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:07.025903    5644 command_runner.go:130] > Feb 10 12:22:00 multinode-032400 kubelet[1648]: E0210 12:22:00.983374    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:07.025975    5644 command_runner.go:130] > Feb 10 12:22:00 multinode-032400 kubelet[1648]: E0210 12:22:00.983940    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:07.026007    5644 command_runner.go:130] > Feb 10 12:22:02 multinode-032400 kubelet[1648]: E0210 12:22:02.548061    1648 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0210 12:23:07.026053    5644 command_runner.go:130] > Feb 10 12:22:02 multinode-032400 kubelet[1648]: E0210 12:22:02.548594    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e45a37bf-e7da-4129-bb7e-8be7dbe93e09-config-volume podName:e45a37bf-e7da-4129-bb7e-8be7dbe93e09 nodeName:}" failed. No retries permitted until 2025-02-10 12:22:06.548573918 +0000 UTC m=+13.830579559 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e45a37bf-e7da-4129-bb7e-8be7dbe93e09-config-volume") pod "coredns-668d6bf9bc-w8rr9" (UID: "e45a37bf-e7da-4129-bb7e-8be7dbe93e09") : object "kube-system"/"coredns" not registered
	I0210 12:23:07.026084    5644 command_runner.go:130] > Feb 10 12:22:02 multinode-032400 kubelet[1648]: E0210 12:22:02.648988    1648 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:07.026124    5644 command_runner.go:130] > Feb 10 12:22:02 multinode-032400 kubelet[1648]: E0210 12:22:02.649225    1648 projected.go:194] Error preparing data for projected volume kube-api-access-76fn6 for pod default/busybox-58667487b6-8shfg: object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:07.026155    5644 command_runner.go:130] > Feb 10 12:22:02 multinode-032400 kubelet[1648]: E0210 12:22:02.649292    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a3e86dc5-0523-4852-af77-3145d44eaa15-kube-api-access-76fn6 podName:a3e86dc5-0523-4852-af77-3145d44eaa15 nodeName:}" failed. No retries permitted until 2025-02-10 12:22:06.649274266 +0000 UTC m=+13.931279907 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-76fn6" (UniqueName: "kubernetes.io/projected/a3e86dc5-0523-4852-af77-3145d44eaa15-kube-api-access-76fn6") pod "busybox-58667487b6-8shfg" (UID: "a3e86dc5-0523-4852-af77-3145d44eaa15") : object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:07.026226    5644 command_runner.go:130] > Feb 10 12:22:02 multinode-032400 kubelet[1648]: E0210 12:22:02.982600    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:07.026266    5644 command_runner.go:130] > Feb 10 12:22:02 multinode-032400 kubelet[1648]: E0210 12:22:02.985279    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:07.026300    5644 command_runner.go:130] > Feb 10 12:22:03 multinode-032400 kubelet[1648]: E0210 12:22:03.006185    1648 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0210 12:23:07.026340    5644 command_runner.go:130] > Feb 10 12:22:04 multinode-032400 kubelet[1648]: E0210 12:22:04.982807    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:07.026413    5644 command_runner.go:130] > Feb 10 12:22:04 multinode-032400 kubelet[1648]: E0210 12:22:04.983881    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:07.026445    5644 command_runner.go:130] > Feb 10 12:22:06 multinode-032400 kubelet[1648]: E0210 12:22:06.583411    1648 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0210 12:23:07.026485    5644 command_runner.go:130] > Feb 10 12:22:06 multinode-032400 kubelet[1648]: E0210 12:22:06.583571    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e45a37bf-e7da-4129-bb7e-8be7dbe93e09-config-volume podName:e45a37bf-e7da-4129-bb7e-8be7dbe93e09 nodeName:}" failed. No retries permitted until 2025-02-10 12:22:14.583553968 +0000 UTC m=+21.865559509 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e45a37bf-e7da-4129-bb7e-8be7dbe93e09-config-volume") pod "coredns-668d6bf9bc-w8rr9" (UID: "e45a37bf-e7da-4129-bb7e-8be7dbe93e09") : object "kube-system"/"coredns" not registered
	I0210 12:23:07.026517    5644 command_runner.go:130] > Feb 10 12:22:06 multinode-032400 kubelet[1648]: E0210 12:22:06.684079    1648 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:07.026517    5644 command_runner.go:130] > Feb 10 12:22:06 multinode-032400 kubelet[1648]: E0210 12:22:06.684426    1648 projected.go:194] Error preparing data for projected volume kube-api-access-76fn6 for pod default/busybox-58667487b6-8shfg: object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:07.026587    5644 command_runner.go:130] > Feb 10 12:22:06 multinode-032400 kubelet[1648]: E0210 12:22:06.684521    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a3e86dc5-0523-4852-af77-3145d44eaa15-kube-api-access-76fn6 podName:a3e86dc5-0523-4852-af77-3145d44eaa15 nodeName:}" failed. No retries permitted until 2025-02-10 12:22:14.684501328 +0000 UTC m=+21.966506969 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-76fn6" (UniqueName: "kubernetes.io/projected/a3e86dc5-0523-4852-af77-3145d44eaa15-kube-api-access-76fn6") pod "busybox-58667487b6-8shfg" (UID: "a3e86dc5-0523-4852-af77-3145d44eaa15") : object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:07.026659    5644 command_runner.go:130] > Feb 10 12:22:06 multinode-032400 kubelet[1648]: E0210 12:22:06.982543    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:07.026698    5644 command_runner.go:130] > Feb 10 12:22:06 multinode-032400 kubelet[1648]: E0210 12:22:06.982901    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:07.026730    5644 command_runner.go:130] > Feb 10 12:22:08 multinode-032400 kubelet[1648]: E0210 12:22:08.007915    1648 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0210 12:23:07.026730    5644 command_runner.go:130] > Feb 10 12:22:08 multinode-032400 kubelet[1648]: E0210 12:22:08.983481    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:07.026730    5644 command_runner.go:130] > Feb 10 12:22:08 multinode-032400 kubelet[1648]: E0210 12:22:08.987585    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:07.026730    5644 command_runner.go:130] > Feb 10 12:22:10 multinode-032400 kubelet[1648]: E0210 12:22:10.981696    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:07.026730    5644 command_runner.go:130] > Feb 10 12:22:10 multinode-032400 kubelet[1648]: E0210 12:22:10.982314    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:07.026730    5644 command_runner.go:130] > Feb 10 12:22:12 multinode-032400 kubelet[1648]: E0210 12:22:12.982627    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:07.026730    5644 command_runner.go:130] > Feb 10 12:22:12 multinode-032400 kubelet[1648]: E0210 12:22:12.983351    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:07.026730    5644 command_runner.go:130] > Feb 10 12:22:13 multinode-032400 kubelet[1648]: E0210 12:22:13.008828    1648 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0210 12:23:07.026730    5644 command_runner.go:130] > Feb 10 12:22:14 multinode-032400 kubelet[1648]: E0210 12:22:14.650628    1648 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0210 12:23:07.026730    5644 command_runner.go:130] > Feb 10 12:22:14 multinode-032400 kubelet[1648]: E0210 12:22:14.650742    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e45a37bf-e7da-4129-bb7e-8be7dbe93e09-config-volume podName:e45a37bf-e7da-4129-bb7e-8be7dbe93e09 nodeName:}" failed. No retries permitted until 2025-02-10 12:22:30.650723092 +0000 UTC m=+37.932728733 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e45a37bf-e7da-4129-bb7e-8be7dbe93e09-config-volume") pod "coredns-668d6bf9bc-w8rr9" (UID: "e45a37bf-e7da-4129-bb7e-8be7dbe93e09") : object "kube-system"/"coredns" not registered
	I0210 12:23:07.026730    5644 command_runner.go:130] > Feb 10 12:22:14 multinode-032400 kubelet[1648]: E0210 12:22:14.751367    1648 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:07.026730    5644 command_runner.go:130] > Feb 10 12:22:14 multinode-032400 kubelet[1648]: E0210 12:22:14.751417    1648 projected.go:194] Error preparing data for projected volume kube-api-access-76fn6 for pod default/busybox-58667487b6-8shfg: object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:07.026730    5644 command_runner.go:130] > Feb 10 12:22:14 multinode-032400 kubelet[1648]: E0210 12:22:14.751468    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a3e86dc5-0523-4852-af77-3145d44eaa15-kube-api-access-76fn6 podName:a3e86dc5-0523-4852-af77-3145d44eaa15 nodeName:}" failed. No retries permitted until 2025-02-10 12:22:30.751452188 +0000 UTC m=+38.033457729 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-76fn6" (UniqueName: "kubernetes.io/projected/a3e86dc5-0523-4852-af77-3145d44eaa15-kube-api-access-76fn6") pod "busybox-58667487b6-8shfg" (UID: "a3e86dc5-0523-4852-af77-3145d44eaa15") : object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:07.026730    5644 command_runner.go:130] > Feb 10 12:22:14 multinode-032400 kubelet[1648]: E0210 12:22:14.983588    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:07.026730    5644 command_runner.go:130] > Feb 10 12:22:14 multinode-032400 kubelet[1648]: E0210 12:22:14.984681    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:07.026730    5644 command_runner.go:130] > Feb 10 12:22:16 multinode-032400 kubelet[1648]: E0210 12:22:16.982654    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:07.027253    5644 command_runner.go:130] > Feb 10 12:22:16 multinode-032400 kubelet[1648]: E0210 12:22:16.983601    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:07.027253    5644 command_runner.go:130] > Feb 10 12:22:18 multinode-032400 kubelet[1648]: E0210 12:22:18.010464    1648 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0210 12:23:07.027346    5644 command_runner.go:130] > Feb 10 12:22:18 multinode-032400 kubelet[1648]: E0210 12:22:18.983251    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:07.027419    5644 command_runner.go:130] > Feb 10 12:22:18 multinode-032400 kubelet[1648]: E0210 12:22:18.983452    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:07.027419    5644 command_runner.go:130] > Feb 10 12:22:20 multinode-032400 kubelet[1648]: E0210 12:22:20.982442    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:07.027419    5644 command_runner.go:130] > Feb 10 12:22:20 multinode-032400 kubelet[1648]: E0210 12:22:20.982861    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:07.027419    5644 command_runner.go:130] > Feb 10 12:22:22 multinode-032400 kubelet[1648]: E0210 12:22:22.981966    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:07.027419    5644 command_runner.go:130] > Feb 10 12:22:22 multinode-032400 kubelet[1648]: E0210 12:22:22.982555    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:07.027419    5644 command_runner.go:130] > Feb 10 12:22:23 multinode-032400 kubelet[1648]: E0210 12:22:23.011880    1648 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0210 12:23:07.027419    5644 command_runner.go:130] > Feb 10 12:22:24 multinode-032400 kubelet[1648]: E0210 12:22:24.982707    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:07.027419    5644 command_runner.go:130] > Feb 10 12:22:24 multinode-032400 kubelet[1648]: E0210 12:22:24.983675    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:07.027419    5644 command_runner.go:130] > Feb 10 12:22:26 multinode-032400 kubelet[1648]: E0210 12:22:26.983236    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:07.027419    5644 command_runner.go:130] > Feb 10 12:22:26 multinode-032400 kubelet[1648]: E0210 12:22:26.984691    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:07.027419    5644 command_runner.go:130] > Feb 10 12:22:28 multinode-032400 kubelet[1648]: E0210 12:22:28.013741    1648 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0210 12:23:07.027419    5644 command_runner.go:130] > Feb 10 12:22:28 multinode-032400 kubelet[1648]: E0210 12:22:28.989948    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:07.027419    5644 command_runner.go:130] > Feb 10 12:22:28 multinode-032400 kubelet[1648]: E0210 12:22:28.990610    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:07.027419    5644 command_runner.go:130] > Feb 10 12:22:30 multinode-032400 kubelet[1648]: E0210 12:22:30.698791    1648 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0210 12:23:07.027419    5644 command_runner.go:130] > Feb 10 12:22:30 multinode-032400 kubelet[1648]: E0210 12:22:30.698861    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e45a37bf-e7da-4129-bb7e-8be7dbe93e09-config-volume podName:e45a37bf-e7da-4129-bb7e-8be7dbe93e09 nodeName:}" failed. No retries permitted until 2025-02-10 12:23:02.698844474 +0000 UTC m=+69.980850115 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e45a37bf-e7da-4129-bb7e-8be7dbe93e09-config-volume") pod "coredns-668d6bf9bc-w8rr9" (UID: "e45a37bf-e7da-4129-bb7e-8be7dbe93e09") : object "kube-system"/"coredns" not registered
	I0210 12:23:07.027419    5644 command_runner.go:130] > Feb 10 12:22:30 multinode-032400 kubelet[1648]: E0210 12:22:30.799260    1648 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:07.027419    5644 command_runner.go:130] > Feb 10 12:22:30 multinode-032400 kubelet[1648]: E0210 12:22:30.799302    1648 projected.go:194] Error preparing data for projected volume kube-api-access-76fn6 for pod default/busybox-58667487b6-8shfg: object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:07.027982    5644 command_runner.go:130] > Feb 10 12:22:30 multinode-032400 kubelet[1648]: E0210 12:22:30.799372    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a3e86dc5-0523-4852-af77-3145d44eaa15-kube-api-access-76fn6 podName:a3e86dc5-0523-4852-af77-3145d44eaa15 nodeName:}" failed. No retries permitted until 2025-02-10 12:23:02.799354561 +0000 UTC m=+70.081360102 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-76fn6" (UniqueName: "kubernetes.io/projected/a3e86dc5-0523-4852-af77-3145d44eaa15-kube-api-access-76fn6") pod "busybox-58667487b6-8shfg" (UID: "a3e86dc5-0523-4852-af77-3145d44eaa15") : object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:07.028024    5644 command_runner.go:130] > Feb 10 12:22:30 multinode-032400 kubelet[1648]: E0210 12:22:30.983005    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:07.028024    5644 command_runner.go:130] > Feb 10 12:22:30 multinode-032400 kubelet[1648]: E0210 12:22:30.983695    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:07.028024    5644 command_runner.go:130] > Feb 10 12:22:31 multinode-032400 kubelet[1648]: I0210 12:22:31.703771    1648 scope.go:117] "RemoveContainer" containerID="182c8395f5e1754689bcf73e94e561717c684af55894a2bd4cbd9d5e8d3dff12"
	I0210 12:23:07.028024    5644 command_runner.go:130] > Feb 10 12:22:31 multinode-032400 kubelet[1648]: I0210 12:22:31.704207    1648 scope.go:117] "RemoveContainer" containerID="e57ea4c7f300b864e801bda292b196e3a270d6bd3eb9cfac6bf5f66a858f9f7c"
	I0210 12:23:07.028024    5644 command_runner.go:130] > Feb 10 12:22:31 multinode-032400 kubelet[1648]: E0210 12:22:31.704351    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(c5a7f602-d41a-4d9c-8fb3-5cf1bb41aca0)\"" pod="kube-system/storage-provisioner" podUID="c5a7f602-d41a-4d9c-8fb3-5cf1bb41aca0"
	I0210 12:23:07.028024    5644 command_runner.go:130] > Feb 10 12:22:32 multinode-032400 kubelet[1648]: E0210 12:22:32.981673    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:07.028024    5644 command_runner.go:130] > Feb 10 12:22:32 multinode-032400 kubelet[1648]: E0210 12:22:32.982991    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:07.028024    5644 command_runner.go:130] > Feb 10 12:22:33 multinode-032400 kubelet[1648]: E0210 12:22:33.015385    1648 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0210 12:23:07.028024    5644 command_runner.go:130] > Feb 10 12:22:34 multinode-032400 kubelet[1648]: E0210 12:22:34.989854    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:07.028024    5644 command_runner.go:130] > Feb 10 12:22:34 multinode-032400 kubelet[1648]: E0210 12:22:34.989994    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:07.028024    5644 command_runner.go:130] > Feb 10 12:22:36 multinode-032400 kubelet[1648]: E0210 12:22:36.982057    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:07.028024    5644 command_runner.go:130] > Feb 10 12:22:36 multinode-032400 kubelet[1648]: E0210 12:22:36.982423    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:07.028024    5644 command_runner.go:130] > Feb 10 12:22:38 multinode-032400 kubelet[1648]: E0210 12:22:38.016614    1648 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0210 12:23:07.028024    5644 command_runner.go:130] > Feb 10 12:22:38 multinode-032400 kubelet[1648]: E0210 12:22:38.982466    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:07.028024    5644 command_runner.go:130] > Feb 10 12:22:38 multinode-032400 kubelet[1648]: E0210 12:22:38.982828    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:07.028620    5644 command_runner.go:130] > Feb 10 12:22:40 multinode-032400 kubelet[1648]: E0210 12:22:40.981790    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:07.028620    5644 command_runner.go:130] > Feb 10 12:22:40 multinode-032400 kubelet[1648]: E0210 12:22:40.986032    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:07.028620    5644 command_runner.go:130] > Feb 10 12:22:42 multinode-032400 kubelet[1648]: E0210 12:22:42.983120    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:07.028620    5644 command_runner.go:130] > Feb 10 12:22:42 multinode-032400 kubelet[1648]: E0210 12:22:42.983608    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:07.028620    5644 command_runner.go:130] > Feb 10 12:22:43 multinode-032400 kubelet[1648]: E0210 12:22:43.017646    1648 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0210 12:23:07.028620    5644 command_runner.go:130] > Feb 10 12:22:43 multinode-032400 kubelet[1648]: I0210 12:22:43.982665    1648 scope.go:117] "RemoveContainer" containerID="e57ea4c7f300b864e801bda292b196e3a270d6bd3eb9cfac6bf5f66a858f9f7c"
	I0210 12:23:07.029143    5644 command_runner.go:130] > Feb 10 12:22:44 multinode-032400 kubelet[1648]: E0210 12:22:44.981714    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:07.029175    5644 command_runner.go:130] > Feb 10 12:22:44 multinode-032400 kubelet[1648]: E0210 12:22:44.982071    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:07.029175    5644 command_runner.go:130] > Feb 10 12:22:46 multinode-032400 kubelet[1648]: E0210 12:22:46.982545    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:07.029175    5644 command_runner.go:130] > Feb 10 12:22:46 multinode-032400 kubelet[1648]: E0210 12:22:46.982810    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:07.029175    5644 command_runner.go:130] > Feb 10 12:22:52 multinode-032400 kubelet[1648]: I0210 12:22:52.997714    1648 scope.go:117] "RemoveContainer" containerID="3ae31c3c37c9f6044b62cd6f5c446e15340e314275faee0b1de4d66baa716012"
	I0210 12:23:07.029175    5644 command_runner.go:130] > Feb 10 12:22:53 multinode-032400 kubelet[1648]: E0210 12:22:53.021472    1648 iptables.go:577] "Could not set up iptables canary" err=<
	I0210 12:23:07.029175    5644 command_runner.go:130] > Feb 10 12:22:53 multinode-032400 kubelet[1648]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0210 12:23:07.029175    5644 command_runner.go:130] > Feb 10 12:22:53 multinode-032400 kubelet[1648]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0210 12:23:07.029175    5644 command_runner.go:130] > Feb 10 12:22:53 multinode-032400 kubelet[1648]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0210 12:23:07.029175    5644 command_runner.go:130] > Feb 10 12:22:53 multinode-032400 kubelet[1648]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0210 12:23:07.029175    5644 command_runner.go:130] > Feb 10 12:22:53 multinode-032400 kubelet[1648]: I0210 12:22:53.042255    1648 scope.go:117] "RemoveContainer" containerID="9f1c4e9b3353b37f1f4c40563f1e312a49acd43d398e81cab61930d654fbb0d9"
	I0210 12:23:07.029175    5644 command_runner.go:130] > Feb 10 12:23:03 multinode-032400 kubelet[1648]: I0210 12:23:03.312056    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e58006549b603113870cd02660dd94559d10ecb0913a1bdd2c81910c3d60a706"
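The kubelet entries above trace a doubling retry schedule for the failed MountVolume.SetUp operations: 500ms, 1s, 2s, 4s, 8s, 16s, then 32s, for as long as the CNI config stays uninitialized and the "coredns"/"kube-root-ca.crt" objects remain unregistered. A minimal Go sketch of that schedule follows; the cap is an assumption for illustration, not kubelet's actual nestedpendingoperations constant.

    // Illustrative sketch of the doubling retry delay visible in the kubelet
    // log above (500ms -> 1s -> 2s -> ... -> 32s). The cap is an assumption,
    // not kubelet's real implementation.
    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	delay := 500 * time.Millisecond
    	const maxDelay = 2 * time.Minute // assumed cap
    	for attempt := 1; attempt <= 7; attempt++ {
    		fmt.Printf("attempt %d: no retries permitted for %v\n", attempt, delay)
    		delay *= 2
    		if delay > maxDelay {
    			delay = maxDelay
    		}
    	}
    }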
	I0210 12:23:07.075794    5644 logs.go:123] Gathering logs for dmesg ...
	I0210 12:23:07.075794    5644 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 12:23:07.098642    5644 command_runner.go:130] > [Feb10 12:20] You have booted with nomodeset. This means your GPU drivers are DISABLED
	I0210 12:23:07.098642    5644 command_runner.go:130] > [  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	I0210 12:23:07.098692    5644 command_runner.go:130] > [  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	I0210 12:23:07.098692    5644 command_runner.go:130] > [  +0.108726] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	I0210 12:23:07.098692    5644 command_runner.go:130] > [  +0.024202] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	I0210 12:23:07.098692    5644 command_runner.go:130] > [  +0.000005] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	I0210 12:23:07.098692    5644 command_runner.go:130] > [  +0.000001] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	I0210 12:23:07.098692    5644 command_runner.go:130] > [  +0.062099] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	I0210 12:23:07.098692    5644 command_runner.go:130] > [  +0.027667] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
	I0210 12:23:07.098692    5644 command_runner.go:130] >               * this clock source is slow. Consider trying other clock sources
	I0210 12:23:07.098819    5644 command_runner.go:130] > [  +6.593821] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I0210 12:23:07.098819    5644 command_runner.go:130] > [  +1.312003] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	I0210 12:23:07.098819    5644 command_runner.go:130] > [  +1.333341] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	I0210 12:23:07.098819    5644 command_runner.go:130] > [  +7.447705] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	I0210 12:23:07.098878    5644 command_runner.go:130] > [  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	I0210 12:23:07.098878    5644 command_runner.go:130] > [  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	I0210 12:23:07.098878    5644 command_runner.go:130] > [Feb10 12:21] systemd-fstab-generator[632]: Ignoring "noauto" option for root device
	I0210 12:23:07.098878    5644 command_runner.go:130] > [  +0.179509] systemd-fstab-generator[644]: Ignoring "noauto" option for root device
	I0210 12:23:07.098878    5644 command_runner.go:130] > [ +24.780658] systemd-fstab-generator[1027]: Ignoring "noauto" option for root device
	I0210 12:23:07.098878    5644 command_runner.go:130] > [  +0.095136] kauditd_printk_skb: 75 callbacks suppressed
	I0210 12:23:07.098878    5644 command_runner.go:130] > [  +0.491986] systemd-fstab-generator[1067]: Ignoring "noauto" option for root device
	I0210 12:23:07.098878    5644 command_runner.go:130] > [  +0.201276] systemd-fstab-generator[1079]: Ignoring "noauto" option for root device
	I0210 12:23:07.098878    5644 command_runner.go:130] > [  +0.245181] systemd-fstab-generator[1093]: Ignoring "noauto" option for root device
	I0210 12:23:07.098878    5644 command_runner.go:130] > [  +2.938213] systemd-fstab-generator[1334]: Ignoring "noauto" option for root device
	I0210 12:23:07.098878    5644 command_runner.go:130] > [  +0.187151] systemd-fstab-generator[1346]: Ignoring "noauto" option for root device
	I0210 12:23:07.098878    5644 command_runner.go:130] > [  +0.177538] systemd-fstab-generator[1358]: Ignoring "noauto" option for root device
	I0210 12:23:07.098878    5644 command_runner.go:130] > [  +0.257474] systemd-fstab-generator[1373]: Ignoring "noauto" option for root device
	I0210 12:23:07.098878    5644 command_runner.go:130] > [  +0.873551] systemd-fstab-generator[1498]: Ignoring "noauto" option for root device
	I0210 12:23:07.098878    5644 command_runner.go:130] > [  +0.096158] kauditd_printk_skb: 206 callbacks suppressed
	I0210 12:23:07.098878    5644 command_runner.go:130] > [  +3.180590] systemd-fstab-generator[1641]: Ignoring "noauto" option for root device
	I0210 12:23:07.098878    5644 command_runner.go:130] > [  +1.822530] kauditd_printk_skb: 69 callbacks suppressed
	I0210 12:23:07.098878    5644 command_runner.go:130] > [  +5.251804] kauditd_printk_skb: 5 callbacks suppressed
	I0210 12:23:07.098878    5644 command_runner.go:130] > [Feb10 12:22] systemd-fstab-generator[2523]: Ignoring "noauto" option for root device
	I0210 12:23:07.098878    5644 command_runner.go:130] > [ +27.335254] kauditd_printk_skb: 72 callbacks suppressed
	I0210 12:23:07.100412    5644 logs.go:123] Gathering logs for coredns [9240ce80f94c] ...
	I0210 12:23:07.100412    5644 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9240ce80f94c"
	I0210 12:23:07.132442    5644 command_runner.go:130] > .:53
	I0210 12:23:07.132442    5644 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = fef4eccd4948b7d76fb3b866fead119cfbf3f792b566bc1c23dd6fceadf676c5da93bdd173179090b2c74bc875a74fc76ef5e51dcb6a910d6a8b189470a4fe6b
	I0210 12:23:07.132442    5644 command_runner.go:130] > CoreDNS-1.11.3
	I0210 12:23:07.132442    5644 command_runner.go:130] > linux/amd64, go1.21.11, a6338e9
	I0210 12:23:07.132442    5644 command_runner.go:130] > [INFO] 127.0.0.1:39941 - 15725 "HINFO IN 3724374125237206977.4573805755432620257. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.053165792s
	I0210 12:23:09.641102    5644 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 12:23:09.667905    5644 command_runner.go:130] > 2008
	I0210 12:23:09.667905    5644 api_server.go:72] duration metric: took 1m6.4207823s to wait for apiserver process to appear ...
	I0210 12:23:09.667905    5644 api_server.go:88] waiting for apiserver healthz status ...
	I0210 12:23:09.673765    5644 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0210 12:23:09.706089    5644 command_runner.go:130] > f368bd876774
	I0210 12:23:09.706215    5644 logs.go:282] 1 containers: [f368bd876774]
	I0210 12:23:09.712941    5644 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0210 12:23:09.742583    5644 command_runner.go:130] > 2c0b97381825
	I0210 12:23:09.742583    5644 logs.go:282] 1 containers: [2c0b97381825]
	I0210 12:23:09.749585    5644 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0210 12:23:09.772165    5644 command_runner.go:130] > 9240ce80f94c
	I0210 12:23:09.772165    5644 command_runner.go:130] > c5b854dbb912
	I0210 12:23:09.773174    5644 logs.go:282] 2 containers: [9240ce80f94c c5b854dbb912]
	I0210 12:23:09.780166    5644 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0210 12:23:09.806181    5644 command_runner.go:130] > 440b6adf4512
	I0210 12:23:09.806613    5644 command_runner.go:130] > adf520f9b9d7
	I0210 12:23:09.806889    5644 logs.go:282] 2 containers: [440b6adf4512 adf520f9b9d7]
	I0210 12:23:09.815130    5644 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0210 12:23:09.845825    5644 command_runner.go:130] > 6640b4e3d696
	I0210 12:23:09.845825    5644 command_runner.go:130] > 148309413de8
	I0210 12:23:09.845825    5644 logs.go:282] 2 containers: [6640b4e3d696 148309413de8]
	I0210 12:23:09.853235    5644 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0210 12:23:09.879290    5644 command_runner.go:130] > bd1666238ae6
	I0210 12:23:09.879290    5644 command_runner.go:130] > 9408ce83d7d3
	I0210 12:23:09.883447    5644 logs.go:282] 2 containers: [bd1666238ae6 9408ce83d7d3]
	I0210 12:23:09.892307    5644 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0210 12:23:09.920491    5644 command_runner.go:130] > efc2d4164d81
	I0210 12:23:09.920491    5644 command_runner.go:130] > 4439940fa5f4
	I0210 12:23:09.920861    5644 logs.go:282] 2 containers: [efc2d4164d81 4439940fa5f4]
	I0210 12:23:09.920861    5644 logs.go:123] Gathering logs for coredns [c5b854dbb912] ...
	I0210 12:23:09.920942    5644 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5b854dbb912"
	I0210 12:23:09.955462    5644 command_runner.go:130] > .:53
	I0210 12:23:09.955529    5644 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = fef4eccd4948b7d76fb3b866fead119cfbf3f792b566bc1c23dd6fceadf676c5da93bdd173179090b2c74bc875a74fc76ef5e51dcb6a910d6a8b189470a4fe6b
	I0210 12:23:09.955529    5644 command_runner.go:130] > CoreDNS-1.11.3
	I0210 12:23:09.955529    5644 command_runner.go:130] > linux/amd64, go1.21.11, a6338e9
	I0210 12:23:09.955529    5644 command_runner.go:130] > [INFO] 127.0.0.1:57159 - 43532 "HINFO IN 6094843902663837130.722983224060727812. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.056926603s
	I0210 12:23:09.955589    5644 command_runner.go:130] > [INFO] 10.244.1.2:54851 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000385004s
	I0210 12:23:09.955589    5644 command_runner.go:130] > [INFO] 10.244.1.2:36917 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.071166415s
	I0210 12:23:09.955620    5644 command_runner.go:130] > [INFO] 10.244.1.2:35134 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.03235507s
	I0210 12:23:09.955657    5644 command_runner.go:130] > [INFO] 10.244.1.2:37507 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.161129695s
	I0210 12:23:09.955657    5644 command_runner.go:130] > [INFO] 10.244.0.3:55555 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000265804s
	I0210 12:23:09.955657    5644 command_runner.go:130] > [INFO] 10.244.0.3:44984 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000263303s
	I0210 12:23:09.955657    5644 command_runner.go:130] > [INFO] 10.244.0.3:33618 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.000192703s
	I0210 12:23:09.955730    5644 command_runner.go:130] > [INFO] 10.244.0.3:33701 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.000137201s
	I0210 12:23:09.955730    5644 command_runner.go:130] > [INFO] 10.244.1.2:48882 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000140601s
	I0210 12:23:09.955781    5644 command_runner.go:130] > [INFO] 10.244.1.2:59416 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.037067822s
	I0210 12:23:09.955801    5644 command_runner.go:130] > [INFO] 10.244.1.2:37164 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000261703s
	I0210 12:23:09.955801    5644 command_runner.go:130] > [INFO] 10.244.1.2:47541 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000172402s
	I0210 12:23:09.955801    5644 command_runner.go:130] > [INFO] 10.244.1.2:46192 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.033005976s
	I0210 12:23:09.955801    5644 command_runner.go:130] > [INFO] 10.244.1.2:33821 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000127301s
	I0210 12:23:09.955869    5644 command_runner.go:130] > [INFO] 10.244.1.2:35703 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000116001s
	I0210 12:23:09.955869    5644 command_runner.go:130] > [INFO] 10.244.1.2:37192 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000173702s
	I0210 12:23:09.955869    5644 command_runner.go:130] > [INFO] 10.244.0.3:49175 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000188802s
	I0210 12:23:09.955936    5644 command_runner.go:130] > [INFO] 10.244.0.3:55342 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000074701s
	I0210 12:23:09.955936    5644 command_runner.go:130] > [INFO] 10.244.0.3:52814 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000156002s
	I0210 12:23:09.955936    5644 command_runner.go:130] > [INFO] 10.244.0.3:36559 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000108801s
	I0210 12:23:09.956003    5644 command_runner.go:130] > [INFO] 10.244.0.3:35829 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0000551s
	I0210 12:23:09.956003    5644 command_runner.go:130] > [INFO] 10.244.0.3:38348 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000180702s
	I0210 12:23:09.956003    5644 command_runner.go:130] > [INFO] 10.244.0.3:39722 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000109501s
	I0210 12:23:09.956003    5644 command_runner.go:130] > [INFO] 10.244.0.3:40924 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000201202s
	I0210 12:23:09.956072    5644 command_runner.go:130] > [INFO] 10.244.1.2:34735 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000125801s
	I0210 12:23:09.956072    5644 command_runner.go:130] > [INFO] 10.244.1.2:36088 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000233303s
	I0210 12:23:09.956072    5644 command_runner.go:130] > [INFO] 10.244.1.2:55464 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000190403s
	I0210 12:23:09.956072    5644 command_runner.go:130] > [INFO] 10.244.1.2:57911 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000176202s
	I0210 12:23:09.956072    5644 command_runner.go:130] > [INFO] 10.244.0.3:34977 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000276903s
	I0210 12:23:09.956145    5644 command_runner.go:130] > [INFO] 10.244.0.3:60203 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000181802s
	I0210 12:23:09.956145    5644 command_runner.go:130] > [INFO] 10.244.0.3:40189 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000167902s
	I0210 12:23:09.956212    5644 command_runner.go:130] > [INFO] 10.244.0.3:59008 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000133001s
	I0210 12:23:09.956212    5644 command_runner.go:130] > [INFO] 10.244.1.2:56936 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000191602s
	I0210 12:23:09.956212    5644 command_runner.go:130] > [INFO] 10.244.1.2:36536 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000153201s
	I0210 12:23:09.956278    5644 command_runner.go:130] > [INFO] 10.244.1.2:38856 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000169502s
	I0210 12:23:09.956278    5644 command_runner.go:130] > [INFO] 10.244.1.2:53005 - 5 "PTR IN 1.128.29.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000186102s
	I0210 12:23:09.956278    5644 command_runner.go:130] > [INFO] 10.244.0.3:43643 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00092191s
	I0210 12:23:09.956278    5644 command_runner.go:130] > [INFO] 10.244.0.3:44109 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000341503s
	I0210 12:23:09.956345    5644 command_runner.go:130] > [INFO] 10.244.0.3:37196 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000095701s
	I0210 12:23:09.956345    5644 command_runner.go:130] > [INFO] 10.244.0.3:33917 - 5 "PTR IN 1.128.29.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000152102s
	I0210 12:23:09.956345    5644 command_runner.go:130] > [INFO] SIGTERM: Shutting down servers then terminating
	I0210 12:23:09.956345    5644 command_runner.go:130] > [INFO] plugin/health: Going into lameduck mode for 5s
	I0210 12:23:09.959761    5644 logs.go:123] Gathering logs for kube-scheduler [440b6adf4512] ...
	I0210 12:23:09.959827    5644 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 440b6adf4512"
	I0210 12:23:09.987722    5644 command_runner.go:130] ! I0210 12:21:56.035084       1 serving.go:386] Generated self-signed cert in-memory
	I0210 12:23:09.987809    5644 command_runner.go:130] ! W0210 12:21:58.137751       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0210 12:23:09.987809    5644 command_runner.go:130] ! W0210 12:21:58.138031       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0210 12:23:09.987809    5644 command_runner.go:130] ! W0210 12:21:58.138229       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0210 12:23:09.987924    5644 command_runner.go:130] ! W0210 12:21:58.138350       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0210 12:23:09.987924    5644 command_runner.go:130] ! I0210 12:21:58.239766       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.1"
	I0210 12:23:09.987924    5644 command_runner.go:130] ! I0210 12:21:58.239885       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0210 12:23:09.987924    5644 command_runner.go:130] ! I0210 12:21:58.246917       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0210 12:23:09.988011    5644 command_runner.go:130] ! I0210 12:21:58.246962       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0210 12:23:09.988011    5644 command_runner.go:130] ! I0210 12:21:58.248143       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0210 12:23:09.988011    5644 command_runner.go:130] ! I0210 12:21:58.248624       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0210 12:23:09.988011    5644 command_runner.go:130] ! I0210 12:21:58.347443       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0210 12:23:09.990942    5644 logs.go:123] Gathering logs for kube-proxy [148309413de8] ...
	I0210 12:23:09.991023    5644 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 148309413de8"
	I0210 12:23:10.021564    5644 command_runner.go:130] ! I0210 11:59:18.625067       1 server_linux.go:66] "Using iptables proxy"
	I0210 12:23:10.021639    5644 command_runner.go:130] ! E0210 11:59:18.658116       1 proxier.go:733] "Error cleaning up nftables rules" err=<
	I0210 12:23:10.021639    5644 command_runner.go:130] ! 	could not run nftables command: /dev/stdin:1:1-24: Error: Could not process rule: Operation not supported
	I0210 12:23:10.021707    5644 command_runner.go:130] ! 	add table ip kube-proxy
	I0210 12:23:10.021707    5644 command_runner.go:130] ! 	^^^^^^^^^^^^^^^^^^^^^^^^
	I0210 12:23:10.021707    5644 command_runner.go:130] !  >
	I0210 12:23:10.021734    5644 command_runner.go:130] ! E0210 11:59:18.694686       1 proxier.go:733] "Error cleaning up nftables rules" err=<
	I0210 12:23:10.021759    5644 command_runner.go:130] ! 	could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
	I0210 12:23:10.021759    5644 command_runner.go:130] ! 	add table ip6 kube-proxy
	I0210 12:23:10.021759    5644 command_runner.go:130] ! 	^^^^^^^^^^^^^^^^^^^^^^^^^
	I0210 12:23:10.021759    5644 command_runner.go:130] !  >
	I0210 12:23:10.021759    5644 command_runner.go:130] ! I0210 11:59:19.251769       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["172.29.136.201"]
	I0210 12:23:10.021830    5644 command_runner.go:130] ! E0210 11:59:19.252111       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0210 12:23:10.021830    5644 command_runner.go:130] ! I0210 11:59:19.312427       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0210 12:23:10.021830    5644 command_runner.go:130] ! I0210 11:59:19.312556       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0210 12:23:10.021906    5644 command_runner.go:130] ! I0210 11:59:19.312585       1 server_linux.go:170] "Using iptables Proxier"
	I0210 12:23:10.021927    5644 command_runner.go:130] ! I0210 11:59:19.317423       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0210 12:23:10.021927    5644 command_runner.go:130] ! I0210 11:59:19.318586       1 server.go:497] "Version info" version="v1.32.1"
	I0210 12:23:10.021991    5644 command_runner.go:130] ! I0210 11:59:19.318681       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0210 12:23:10.022011    5644 command_runner.go:130] ! I0210 11:59:19.321084       1 config.go:199] "Starting service config controller"
	I0210 12:23:10.022011    5644 command_runner.go:130] ! I0210 11:59:19.321134       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0210 12:23:10.022011    5644 command_runner.go:130] ! I0210 11:59:19.321326       1 config.go:105] "Starting endpoint slice config controller"
	I0210 12:23:10.022075    5644 command_runner.go:130] ! I0210 11:59:19.321339       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0210 12:23:10.022075    5644 command_runner.go:130] ! I0210 11:59:19.322061       1 config.go:329] "Starting node config controller"
	I0210 12:23:10.022075    5644 command_runner.go:130] ! I0210 11:59:19.322091       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0210 12:23:10.022075    5644 command_runner.go:130] ! I0210 11:59:19.421972       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0210 12:23:10.022140    5644 command_runner.go:130] ! I0210 11:59:19.422030       1 shared_informer.go:320] Caches are synced for service config
	I0210 12:23:10.022140    5644 command_runner.go:130] ! I0210 11:59:19.423298       1 shared_informer.go:320] Caches are synced for node config
	I0210 12:23:10.028111    5644 logs.go:123] Gathering logs for kubelet ...
	I0210 12:23:10.028111    5644 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 12:23:10.063433    5644 command_runner.go:130] > Feb 10 12:21:49 multinode-032400 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0210 12:23:10.063433    5644 command_runner.go:130] > Feb 10 12:21:49 multinode-032400 kubelet[1505]: I0210 12:21:49.803865    1505 server.go:520] "Kubelet version" kubeletVersion="v1.32.1"
	I0210 12:23:10.063433    5644 command_runner.go:130] > Feb 10 12:21:49 multinode-032400 kubelet[1505]: I0210 12:21:49.804150    1505 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0210 12:23:10.063529    5644 command_runner.go:130] > Feb 10 12:21:49 multinode-032400 kubelet[1505]: I0210 12:21:49.806616    1505 server.go:954] "Client rotation is on, will bootstrap in background"
	I0210 12:23:10.063529    5644 command_runner.go:130] > Feb 10 12:21:49 multinode-032400 kubelet[1505]: E0210 12:21:49.806785    1505 run.go:72] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0210 12:23:10.063529    5644 command_runner.go:130] > Feb 10 12:21:49 multinode-032400 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0210 12:23:10.063585    5644 command_runner.go:130] > Feb 10 12:21:49 multinode-032400 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0210 12:23:10.063585    5644 command_runner.go:130] > Feb 10 12:21:50 multinode-032400 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	I0210 12:23:10.063585    5644 command_runner.go:130] > Feb 10 12:21:50 multinode-032400 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0210 12:23:10.063585    5644 command_runner.go:130] > Feb 10 12:21:50 multinode-032400 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0210 12:23:10.063585    5644 command_runner.go:130] > Feb 10 12:21:50 multinode-032400 kubelet[1561]: I0210 12:21:50.532407    1561 server.go:520] "Kubelet version" kubeletVersion="v1.32.1"
	I0210 12:23:10.063585    5644 command_runner.go:130] > Feb 10 12:21:50 multinode-032400 kubelet[1561]: I0210 12:21:50.532561    1561 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0210 12:23:10.063585    5644 command_runner.go:130] > Feb 10 12:21:50 multinode-032400 kubelet[1561]: I0210 12:21:50.532946    1561 server.go:954] "Client rotation is on, will bootstrap in background"
	I0210 12:23:10.063585    5644 command_runner.go:130] > Feb 10 12:21:50 multinode-032400 kubelet[1561]: E0210 12:21:50.533006    1561 run.go:72] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0210 12:23:10.063585    5644 command_runner.go:130] > Feb 10 12:21:50 multinode-032400 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0210 12:23:10.063585    5644 command_runner.go:130] > Feb 10 12:21:50 multinode-032400 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0210 12:23:10.063585    5644 command_runner.go:130] > Feb 10 12:21:50 multinode-032400 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0210 12:23:10.063585    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0210 12:23:10.063585    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.804000    1648 server.go:520] "Kubelet version" kubeletVersion="v1.32.1"
	I0210 12:23:10.063585    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.804091    1648 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0210 12:23:10.063585    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.807532    1648 server.go:954] "Client rotation is on, will bootstrap in background"
	I0210 12:23:10.063585    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.810518    1648 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	I0210 12:23:10.063585    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.831401    1648 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0210 12:23:10.063585    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: E0210 12:21:52.849603    1648 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
	I0210 12:23:10.063585    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.849766    1648 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
	I0210 12:23:10.063585    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.855712    1648 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
	I0210 12:23:10.063585    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.855847    1648 server.go:841] "NoSwap is set due to memorySwapBehavior not specified" memorySwapBehavior="" FailSwapOn=false
	I0210 12:23:10.063585    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.857145    1648 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	I0210 12:23:10.063585    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.857321    1648 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"multinode-032400","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
	I0210 12:23:10.063585    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.857850    1648 topology_manager.go:138] "Creating topology manager with none policy"
	I0210 12:23:10.063585    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.857944    1648 container_manager_linux.go:304] "Creating device plugin manager"
	I0210 12:23:10.064110    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.858196    1648 state_mem.go:36] "Initialized new in-memory state store"
	I0210 12:23:10.064110    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.860593    1648 kubelet.go:446] "Attempting to sync node with API server"
	I0210 12:23:10.064110    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.860751    1648 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
	I0210 12:23:10.064152    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.860860    1648 kubelet.go:352] "Adding apiserver pod source"
	I0210 12:23:10.064152    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.860954    1648 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	I0210 12:23:10.064152    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.866997    1648 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="docker" version="27.4.0" apiVersion="v1"
	I0210 12:23:10.064152    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: W0210 12:21:52.869638    1648 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-032400&limit=500&resourceVersion=0": dial tcp 172.29.129.181:8443: connect: connection refused
	I0210 12:23:10.064152    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: E0210 12:21:52.869825    1648 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-032400&limit=500&resourceVersion=0\": dial tcp 172.29.129.181:8443: connect: connection refused" logger="UnhandledError"
	I0210 12:23:10.064152    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.872904    1648 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
	I0210 12:23:10.064152    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: W0210 12:21:52.873510    1648 probe.go:272] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
	I0210 12:23:10.064152    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: W0210 12:21:52.885546    1648 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.29.129.181:8443: connect: connection refused
	I0210 12:23:10.064152    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: E0210 12:21:52.885641    1648 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.29.129.181:8443: connect: connection refused" logger="UnhandledError"
	I0210 12:23:10.064152    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.886839    1648 watchdog_linux.go:99] "Systemd watchdog is not enabled"
	I0210 12:23:10.064152    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.886957    1648 server.go:1287] "Started kubelet"
	I0210 12:23:10.064152    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.895251    1648 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
	I0210 12:23:10.064152    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.897245    1648 server.go:490] "Adding debug handlers to kubelet server"
	I0210 12:23:10.064152    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.899864    1648 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
	I0210 12:23:10.064152    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.900113    1648 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
	I0210 12:23:10.064152    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.900986    1648 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	I0210 12:23:10.064152    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.901519    1648 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
	I0210 12:23:10.064152    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: E0210 12:21:52.904529    1648 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 172.29.129.181:8443: connect: connection refused" event="&Event{ObjectMeta:{multinode-032400.1822d8316b7ef394  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:multinode-032400,UID:multinode-032400,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:multinode-032400,},FirstTimestamp:2025-02-10 12:21:52.886911892 +0000 UTC m=+0.168917533,LastTimestamp:2025-02-10 12:21:52.886911892 +0000 UTC m=+0.168917533,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:multinode-032400,}"
	I0210 12:23:10.064152    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.918528    1648 volume_manager.go:297] "Starting Kubelet Volume Manager"
	I0210 12:23:10.064152    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: E0210 12:21:52.918989    1648 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"multinode-032400\" not found"
	I0210 12:23:10.064152    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.920907    1648 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
	I0210 12:23:10.064152    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.932441    1648 reconciler.go:26] "Reconciler: start to sync state"
	I0210 12:23:10.064152    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: E0210 12:21:52.940004    1648 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-032400?timeout=10s\": dial tcp 172.29.129.181:8443: connect: connection refused" interval="200ms"
	I0210 12:23:10.064705    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.943065    1648 factory.go:221] Registration of the systemd container factory successfully
	I0210 12:23:10.064705    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.943251    1648 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
	I0210 12:23:10.064705    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.943289    1648 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
	I0210 12:23:10.064705    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: W0210 12:21:52.954939    1648 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.29.129.181:8443: connect: connection refused
	I0210 12:23:10.064847    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: E0210 12:21:52.956281    1648 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.29.129.181:8443: connect: connection refused" logger="UnhandledError"
	I0210 12:23:10.064847    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.962018    1648 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
	I0210 12:23:10.064847    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.981120    1648 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
	I0210 12:23:10.064908    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.981191    1648 status_manager.go:227] "Starting to sync pod status with apiserver"
	I0210 12:23:10.064908    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.981212    1648 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
	I0210 12:23:10.064964    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.981234    1648 kubelet.go:2388] "Starting kubelet main sync loop"
	I0210 12:23:10.064964    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: E0210 12:21:52.981274    1648 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
	I0210 12:23:10.065006    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: W0210 12:21:52.985240    1648 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.29.129.181:8443: connect: connection refused
	I0210 12:23:10.065042    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: E0210 12:21:52.985423    1648 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.29.129.181:8443: connect: connection refused" logger="UnhandledError"
	I0210 12:23:10.065114    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.986221    1648 cpu_manager.go:221] "Starting CPU manager" policy="none"
	I0210 12:23:10.065114    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.986328    1648 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
	I0210 12:23:10.065154    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.986418    1648 state_mem.go:36] "Initialized new in-memory state store"
	I0210 12:23:10.065154    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.988035    1648 state_mem.go:88] "Updated default CPUSet" cpuSet=""
	I0210 12:23:10.065190    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.988140    1648 state_mem.go:96] "Updated CPUSet assignments" assignments={}
	I0210 12:23:10.065190    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.988290    1648 policy_none.go:49] "None policy: Start"
	I0210 12:23:10.065228    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.988339    1648 memory_manager.go:186] "Starting memorymanager" policy="None"
	I0210 12:23:10.065251    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.988429    1648 state_mem.go:35] "Initializing new in-memory state store"
	I0210 12:23:10.065278    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.989333    1648 state_mem.go:75] "Updated machine memory state"
	I0210 12:23:10.065302    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.996399    1648 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
	I0210 12:23:10.065302    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.996729    1648 eviction_manager.go:189] "Eviction manager: starting control loop"
	I0210 12:23:10.065386    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.996761    1648 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
	I0210 12:23:10.065415    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.999441    1648 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
	I0210 12:23:10.065415    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: E0210 12:21:53.001480    1648 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
	I0210 12:23:10.065451    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: E0210 12:21:53.001594    1648 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"multinode-032400\" not found"
	I0210 12:23:10.065451    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: E0210 12:21:53.010100    1648 iptables.go:577] "Could not set up iptables canary" err=<
	I0210 12:23:10.065521    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0210 12:23:10.065539    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0210 12:23:10.065539    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0210 12:23:10.065539    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0210 12:23:10.065587    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.082130    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b2de8e426f22f9496390d2d8a09910a842da6580933349d6688cd4b1320ea550"
	I0210 12:23:10.065587    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.082209    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed034f1578c961d966543fd6901ac487893b2d9c55235293f852b6eba2ffef59"
	I0210 12:23:10.065587    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.082229    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26d9e119a02c5d37077ce2b8aaf0eaf39a16e310dfa75b55d4072355af0799f3"
	I0210 12:23:10.065587    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.085961    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a70f430921ec259ed18ded033aa4e0f2018d948e5ebeaaecbd04d96a1cf7a198"
	I0210 12:23:10.065587    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.092339    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d33433fbce4800c4588851f91b9c8bbf2f6cb1549a9a6e7003bd3ad9ab95e6c9"
	I0210 12:23:10.065587    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: E0210 12:21:53.095136    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-032400\" not found" node="multinode-032400"
	I0210 12:23:10.065587    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.097863    1648 kubelet_node_status.go:76] "Attempting to register node" node="multinode-032400"
	I0210 12:23:10.065587    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: E0210 12:21:53.099090    1648 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.29.129.181:8443: connect: connection refused" node="multinode-032400"
	I0210 12:23:10.065587    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.108335    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ccc0a4e7b5c734e34ed5ec3983417c1afdf297c58cd82400d7f9d24b8f82cac"
	I0210 12:23:10.065587    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.127358    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="794995bca6b5b46f3dd1b76ae2e6fa45046bd172772508f61b870824d72e297b"
	I0210 12:23:10.065587    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: E0210 12:21:53.141735    1648 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-032400?timeout=10s\": dial tcp 172.29.129.181:8443: connect: connection refused" interval="400ms"
	I0210 12:23:10.065587    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.142956    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c55184f16ccb79ec11ca696b1c88e9db9a9568bbeeccb401543d2aabab9daa4"
	I0210 12:23:10.065587    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.145714    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a8fa4178fdde9f0146fd2a294125bbe5-usr-share-ca-certificates\") pod \"kube-apiserver-multinode-032400\" (UID: \"a8fa4178fdde9f0146fd2a294125bbe5\") " pod="kube-system/kube-apiserver-multinode-032400"
	I0210 12:23:10.065587    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.145888    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0b073beca2a0e25ad8459b3107e863e7-flexvolume-dir\") pod \"kube-controller-manager-multinode-032400\" (UID: \"0b073beca2a0e25ad8459b3107e863e7\") " pod="kube-system/kube-controller-manager-multinode-032400"
	I0210 12:23:10.065587    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.145935    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b073beca2a0e25ad8459b3107e863e7-kubeconfig\") pod \"kube-controller-manager-multinode-032400\" (UID: \"0b073beca2a0e25ad8459b3107e863e7\") " pod="kube-system/kube-controller-manager-multinode-032400"
	I0210 12:23:10.065587    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.146017    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0b073beca2a0e25ad8459b3107e863e7-usr-share-ca-certificates\") pod \"kube-controller-manager-multinode-032400\" (UID: \"0b073beca2a0e25ad8459b3107e863e7\") " pod="kube-system/kube-controller-manager-multinode-032400"
	I0210 12:23:10.065587    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.146081    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/a56aca3861e4dd5038c600d32d99becd-etcd-certs\") pod \"etcd-multinode-032400\" (UID: \"a56aca3861e4dd5038c600d32d99becd\") " pod="kube-system/etcd-multinode-032400"
	I0210 12:23:10.065587    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.146213    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/a56aca3861e4dd5038c600d32d99becd-etcd-data\") pod \"etcd-multinode-032400\" (UID: \"a56aca3861e4dd5038c600d32d99becd\") " pod="kube-system/etcd-multinode-032400"
	I0210 12:23:10.065587    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.146299    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a8fa4178fdde9f0146fd2a294125bbe5-ca-certs\") pod \"kube-apiserver-multinode-032400\" (UID: \"a8fa4178fdde9f0146fd2a294125bbe5\") " pod="kube-system/kube-apiserver-multinode-032400"
	I0210 12:23:10.066113    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.146332    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a8fa4178fdde9f0146fd2a294125bbe5-k8s-certs\") pod \"kube-apiserver-multinode-032400\" (UID: \"a8fa4178fdde9f0146fd2a294125bbe5\") " pod="kube-system/kube-apiserver-multinode-032400"
	I0210 12:23:10.066182    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.146395    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e23fa9a4a53da4e595583d7b35b39311-kubeconfig\") pod \"kube-scheduler-multinode-032400\" (UID: \"e23fa9a4a53da4e595583d7b35b39311\") " pod="kube-system/kube-scheduler-multinode-032400"
	I0210 12:23:10.066198    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.146480    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0b073beca2a0e25ad8459b3107e863e7-ca-certs\") pod \"kube-controller-manager-multinode-032400\" (UID: \"0b073beca2a0e25ad8459b3107e863e7\") " pod="kube-system/kube-controller-manager-multinode-032400"
	I0210 12:23:10.066248    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.146687    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0b073beca2a0e25ad8459b3107e863e7-k8s-certs\") pod \"kube-controller-manager-multinode-032400\" (UID: \"0b073beca2a0e25ad8459b3107e863e7\") " pod="kube-system/kube-controller-manager-multinode-032400"
	I0210 12:23:10.066248    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.162937    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee16b295f58db486a506e81b42b011f8d6d50d2a52f1bea55481552cfb51c94e"
	I0210 12:23:10.066248    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: E0210 12:21:53.165529    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-032400\" not found" node="multinode-032400"
	I0210 12:23:10.066248    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: E0210 12:21:53.167432    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-032400\" not found" node="multinode-032400"
	I0210 12:23:10.066248    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: E0210 12:21:53.168502    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-032400\" not found" node="multinode-032400"
	I0210 12:23:10.066248    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.301329    1648 kubelet_node_status.go:76] "Attempting to register node" node="multinode-032400"
	I0210 12:23:10.066248    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: E0210 12:21:53.303037    1648 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.29.129.181:8443: connect: connection refused" node="multinode-032400"
	I0210 12:23:10.066248    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: E0210 12:21:53.544572    1648 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-032400?timeout=10s\": dial tcp 172.29.129.181:8443: connect: connection refused" interval="800ms"
	I0210 12:23:10.066248    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.704678    1648 kubelet_node_status.go:76] "Attempting to register node" node="multinode-032400"
	I0210 12:23:10.066248    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: E0210 12:21:53.705877    1648 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.29.129.181:8443: connect: connection refused" node="multinode-032400"
	I0210 12:23:10.066248    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: W0210 12:21:53.746812    1648 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.29.129.181:8443: connect: connection refused
	I0210 12:23:10.066248    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: E0210 12:21:53.747029    1648 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.29.129.181:8443: connect: connection refused" logger="UnhandledError"
	I0210 12:23:10.066248    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: W0210 12:21:53.867058    1648 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-032400&limit=500&resourceVersion=0": dial tcp 172.29.129.181:8443: connect: connection refused
	I0210 12:23:10.066248    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: E0210 12:21:53.867234    1648 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-032400&limit=500&resourceVersion=0\": dial tcp 172.29.129.181:8443: connect: connection refused" logger="UnhandledError"
	I0210 12:23:10.066248    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 kubelet[1648]: W0210 12:21:54.165583    1648 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.29.129.181:8443: connect: connection refused
	I0210 12:23:10.066248    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 kubelet[1648]: E0210 12:21:54.165709    1648 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.29.129.181:8443: connect: connection refused" logger="UnhandledError"
	I0210 12:23:10.066248    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 kubelet[1648]: E0210 12:21:54.346089    1648 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-032400?timeout=10s\": dial tcp 172.29.129.181:8443: connect: connection refused" interval="1.6s"
	I0210 12:23:10.066248    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 kubelet[1648]: I0210 12:21:54.507569    1648 kubelet_node_status.go:76] "Attempting to register node" node="multinode-032400"
	I0210 12:23:10.066769    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 kubelet[1648]: E0210 12:21:54.509216    1648 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.29.129.181:8443: connect: connection refused" node="multinode-032400"
	I0210 12:23:10.066769    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 kubelet[1648]: W0210 12:21:54.509373    1648 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.29.129.181:8443: connect: connection refused
	I0210 12:23:10.066769    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 kubelet[1648]: E0210 12:21:54.509471    1648 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.29.129.181:8443: connect: connection refused" logger="UnhandledError"
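The reflector warnings and "Unhandled Error" pairs above come from client-go informers inside the kubelet: each reflector performs a list followed by a watch for one resource type (Service, Node, RuntimeClass, CSIDriver), and while the apiserver at 172.29.129.181:8443 is down every list fails with connection refused and is retried with backoff. A minimal sketch of the same list/watch wiring with client-go, assuming the k8s.io/client-go module is available; the kubeconfig path is a hypothetical placeholder, not taken from this run:

    package main

    import (
        "time"

        v1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/fields"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/cache"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Hypothetical kubeconfig location; adjust for your environment.
        config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(config)

        // List and watch Services, the resource failing in the entries above.
        lw := cache.NewListWatchFromClient(
            client.CoreV1().RESTClient(), "services",
            v1.NamespaceAll, fields.Everything())
        store := cache.NewStore(cache.MetaNamespaceKeyFunc)
        reflector := cache.NewReflector(lw, &v1.Service{}, store, 10*time.Minute)

        // Run relists with backoff while the apiserver is unreachable,
        // producing errors of the same shape as those captured above.
        stop := make(chan struct{})
        reflector.Run(stop)
    }

Once the apiserver behind control-plane.minikube.internal:8443 comes back, the next relist succeeds and the warnings stop, which matches these errors going quiet after 12:21:58 in the log.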
	I0210 12:23:10.066769    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 kubelet[1648]: E0210 12:21:54.618443    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-032400\" not found" node="multinode-032400"
	I0210 12:23:10.066769    5644 command_runner.go:130] > Feb 10 12:21:55 multinode-032400 kubelet[1648]: E0210 12:21:55.643834    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-032400\" not found" node="multinode-032400"
	I0210 12:23:10.066915    5644 command_runner.go:130] > Feb 10 12:21:55 multinode-032400 kubelet[1648]: E0210 12:21:55.653673    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-032400\" not found" node="multinode-032400"
	I0210 12:23:10.066915    5644 command_runner.go:130] > Feb 10 12:21:55 multinode-032400 kubelet[1648]: E0210 12:21:55.663228    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-032400\" not found" node="multinode-032400"
	I0210 12:23:10.066992    5644 command_runner.go:130] > Feb 10 12:21:55 multinode-032400 kubelet[1648]: E0210 12:21:55.676257    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-032400\" not found" node="multinode-032400"
	I0210 12:23:10.067056    5644 command_runner.go:130] > Feb 10 12:21:56 multinode-032400 kubelet[1648]: I0210 12:21:56.111234    1648 kubelet_node_status.go:76] "Attempting to register node" node="multinode-032400"
	I0210 12:23:10.067056    5644 command_runner.go:130] > Feb 10 12:21:56 multinode-032400 kubelet[1648]: E0210 12:21:56.686207    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-032400\" not found" node="multinode-032400"
	I0210 12:23:10.067056    5644 command_runner.go:130] > Feb 10 12:21:56 multinode-032400 kubelet[1648]: E0210 12:21:56.686620    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-032400\" not found" node="multinode-032400"
	I0210 12:23:10.067138    5644 command_runner.go:130] > Feb 10 12:21:56 multinode-032400 kubelet[1648]: E0210 12:21:56.689831    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-032400\" not found" node="multinode-032400"
	I0210 12:23:10.067138    5644 command_runner.go:130] > Feb 10 12:21:56 multinode-032400 kubelet[1648]: E0210 12:21:56.690227    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-032400\" not found" node="multinode-032400"
	I0210 12:23:10.067138    5644 command_runner.go:130] > Feb 10 12:21:57 multinode-032400 kubelet[1648]: E0210 12:21:57.703954    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-032400\" not found" node="multinode-032400"
	I0210 12:23:10.067202    5644 command_runner.go:130] > Feb 10 12:21:57 multinode-032400 kubelet[1648]: E0210 12:21:57.704934    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-032400\" not found" node="multinode-032400"
	I0210 12:23:10.067202    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: I0210 12:21:58.221288    1648 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-multinode-032400"
	I0210 12:23:10.067268    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: E0210 12:21:58.248691    1648 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-multinode-032400\" already exists" pod="kube-system/kube-scheduler-multinode-032400"
	I0210 12:23:10.067268    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: I0210 12:21:58.248734    1648 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-multinode-032400"
	I0210 12:23:10.067268    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: E0210 12:21:58.268853    1648 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"etcd-multinode-032400\" already exists" pod="kube-system/etcd-multinode-032400"
	I0210 12:23:10.067331    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: I0210 12:21:58.268905    1648 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-multinode-032400"
	I0210 12:23:10.067331    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: E0210 12:21:58.294680    1648 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-multinode-032400\" already exists" pod="kube-system/kube-apiserver-multinode-032400"
	I0210 12:23:10.067331    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: I0210 12:21:58.294713    1648 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-multinode-032400"
	I0210 12:23:10.067399    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: E0210 12:21:58.310526    1648 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-multinode-032400\" already exists" pod="kube-system/kube-controller-manager-multinode-032400"
	I0210 12:23:10.067399    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: I0210 12:21:58.310792    1648 kubelet_node_status.go:125] "Node was previously registered" node="multinode-032400"
	I0210 12:23:10.067466    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: I0210 12:21:58.310970    1648 kubelet_node_status.go:79] "Successfully registered node" node="multinode-032400"
	I0210 12:23:10.067466    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: I0210 12:21:58.311192    1648 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	I0210 12:23:10.067466    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: I0210 12:21:58.312560    1648 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	I0210 12:23:10.067536    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: I0210 12:21:58.314869    1648 setters.go:602] "Node became not ready" node="multinode-032400" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-02-10T12:21:58Z","lastTransitionTime":"2025-02-10T12:21:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"}
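The sequence above is the kubelet retrying node registration against an apiserver that is still starting: each "Attempting to register node" POST to /api/v1/nodes fails with connection refused until 12:21:58, when the kubelet finds the Node object already present ("Node was previously registered"), reuses it, pushes the PodCIDR, and reports NotReady because the CNI config is not yet initialized. A minimal sketch of such a register-until-accepted loop; register() is a hypothetical stand-in for the kubelet's actual call:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // register simulates the kubelet's registration call; failing the first
    // attempts stands in for the "connection refused" errors seen above.
    func register(attempt int) error {
        if attempt < 4 {
            return errors.New("connect: connection refused")
        }
        return nil
    }

    func main() {
        for attempt := 1; ; attempt++ {
            if err := register(attempt); err != nil {
                fmt.Printf("attempt %d: unable to register node: %v\n", attempt, err)
                time.Sleep(500 * time.Millisecond)
                continue
            }
            fmt.Println("successfully registered node")
            return
        }
    }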
	I0210 12:23:10.067536    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: I0210 12:21:58.886082    1648 apiserver.go:52] "Watching apiserver"
	I0210 12:23:10.067536    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: I0210 12:21:58.891928    1648 kubelet.go:3183] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-032400" podUID="7a35472d-d7c0-4c7d-a5b1-e094370af1c2"
	I0210 12:23:10.067607    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: I0210 12:21:58.892432    1648 kubelet.go:3183] "Trying to delete pod" pod="kube-system/etcd-multinode-032400" podUID="34db146c-e09d-4959-8325-d4453dfcfd62"
	I0210 12:23:10.067607    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: E0210 12:21:58.894995    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:10.067671    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: E0210 12:21:58.896093    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:10.067671    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: I0210 12:21:58.922102    1648 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	I0210 12:23:10.067671    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: I0210 12:21:58.923504    1648 kubelet.go:3189] "Deleted mirror pod as it didn't match the static Pod" pod="kube-system/kube-apiserver-multinode-032400"
	I0210 12:23:10.067737    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: I0210 12:21:58.923547    1648 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-multinode-032400"
	I0210 12:23:10.067737    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: I0210 12:21:58.964092    1648 kubelet.go:3189] "Deleted mirror pod as it didn't match the static Pod" pod="kube-system/etcd-multinode-032400"
	I0210 12:23:10.067737    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: I0210 12:21:58.964319    1648 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-multinode-032400"
	I0210 12:23:10.067829    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: I0210 12:21:58.992108    1648 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d9460e1ac793566f90a359ec3476894" path="/var/lib/kubelet/pods/3d9460e1ac793566f90a359ec3476894/volumes"
	I0210 12:23:10.067829    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: I0210 12:21:58.994546    1648 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77dd7f51968a92a0d804d49c0a3127ad" path="/var/lib/kubelet/pods/77dd7f51968a92a0d804d49c0a3127ad/volumes"
	I0210 12:23:10.067829    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: I0210 12:21:59.015977    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/c5a7f602-d41a-4d9c-8fb3-5cf1bb41aca0-tmp\") pod \"storage-provisioner\" (UID: \"c5a7f602-d41a-4d9c-8fb3-5cf1bb41aca0\") " pod="kube-system/storage-provisioner"
	I0210 12:23:10.067898    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: I0210 12:21:59.016010    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9ad7f281-f022-4f3b-b206-39ce42713cf9-lib-modules\") pod \"kube-proxy-rrh82\" (UID: \"9ad7f281-f022-4f3b-b206-39ce42713cf9\") " pod="kube-system/kube-proxy-rrh82"
	I0210 12:23:10.067962    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: I0210 12:21:59.016032    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/09de881b-fbc4-4a8f-b8d7-c46dd3f010ad-cni-cfg\") pod \"kindnet-c2mb8\" (UID: \"09de881b-fbc4-4a8f-b8d7-c46dd3f010ad\") " pod="kube-system/kindnet-c2mb8"
	I0210 12:23:10.067962    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: I0210 12:21:59.016093    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9ad7f281-f022-4f3b-b206-39ce42713cf9-xtables-lock\") pod \"kube-proxy-rrh82\" (UID: \"9ad7f281-f022-4f3b-b206-39ce42713cf9\") " pod="kube-system/kube-proxy-rrh82"
	I0210 12:23:10.068028    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: I0210 12:21:59.016112    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/09de881b-fbc4-4a8f-b8d7-c46dd3f010ad-xtables-lock\") pod \"kindnet-c2mb8\" (UID: \"09de881b-fbc4-4a8f-b8d7-c46dd3f010ad\") " pod="kube-system/kindnet-c2mb8"
	I0210 12:23:10.068028    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: I0210 12:21:59.016275    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/09de881b-fbc4-4a8f-b8d7-c46dd3f010ad-lib-modules\") pod \"kindnet-c2mb8\" (UID: \"09de881b-fbc4-4a8f-b8d7-c46dd3f010ad\") " pod="kube-system/kindnet-c2mb8"
	I0210 12:23:10.068094    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: E0210 12:21:59.016537    1648 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0210 12:23:10.068094    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: E0210 12:21:59.016667    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e45a37bf-e7da-4129-bb7e-8be7dbe93e09-config-volume podName:e45a37bf-e7da-4129-bb7e-8be7dbe93e09 nodeName:}" failed. No retries permitted until 2025-02-10 12:21:59.516646386 +0000 UTC m=+6.798651927 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e45a37bf-e7da-4129-bb7e-8be7dbe93e09-config-volume") pod "coredns-668d6bf9bc-w8rr9" (UID: "e45a37bf-e7da-4129-bb7e-8be7dbe93e09") : object "kube-system"/"coredns" not registered
	I0210 12:23:10.068160    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: I0210 12:21:59.031609    1648 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-multinode-032400" podStartSLOduration=1.031591606 podStartE2EDuration="1.031591606s" podCreationTimestamp="2025-02-10 12:21:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-10 12:21:59.030067233 +0000 UTC m=+6.312072774" watchObservedRunningTime="2025-02-10 12:21:59.031591606 +0000 UTC m=+6.313597247"
	I0210 12:23:10.068222    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: I0210 12:21:59.032295    1648 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-multinode-032400" podStartSLOduration=1.032275839 podStartE2EDuration="1.032275839s" podCreationTimestamp="2025-02-10 12:21:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-10 12:21:59.012105568 +0000 UTC m=+6.294111109" watchObservedRunningTime="2025-02-10 12:21:59.032275839 +0000 UTC m=+6.314281380"
	I0210 12:23:10.068222    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: I0210 12:21:59.063318    1648 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	I0210 12:23:10.068294    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: E0210 12:21:59.095091    1648 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:10.068294    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: E0210 12:21:59.095402    1648 projected.go:194] Error preparing data for projected volume kube-api-access-76fn6 for pod default/busybox-58667487b6-8shfg: object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:10.068355    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: E0210 12:21:59.095525    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a3e86dc5-0523-4852-af77-3145d44eaa15-kube-api-access-76fn6 podName:a3e86dc5-0523-4852-af77-3145d44eaa15 nodeName:}" failed. No retries permitted until 2025-02-10 12:21:59.595504083 +0000 UTC m=+6.877509724 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-76fn6" (UniqueName: "kubernetes.io/projected/a3e86dc5-0523-4852-af77-3145d44eaa15-kube-api-access-76fn6") pod "busybox-58667487b6-8shfg" (UID: "a3e86dc5-0523-4852-af77-3145d44eaa15") : object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:10.068408    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: E0210 12:21:59.520926    1648 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0210 12:23:10.068467    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: E0210 12:21:59.521021    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e45a37bf-e7da-4129-bb7e-8be7dbe93e09-config-volume podName:e45a37bf-e7da-4129-bb7e-8be7dbe93e09 nodeName:}" failed. No retries permitted until 2025-02-10 12:22:00.521001667 +0000 UTC m=+7.803007208 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e45a37bf-e7da-4129-bb7e-8be7dbe93e09-config-volume") pod "coredns-668d6bf9bc-w8rr9" (UID: "e45a37bf-e7da-4129-bb7e-8be7dbe93e09") : object "kube-system"/"coredns" not registered
	I0210 12:23:10.068514    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: E0210 12:21:59.622412    1648 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:10.068514    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: E0210 12:21:59.622461    1648 projected.go:194] Error preparing data for projected volume kube-api-access-76fn6 for pod default/busybox-58667487b6-8shfg: object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:10.068514    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: E0210 12:21:59.622532    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a3e86dc5-0523-4852-af77-3145d44eaa15-kube-api-access-76fn6 podName:a3e86dc5-0523-4852-af77-3145d44eaa15 nodeName:}" failed. No retries permitted until 2025-02-10 12:22:00.622511154 +0000 UTC m=+7.904516695 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-76fn6" (UniqueName: "kubernetes.io/projected/a3e86dc5-0523-4852-af77-3145d44eaa15-kube-api-access-76fn6") pod "busybox-58667487b6-8shfg" (UID: "a3e86dc5-0523-4852-af77-3145d44eaa15") : object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:10.068514    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: I0210 12:21:59.790385    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd998e6ebeb39ea743489a0e4d48c282d6e8da289cd8341c4c5099d9836e6f73"
	I0210 12:23:10.068514    5644 command_runner.go:130] > Feb 10 12:22:00 multinode-032400 kubelet[1648]: I0210 12:22:00.168710    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e5b54589cf0f1effd7254987c6ce12359f402596b5494fa5c8f0bf296c219b89"
	I0210 12:23:10.068514    5644 command_runner.go:130] > Feb 10 12:22:00 multinode-032400 kubelet[1648]: I0210 12:22:00.246436    1648 kubelet.go:3183] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-032400" podUID="7a35472d-d7c0-4c7d-a5b1-e094370af1c2"
	I0210 12:23:10.068514    5644 command_runner.go:130] > Feb 10 12:22:00 multinode-032400 kubelet[1648]: I0210 12:22:00.246743    1648 kubelet.go:3183] "Trying to delete pod" pod="kube-system/etcd-multinode-032400" podUID="34db146c-e09d-4959-8325-d4453dfcfd62"
	I0210 12:23:10.068514    5644 command_runner.go:130] > Feb 10 12:22:00 multinode-032400 kubelet[1648]: E0210 12:22:00.528505    1648 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0210 12:23:10.068514    5644 command_runner.go:130] > Feb 10 12:22:00 multinode-032400 kubelet[1648]: E0210 12:22:00.528588    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e45a37bf-e7da-4129-bb7e-8be7dbe93e09-config-volume podName:e45a37bf-e7da-4129-bb7e-8be7dbe93e09 nodeName:}" failed. No retries permitted until 2025-02-10 12:22:02.528571773 +0000 UTC m=+9.810577314 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e45a37bf-e7da-4129-bb7e-8be7dbe93e09-config-volume") pod "coredns-668d6bf9bc-w8rr9" (UID: "e45a37bf-e7da-4129-bb7e-8be7dbe93e09") : object "kube-system"/"coredns" not registered
	I0210 12:23:10.068514    5644 command_runner.go:130] > Feb 10 12:22:00 multinode-032400 kubelet[1648]: E0210 12:22:00.629777    1648 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:10.068514    5644 command_runner.go:130] > Feb 10 12:22:00 multinode-032400 kubelet[1648]: E0210 12:22:00.629830    1648 projected.go:194] Error preparing data for projected volume kube-api-access-76fn6 for pod default/busybox-58667487b6-8shfg: object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:10.068514    5644 command_runner.go:130] > Feb 10 12:22:00 multinode-032400 kubelet[1648]: E0210 12:22:00.629883    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a3e86dc5-0523-4852-af77-3145d44eaa15-kube-api-access-76fn6 podName:a3e86dc5-0523-4852-af77-3145d44eaa15 nodeName:}" failed. No retries permitted until 2025-02-10 12:22:02.629867049 +0000 UTC m=+9.911872690 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-76fn6" (UniqueName: "kubernetes.io/projected/a3e86dc5-0523-4852-af77-3145d44eaa15-kube-api-access-76fn6") pod "busybox-58667487b6-8shfg" (UID: "a3e86dc5-0523-4852-af77-3145d44eaa15") : object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:10.068514    5644 command_runner.go:130] > Feb 10 12:22:00 multinode-032400 kubelet[1648]: E0210 12:22:00.983374    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:10.068514    5644 command_runner.go:130] > Feb 10 12:22:00 multinode-032400 kubelet[1648]: E0210 12:22:00.983940    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:10.068514    5644 command_runner.go:130] > Feb 10 12:22:02 multinode-032400 kubelet[1648]: E0210 12:22:02.548061    1648 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0210 12:23:10.068514    5644 command_runner.go:130] > Feb 10 12:22:02 multinode-032400 kubelet[1648]: E0210 12:22:02.548594    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e45a37bf-e7da-4129-bb7e-8be7dbe93e09-config-volume podName:e45a37bf-e7da-4129-bb7e-8be7dbe93e09 nodeName:}" failed. No retries permitted until 2025-02-10 12:22:06.548573918 +0000 UTC m=+13.830579559 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e45a37bf-e7da-4129-bb7e-8be7dbe93e09-config-volume") pod "coredns-668d6bf9bc-w8rr9" (UID: "e45a37bf-e7da-4129-bb7e-8be7dbe93e09") : object "kube-system"/"coredns" not registered
	I0210 12:23:10.068514    5644 command_runner.go:130] > Feb 10 12:22:02 multinode-032400 kubelet[1648]: E0210 12:22:02.648988    1648 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:10.068514    5644 command_runner.go:130] > Feb 10 12:22:02 multinode-032400 kubelet[1648]: E0210 12:22:02.649225    1648 projected.go:194] Error preparing data for projected volume kube-api-access-76fn6 for pod default/busybox-58667487b6-8shfg: object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:10.069046    5644 command_runner.go:130] > Feb 10 12:22:02 multinode-032400 kubelet[1648]: E0210 12:22:02.649292    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a3e86dc5-0523-4852-af77-3145d44eaa15-kube-api-access-76fn6 podName:a3e86dc5-0523-4852-af77-3145d44eaa15 nodeName:}" failed. No retries permitted until 2025-02-10 12:22:06.649274266 +0000 UTC m=+13.931279907 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-76fn6" (UniqueName: "kubernetes.io/projected/a3e86dc5-0523-4852-af77-3145d44eaa15-kube-api-access-76fn6") pod "busybox-58667487b6-8shfg" (UID: "a3e86dc5-0523-4852-af77-3145d44eaa15") : object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:10.069046    5644 command_runner.go:130] > Feb 10 12:22:02 multinode-032400 kubelet[1648]: E0210 12:22:02.982600    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:10.069143    5644 command_runner.go:130] > Feb 10 12:22:02 multinode-032400 kubelet[1648]: E0210 12:22:02.985279    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:10.069143    5644 command_runner.go:130] > Feb 10 12:22:03 multinode-032400 kubelet[1648]: E0210 12:22:03.006185    1648 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0210 12:23:10.069208    5644 command_runner.go:130] > Feb 10 12:22:04 multinode-032400 kubelet[1648]: E0210 12:22:04.982807    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:10.069208    5644 command_runner.go:130] > Feb 10 12:22:04 multinode-032400 kubelet[1648]: E0210 12:22:04.983881    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:10.069208    5644 command_runner.go:130] > Feb 10 12:22:06 multinode-032400 kubelet[1648]: E0210 12:22:06.583411    1648 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0210 12:23:10.069269    5644 command_runner.go:130] > Feb 10 12:22:06 multinode-032400 kubelet[1648]: E0210 12:22:06.583571    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e45a37bf-e7da-4129-bb7e-8be7dbe93e09-config-volume podName:e45a37bf-e7da-4129-bb7e-8be7dbe93e09 nodeName:}" failed. No retries permitted until 2025-02-10 12:22:14.583553968 +0000 UTC m=+21.865559509 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e45a37bf-e7da-4129-bb7e-8be7dbe93e09-config-volume") pod "coredns-668d6bf9bc-w8rr9" (UID: "e45a37bf-e7da-4129-bb7e-8be7dbe93e09") : object "kube-system"/"coredns" not registered
	I0210 12:23:10.069339    5644 command_runner.go:130] > Feb 10 12:22:06 multinode-032400 kubelet[1648]: E0210 12:22:06.684079    1648 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:10.069339    5644 command_runner.go:130] > Feb 10 12:22:06 multinode-032400 kubelet[1648]: E0210 12:22:06.684426    1648 projected.go:194] Error preparing data for projected volume kube-api-access-76fn6 for pod default/busybox-58667487b6-8shfg: object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:10.069400    5644 command_runner.go:130] > Feb 10 12:22:06 multinode-032400 kubelet[1648]: E0210 12:22:06.684521    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a3e86dc5-0523-4852-af77-3145d44eaa15-kube-api-access-76fn6 podName:a3e86dc5-0523-4852-af77-3145d44eaa15 nodeName:}" failed. No retries permitted until 2025-02-10 12:22:14.684501328 +0000 UTC m=+21.966506969 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-76fn6" (UniqueName: "kubernetes.io/projected/a3e86dc5-0523-4852-af77-3145d44eaa15-kube-api-access-76fn6") pod "busybox-58667487b6-8shfg" (UID: "a3e86dc5-0523-4852-af77-3145d44eaa15") : object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:10.069466    5644 command_runner.go:130] > Feb 10 12:22:06 multinode-032400 kubelet[1648]: E0210 12:22:06.982543    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:10.069466    5644 command_runner.go:130] > Feb 10 12:22:06 multinode-032400 kubelet[1648]: E0210 12:22:06.982901    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:10.069533    5644 command_runner.go:130] > Feb 10 12:22:08 multinode-032400 kubelet[1648]: E0210 12:22:08.007915    1648 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0210 12:23:10.069533    5644 command_runner.go:130] > Feb 10 12:22:08 multinode-032400 kubelet[1648]: E0210 12:22:08.983481    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:10.069600    5644 command_runner.go:130] > Feb 10 12:22:08 multinode-032400 kubelet[1648]: E0210 12:22:08.987585    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:10.069600    5644 command_runner.go:130] > Feb 10 12:22:10 multinode-032400 kubelet[1648]: E0210 12:22:10.981696    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:10.069675    5644 command_runner.go:130] > Feb 10 12:22:10 multinode-032400 kubelet[1648]: E0210 12:22:10.982314    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:10.069696    5644 command_runner.go:130] > Feb 10 12:22:12 multinode-032400 kubelet[1648]: E0210 12:22:12.982627    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:10.069696    5644 command_runner.go:130] > Feb 10 12:22:12 multinode-032400 kubelet[1648]: E0210 12:22:12.983351    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:10.069765    5644 command_runner.go:130] > Feb 10 12:22:13 multinode-032400 kubelet[1648]: E0210 12:22:13.008828    1648 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0210 12:23:10.069765    5644 command_runner.go:130] > Feb 10 12:22:14 multinode-032400 kubelet[1648]: E0210 12:22:14.650628    1648 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0210 12:23:10.069836    5644 command_runner.go:130] > Feb 10 12:22:14 multinode-032400 kubelet[1648]: E0210 12:22:14.650742    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e45a37bf-e7da-4129-bb7e-8be7dbe93e09-config-volume podName:e45a37bf-e7da-4129-bb7e-8be7dbe93e09 nodeName:}" failed. No retries permitted until 2025-02-10 12:22:30.650723092 +0000 UTC m=+37.932728733 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e45a37bf-e7da-4129-bb7e-8be7dbe93e09-config-volume") pod "coredns-668d6bf9bc-w8rr9" (UID: "e45a37bf-e7da-4129-bb7e-8be7dbe93e09") : object "kube-system"/"coredns" not registered
	I0210 12:23:10.069836    5644 command_runner.go:130] > Feb 10 12:22:14 multinode-032400 kubelet[1648]: E0210 12:22:14.751367    1648 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:10.069836    5644 command_runner.go:130] > Feb 10 12:22:14 multinode-032400 kubelet[1648]: E0210 12:22:14.751417    1648 projected.go:194] Error preparing data for projected volume kube-api-access-76fn6 for pod default/busybox-58667487b6-8shfg: object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:10.069836    5644 command_runner.go:130] > Feb 10 12:22:14 multinode-032400 kubelet[1648]: E0210 12:22:14.751468    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a3e86dc5-0523-4852-af77-3145d44eaa15-kube-api-access-76fn6 podName:a3e86dc5-0523-4852-af77-3145d44eaa15 nodeName:}" failed. No retries permitted until 2025-02-10 12:22:30.751452188 +0000 UTC m=+38.033457729 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-76fn6" (UniqueName: "kubernetes.io/projected/a3e86dc5-0523-4852-af77-3145d44eaa15-kube-api-access-76fn6") pod "busybox-58667487b6-8shfg" (UID: "a3e86dc5-0523-4852-af77-3145d44eaa15") : object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:10.069836    5644 command_runner.go:130] > Feb 10 12:22:14 multinode-032400 kubelet[1648]: E0210 12:22:14.983588    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:10.069836    5644 command_runner.go:130] > Feb 10 12:22:14 multinode-032400 kubelet[1648]: E0210 12:22:14.984681    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:10.069836    5644 command_runner.go:130] > Feb 10 12:22:16 multinode-032400 kubelet[1648]: E0210 12:22:16.982654    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:10.069836    5644 command_runner.go:130] > Feb 10 12:22:16 multinode-032400 kubelet[1648]: E0210 12:22:16.983601    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:10.069836    5644 command_runner.go:130] > Feb 10 12:22:18 multinode-032400 kubelet[1648]: E0210 12:22:18.010464    1648 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0210 12:23:10.069836    5644 command_runner.go:130] > Feb 10 12:22:18 multinode-032400 kubelet[1648]: E0210 12:22:18.983251    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:10.069836    5644 command_runner.go:130] > Feb 10 12:22:18 multinode-032400 kubelet[1648]: E0210 12:22:18.983452    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:10.069836    5644 command_runner.go:130] > Feb 10 12:22:20 multinode-032400 kubelet[1648]: E0210 12:22:20.982442    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:10.069836    5644 command_runner.go:130] > Feb 10 12:22:20 multinode-032400 kubelet[1648]: E0210 12:22:20.982861    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:10.069836    5644 command_runner.go:130] > Feb 10 12:22:22 multinode-032400 kubelet[1648]: E0210 12:22:22.981966    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:10.069836    5644 command_runner.go:130] > Feb 10 12:22:22 multinode-032400 kubelet[1648]: E0210 12:22:22.982555    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:10.069836    5644 command_runner.go:130] > Feb 10 12:22:23 multinode-032400 kubelet[1648]: E0210 12:22:23.011880    1648 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0210 12:23:10.070372    5644 command_runner.go:130] > Feb 10 12:22:24 multinode-032400 kubelet[1648]: E0210 12:22:24.982707    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:10.070372    5644 command_runner.go:130] > Feb 10 12:22:24 multinode-032400 kubelet[1648]: E0210 12:22:24.983675    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:10.070439    5644 command_runner.go:130] > Feb 10 12:22:26 multinode-032400 kubelet[1648]: E0210 12:22:26.983236    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:10.070473    5644 command_runner.go:130] > Feb 10 12:22:26 multinode-032400 kubelet[1648]: E0210 12:22:26.984691    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:10.070504    5644 command_runner.go:130] > Feb 10 12:22:28 multinode-032400 kubelet[1648]: E0210 12:22:28.013741    1648 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0210 12:23:10.070504    5644 command_runner.go:130] > Feb 10 12:22:28 multinode-032400 kubelet[1648]: E0210 12:22:28.989948    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:10.070504    5644 command_runner.go:130] > Feb 10 12:22:28 multinode-032400 kubelet[1648]: E0210 12:22:28.990610    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:10.070504    5644 command_runner.go:130] > Feb 10 12:22:30 multinode-032400 kubelet[1648]: E0210 12:22:30.698791    1648 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0210 12:23:10.070504    5644 command_runner.go:130] > Feb 10 12:22:30 multinode-032400 kubelet[1648]: E0210 12:22:30.698861    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e45a37bf-e7da-4129-bb7e-8be7dbe93e09-config-volume podName:e45a37bf-e7da-4129-bb7e-8be7dbe93e09 nodeName:}" failed. No retries permitted until 2025-02-10 12:23:02.698844474 +0000 UTC m=+69.980850115 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e45a37bf-e7da-4129-bb7e-8be7dbe93e09-config-volume") pod "coredns-668d6bf9bc-w8rr9" (UID: "e45a37bf-e7da-4129-bb7e-8be7dbe93e09") : object "kube-system"/"coredns" not registered
	I0210 12:23:10.070504    5644 command_runner.go:130] > Feb 10 12:22:30 multinode-032400 kubelet[1648]: E0210 12:22:30.799260    1648 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:10.070504    5644 command_runner.go:130] > Feb 10 12:22:30 multinode-032400 kubelet[1648]: E0210 12:22:30.799302    1648 projected.go:194] Error preparing data for projected volume kube-api-access-76fn6 for pod default/busybox-58667487b6-8shfg: object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:10.070504    5644 command_runner.go:130] > Feb 10 12:22:30 multinode-032400 kubelet[1648]: E0210 12:22:30.799372    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a3e86dc5-0523-4852-af77-3145d44eaa15-kube-api-access-76fn6 podName:a3e86dc5-0523-4852-af77-3145d44eaa15 nodeName:}" failed. No retries permitted until 2025-02-10 12:23:02.799354561 +0000 UTC m=+70.081360102 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-76fn6" (UniqueName: "kubernetes.io/projected/a3e86dc5-0523-4852-af77-3145d44eaa15-kube-api-access-76fn6") pod "busybox-58667487b6-8shfg" (UID: "a3e86dc5-0523-4852-af77-3145d44eaa15") : object "default"/"kube-root-ca.crt" not registered
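Note the durationBeforeRetry values in the failed MountVolume.SetUp operations above: 500ms, 1s, 2s, 4s, 8s, 16s, and now 32s. The volume reconciler doubles the delay after every failure, a capped exponential backoff. A minimal sketch of that schedule; the two-minute cap below is an assumption for illustration, since the captured log only shows the delay doubling up to 32s:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Reproduce the retry schedule visible above:
        // 500ms, 1s, 2s, 4s, 8s, 16s, 32s.
        delay, maxDelay := 500*time.Millisecond, 2*time.Minute // cap assumed
        for i := 1; i <= 7; i++ {
            fmt.Printf("retry %d scheduled after %v\n", i, delay)
            delay *= 2
            if delay > maxDelay {
                delay = maxDelay
            }
        }
    }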
	I0210 12:23:10.070504    5644 command_runner.go:130] > Feb 10 12:22:30 multinode-032400 kubelet[1648]: E0210 12:22:30.983005    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:10.070504    5644 command_runner.go:130] > Feb 10 12:22:30 multinode-032400 kubelet[1648]: E0210 12:22:30.983695    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:10.070504    5644 command_runner.go:130] > Feb 10 12:22:31 multinode-032400 kubelet[1648]: I0210 12:22:31.703771    1648 scope.go:117] "RemoveContainer" containerID="182c8395f5e1754689bcf73e94e561717c684af55894a2bd4cbd9d5e8d3dff12"
	I0210 12:23:10.070504    5644 command_runner.go:130] > Feb 10 12:22:31 multinode-032400 kubelet[1648]: I0210 12:22:31.704207    1648 scope.go:117] "RemoveContainer" containerID="e57ea4c7f300b864e801bda292b196e3a270d6bd3eb9cfac6bf5f66a858f9f7c"
	I0210 12:23:10.070504    5644 command_runner.go:130] > Feb 10 12:22:31 multinode-032400 kubelet[1648]: E0210 12:22:31.704351    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(c5a7f602-d41a-4d9c-8fb3-5cf1bb41aca0)\"" pod="kube-system/storage-provisioner" podUID="c5a7f602-d41a-4d9c-8fb3-5cf1bb41aca0"
	I0210 12:23:10.070504    5644 command_runner.go:130] > Feb 10 12:22:32 multinode-032400 kubelet[1648]: E0210 12:22:32.981673    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:10.070504    5644 command_runner.go:130] > Feb 10 12:22:32 multinode-032400 kubelet[1648]: E0210 12:22:32.982991    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:10.070504    5644 command_runner.go:130] > Feb 10 12:22:33 multinode-032400 kubelet[1648]: E0210 12:22:33.015385    1648 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0210 12:23:10.071021    5644 command_runner.go:130] > Feb 10 12:22:34 multinode-032400 kubelet[1648]: E0210 12:22:34.989854    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:10.071021    5644 command_runner.go:130] > Feb 10 12:22:34 multinode-032400 kubelet[1648]: E0210 12:22:34.989994    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:10.071102    5644 command_runner.go:130] > Feb 10 12:22:36 multinode-032400 kubelet[1648]: E0210 12:22:36.982057    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:10.071132    5644 command_runner.go:130] > Feb 10 12:22:36 multinode-032400 kubelet[1648]: E0210 12:22:36.982423    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:10.071132    5644 command_runner.go:130] > Feb 10 12:22:38 multinode-032400 kubelet[1648]: E0210 12:22:38.016614    1648 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0210 12:23:10.071132    5644 command_runner.go:130] > Feb 10 12:22:38 multinode-032400 kubelet[1648]: E0210 12:22:38.982466    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:10.071132    5644 command_runner.go:130] > Feb 10 12:22:38 multinode-032400 kubelet[1648]: E0210 12:22:38.982828    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:10.071132    5644 command_runner.go:130] > Feb 10 12:22:40 multinode-032400 kubelet[1648]: E0210 12:22:40.981790    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:10.071132    5644 command_runner.go:130] > Feb 10 12:22:40 multinode-032400 kubelet[1648]: E0210 12:22:40.986032    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:10.071132    5644 command_runner.go:130] > Feb 10 12:22:42 multinode-032400 kubelet[1648]: E0210 12:22:42.983120    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:10.071132    5644 command_runner.go:130] > Feb 10 12:22:42 multinode-032400 kubelet[1648]: E0210 12:22:42.983608    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:10.071132    5644 command_runner.go:130] > Feb 10 12:22:43 multinode-032400 kubelet[1648]: E0210 12:22:43.017646    1648 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0210 12:23:10.071132    5644 command_runner.go:130] > Feb 10 12:22:43 multinode-032400 kubelet[1648]: I0210 12:22:43.982665    1648 scope.go:117] "RemoveContainer" containerID="e57ea4c7f300b864e801bda292b196e3a270d6bd3eb9cfac6bf5f66a858f9f7c"
	I0210 12:23:10.071132    5644 command_runner.go:130] > Feb 10 12:22:44 multinode-032400 kubelet[1648]: E0210 12:22:44.981714    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:10.071132    5644 command_runner.go:130] > Feb 10 12:22:44 multinode-032400 kubelet[1648]: E0210 12:22:44.982071    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:10.071132    5644 command_runner.go:130] > Feb 10 12:22:46 multinode-032400 kubelet[1648]: E0210 12:22:46.982545    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:10.071132    5644 command_runner.go:130] > Feb 10 12:22:46 multinode-032400 kubelet[1648]: E0210 12:22:46.982810    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:10.071132    5644 command_runner.go:130] > Feb 10 12:22:52 multinode-032400 kubelet[1648]: I0210 12:22:52.997714    1648 scope.go:117] "RemoveContainer" containerID="3ae31c3c37c9f6044b62cd6f5c446e15340e314275faee0b1de4d66baa716012"
	I0210 12:23:10.071132    5644 command_runner.go:130] > Feb 10 12:22:53 multinode-032400 kubelet[1648]: E0210 12:22:53.021472    1648 iptables.go:577] "Could not set up iptables canary" err=<
	I0210 12:23:10.071132    5644 command_runner.go:130] > Feb 10 12:22:53 multinode-032400 kubelet[1648]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0210 12:23:10.071132    5644 command_runner.go:130] > Feb 10 12:22:53 multinode-032400 kubelet[1648]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0210 12:23:10.071132    5644 command_runner.go:130] > Feb 10 12:22:53 multinode-032400 kubelet[1648]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0210 12:23:10.071656    5644 command_runner.go:130] > Feb 10 12:22:53 multinode-032400 kubelet[1648]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0210 12:23:10.071656    5644 command_runner.go:130] > Feb 10 12:22:53 multinode-032400 kubelet[1648]: I0210 12:22:53.042255    1648 scope.go:117] "RemoveContainer" containerID="9f1c4e9b3353b37f1f4c40563f1e312a49acd43d398e81cab61930d654fbb0d9"
	I0210 12:23:10.071656    5644 command_runner.go:130] > Feb 10 12:23:03 multinode-032400 kubelet[1648]: I0210 12:23:03.312056    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e58006549b603113870cd02660dd94559d10ecb0913a1bdd2c81910c3d60a706"
	I0210 12:23:10.119694    5644 logs.go:123] Gathering logs for kindnet [efc2d4164d81] ...
	I0210 12:23:10.119694    5644 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efc2d4164d81"
	I0210 12:23:10.155072    5644 command_runner.go:130] ! I0210 12:22:00.982083       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0210 12:23:10.155155    5644 command_runner.go:130] ! I0210 12:22:00.988632       1 main.go:139] hostIP = 172.29.129.181
	I0210 12:23:10.155155    5644 command_runner.go:130] ! podIP = 172.29.129.181
	I0210 12:23:10.155155    5644 command_runner.go:130] ! I0210 12:22:00.988765       1 main.go:148] setting mtu 1500 for CNI 
	I0210 12:23:10.155155    5644 command_runner.go:130] ! I0210 12:22:00.988782       1 main.go:178] kindnetd IP family: "ipv4"
	I0210 12:23:10.155155    5644 command_runner.go:130] ! I0210 12:22:00.988794       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I0210 12:23:10.155155    5644 command_runner.go:130] ! I0210 12:22:01.772362       1 main.go:239] Error creating network policy controller: could not run nftables command: /dev/stdin:1:1-40: Error: Could not process rule: Operation not supported
	I0210 12:23:10.155155    5644 command_runner.go:130] ! add table inet kindnet-network-policies
	I0210 12:23:10.155244    5644 command_runner.go:130] ! ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
	I0210 12:23:10.155244    5644 command_runner.go:130] ! , skipping network policies
	I0210 12:23:10.155244    5644 command_runner.go:130] ! W0210 12:22:31.784106       1 reflector.go:561] pkg/mod/k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0210 12:23:10.155301    5644 command_runner.go:130] ! E0210 12:22:31.784373       1 reflector.go:158] "Unhandled Error" err="pkg/mod/k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError"
	I0210 12:23:10.155301    5644 command_runner.go:130] ! I0210 12:22:41.780982       1 main.go:297] Handling node with IPs: map[172.29.129.181:{}]
	I0210 12:23:10.155301    5644 command_runner.go:130] ! I0210 12:22:41.781097       1 main.go:301] handling current node
	I0210 12:23:10.155301    5644 command_runner.go:130] ! I0210 12:22:41.782315       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.155301    5644 command_runner.go:130] ! I0210 12:22:41.782348       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.155377    5644 command_runner.go:130] ! I0210 12:22:41.782670       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 172.29.143.51 Flags: [] Table: 0 Realm: 0} 
	I0210 12:23:10.155377    5644 command_runner.go:130] ! I0210 12:22:41.783201       1 main.go:297] Handling node with IPs: map[172.29.129.10:{}]
	I0210 12:23:10.155377    5644 command_runner.go:130] ! I0210 12:22:41.783373       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.4.0/24] 
	I0210 12:23:10.155452    5644 command_runner.go:130] ! I0210 12:22:41.784331       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.4.0/24 Src: <nil> Gw: 172.29.129.10 Flags: [] Table: 0 Realm: 0} 
	I0210 12:23:10.155452    5644 command_runner.go:130] ! I0210 12:22:51.774234       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.155509    5644 command_runner.go:130] ! I0210 12:22:51.774354       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.155509    5644 command_runner.go:130] ! I0210 12:22:51.774813       1 main.go:297] Handling node with IPs: map[172.29.129.10:{}]
	I0210 12:23:10.155509    5644 command_runner.go:130] ! I0210 12:22:51.774839       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.4.0/24] 
	I0210 12:23:10.155573    5644 command_runner.go:130] ! I0210 12:22:51.775059       1 main.go:297] Handling node with IPs: map[172.29.129.181:{}]
	I0210 12:23:10.155573    5644 command_runner.go:130] ! I0210 12:22:51.775140       1 main.go:301] handling current node
	I0210 12:23:10.155573    5644 command_runner.go:130] ! I0210 12:23:01.774212       1 main.go:297] Handling node with IPs: map[172.29.129.181:{}]
	I0210 12:23:10.155573    5644 command_runner.go:130] ! I0210 12:23:01.774322       1 main.go:301] handling current node
	I0210 12:23:10.155634    5644 command_runner.go:130] ! I0210 12:23:01.774342       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.155634    5644 command_runner.go:130] ! I0210 12:23:01.774349       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.155634    5644 command_runner.go:130] ! I0210 12:23:01.774804       1 main.go:297] Handling node with IPs: map[172.29.129.10:{}]
	I0210 12:23:10.155634    5644 command_runner.go:130] ! I0210 12:23:01.774919       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.4.0/24] 
	I0210 12:23:10.157636    5644 logs.go:123] Gathering logs for kindnet [4439940fa5f4] ...
	I0210 12:23:10.157636    5644 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4439940fa5f4"
	I0210 12:23:10.187260    5644 command_runner.go:130] ! I0210 12:08:30.445716       1 main.go:301] handling current node
	I0210 12:23:10.187613    5644 command_runner.go:130] ! I0210 12:08:30.445736       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.187613    5644 command_runner.go:130] ! I0210 12:08:30.445743       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.187613    5644 command_runner.go:130] ! I0210 12:08:30.446276       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:10.187613    5644 command_runner.go:130] ! I0210 12:08:30.446402       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:10.187685    5644 command_runner.go:130] ! I0210 12:08:40.446484       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:10.187685    5644 command_runner.go:130] ! I0210 12:08:40.446649       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:10.187685    5644 command_runner.go:130] ! I0210 12:08:40.447051       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.187685    5644 command_runner.go:130] ! I0210 12:08:40.447089       1 main.go:301] handling current node
	I0210 12:23:10.187685    5644 command_runner.go:130] ! I0210 12:08:40.447173       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.187685    5644 command_runner.go:130] ! I0210 12:08:40.447202       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.187685    5644 command_runner.go:130] ! I0210 12:08:50.445921       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.187685    5644 command_runner.go:130] ! I0210 12:08:50.445988       1 main.go:301] handling current node
	I0210 12:23:10.187685    5644 command_runner.go:130] ! I0210 12:08:50.446008       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.187685    5644 command_runner.go:130] ! I0210 12:08:50.446015       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.187685    5644 command_runner.go:130] ! I0210 12:08:50.446206       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:10.187685    5644 command_runner.go:130] ! I0210 12:08:50.446217       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:10.187685    5644 command_runner.go:130] ! I0210 12:09:00.446480       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.187685    5644 command_runner.go:130] ! I0210 12:09:00.446617       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.187685    5644 command_runner.go:130] ! I0210 12:09:00.446931       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:10.187685    5644 command_runner.go:130] ! I0210 12:09:00.446947       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:10.187685    5644 command_runner.go:130] ! I0210 12:09:00.447078       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.187685    5644 command_runner.go:130] ! I0210 12:09:00.447087       1 main.go:301] handling current node
	I0210 12:23:10.187685    5644 command_runner.go:130] ! I0210 12:09:10.445597       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.187685    5644 command_runner.go:130] ! I0210 12:09:10.445645       1 main.go:301] handling current node
	I0210 12:23:10.187685    5644 command_runner.go:130] ! I0210 12:09:10.445665       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.187685    5644 command_runner.go:130] ! I0210 12:09:10.445671       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.187685    5644 command_runner.go:130] ! I0210 12:09:10.446612       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:10.187685    5644 command_runner.go:130] ! I0210 12:09:10.447083       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:10.187685    5644 command_runner.go:130] ! I0210 12:09:20.451891       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.187685    5644 command_runner.go:130] ! I0210 12:09:20.451928       1 main.go:301] handling current node
	I0210 12:23:10.187685    5644 command_runner.go:130] ! I0210 12:09:20.452043       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.187685    5644 command_runner.go:130] ! I0210 12:09:20.452054       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.187685    5644 command_runner.go:130] ! I0210 12:09:20.452219       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:10.187685    5644 command_runner.go:130] ! I0210 12:09:20.452226       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:10.187685    5644 command_runner.go:130] ! I0210 12:09:30.445685       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.187685    5644 command_runner.go:130] ! I0210 12:09:30.445780       1 main.go:301] handling current node
	I0210 12:23:10.187685    5644 command_runner.go:130] ! I0210 12:09:30.445924       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.187685    5644 command_runner.go:130] ! I0210 12:09:30.445945       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.187685    5644 command_runner.go:130] ! I0210 12:09:30.446110       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:10.187685    5644 command_runner.go:130] ! I0210 12:09:30.446136       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:10.187685    5644 command_runner.go:130] ! I0210 12:09:40.446044       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.187685    5644 command_runner.go:130] ! I0210 12:09:40.446146       1 main.go:301] handling current node
	I0210 12:23:10.187685    5644 command_runner.go:130] ! I0210 12:09:40.446259       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.187685    5644 command_runner.go:130] ! I0210 12:09:40.446288       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.188224    5644 command_runner.go:130] ! I0210 12:09:40.446677       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:10.188224    5644 command_runner.go:130] ! I0210 12:09:40.446692       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:10.188224    5644 command_runner.go:130] ! I0210 12:09:50.449867       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.188224    5644 command_runner.go:130] ! I0210 12:09:50.449979       1 main.go:301] handling current node
	I0210 12:23:10.188224    5644 command_runner.go:130] ! I0210 12:09:50.450078       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.188224    5644 command_runner.go:130] ! I0210 12:09:50.450121       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.188325    5644 command_runner.go:130] ! I0210 12:09:50.450322       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:10.188325    5644 command_runner.go:130] ! I0210 12:09:50.450372       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:10.188325    5644 command_runner.go:130] ! I0210 12:10:00.446642       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:10.188325    5644 command_runner.go:130] ! I0210 12:10:00.446769       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:10.188325    5644 command_runner.go:130] ! I0210 12:10:00.447234       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.188405    5644 command_runner.go:130] ! I0210 12:10:00.447254       1 main.go:301] handling current node
	I0210 12:23:10.188405    5644 command_runner.go:130] ! I0210 12:10:00.447269       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.188433    5644 command_runner.go:130] ! I0210 12:10:00.447275       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.188433    5644 command_runner.go:130] ! I0210 12:10:10.445515       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.188433    5644 command_runner.go:130] ! I0210 12:10:10.445682       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.188433    5644 command_runner.go:130] ! I0210 12:10:10.446184       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:10.188433    5644 command_runner.go:130] ! I0210 12:10:10.446223       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:10.188433    5644 command_runner.go:130] ! I0210 12:10:10.446709       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.188433    5644 command_runner.go:130] ! I0210 12:10:10.447034       1 main.go:301] handling current node
	I0210 12:23:10.188433    5644 command_runner.go:130] ! I0210 12:10:20.446409       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.188580    5644 command_runner.go:130] ! I0210 12:10:20.446529       1 main.go:301] handling current node
	I0210 12:23:10.188580    5644 command_runner.go:130] ! I0210 12:10:20.446553       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.188580    5644 command_runner.go:130] ! I0210 12:10:20.446563       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.188580    5644 command_runner.go:130] ! I0210 12:10:20.446763       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:10.188640    5644 command_runner.go:130] ! I0210 12:10:20.446790       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:10.188640    5644 command_runner.go:130] ! I0210 12:10:30.446373       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.188640    5644 command_runner.go:130] ! I0210 12:10:30.446482       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.188695    5644 command_runner.go:130] ! I0210 12:10:30.446672       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:10.188695    5644 command_runner.go:130] ! I0210 12:10:30.446700       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:10.188695    5644 command_runner.go:130] ! I0210 12:10:30.446792       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.188695    5644 command_runner.go:130] ! I0210 12:10:30.447014       1 main.go:301] handling current node
	I0210 12:23:10.188779    5644 command_runner.go:130] ! I0210 12:10:40.454509       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.188779    5644 command_runner.go:130] ! I0210 12:10:40.454636       1 main.go:301] handling current node
	I0210 12:23:10.188779    5644 command_runner.go:130] ! I0210 12:10:40.454674       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.188814    5644 command_runner.go:130] ! I0210 12:10:40.454863       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.188814    5644 command_runner.go:130] ! I0210 12:10:40.455160       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:10.188850    5644 command_runner.go:130] ! I0210 12:10:40.455261       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:10.188850    5644 command_runner.go:130] ! I0210 12:10:50.449160       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.188850    5644 command_runner.go:130] ! I0210 12:10:50.449355       1 main.go:301] handling current node
	I0210 12:23:10.188907    5644 command_runner.go:130] ! I0210 12:10:50.449395       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.188907    5644 command_runner.go:130] ! I0210 12:10:50.449538       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.188907    5644 command_runner.go:130] ! I0210 12:10:50.450354       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:10.188946    5644 command_runner.go:130] ! I0210 12:10:50.450448       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:10.188946    5644 command_runner.go:130] ! I0210 12:11:00.445904       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:10.188946    5644 command_runner.go:130] ! I0210 12:11:00.446062       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:10.189002    5644 command_runner.go:130] ! I0210 12:11:00.446602       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.189662    5644 command_runner.go:130] ! I0210 12:11:00.446700       1 main.go:301] handling current node
	I0210 12:23:10.189662    5644 command_runner.go:130] ! I0210 12:11:00.446821       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.189662    5644 command_runner.go:130] ! I0210 12:11:00.446837       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.189662    5644 command_runner.go:130] ! I0210 12:11:10.453595       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.190314    5644 command_runner.go:130] ! I0210 12:11:10.453634       1 main.go:301] handling current node
	I0210 12:23:10.190314    5644 command_runner.go:130] ! I0210 12:11:10.453652       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.190314    5644 command_runner.go:130] ! I0210 12:11:10.453660       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.190314    5644 command_runner.go:130] ! I0210 12:11:10.454135       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:10.190314    5644 command_runner.go:130] ! I0210 12:11:10.454241       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:10.190314    5644 command_runner.go:130] ! I0210 12:11:20.446533       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:10.190314    5644 command_runner.go:130] ! I0210 12:11:20.446903       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:10.190314    5644 command_runner.go:130] ! I0210 12:11:20.447462       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.190314    5644 command_runner.go:130] ! I0210 12:11:20.447548       1 main.go:301] handling current node
	I0210 12:23:10.190423    5644 command_runner.go:130] ! I0210 12:11:20.447565       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.190423    5644 command_runner.go:130] ! I0210 12:11:20.447572       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.190423    5644 command_runner.go:130] ! I0210 12:11:30.445620       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.190423    5644 command_runner.go:130] ! I0210 12:11:30.445748       1 main.go:301] handling current node
	I0210 12:23:10.190423    5644 command_runner.go:130] ! I0210 12:11:30.445870       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.190423    5644 command_runner.go:130] ! I0210 12:11:30.445907       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.190423    5644 command_runner.go:130] ! I0210 12:11:30.446320       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:10.190521    5644 command_runner.go:130] ! I0210 12:11:30.446414       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:10.190521    5644 command_runner.go:130] ! I0210 12:11:40.446346       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.190521    5644 command_runner.go:130] ! I0210 12:11:40.446417       1 main.go:301] handling current node
	I0210 12:23:10.190521    5644 command_runner.go:130] ! I0210 12:11:40.446436       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.190521    5644 command_runner.go:130] ! I0210 12:11:40.446443       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.190521    5644 command_runner.go:130] ! I0210 12:11:40.446780       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:10.190609    5644 command_runner.go:130] ! I0210 12:11:40.446846       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:10.190609    5644 command_runner.go:130] ! I0210 12:11:50.447155       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:10.190609    5644 command_runner.go:130] ! I0210 12:11:50.447207       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:10.190609    5644 command_runner.go:130] ! I0210 12:11:50.447630       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.190609    5644 command_runner.go:130] ! I0210 12:11:50.447699       1 main.go:301] handling current node
	I0210 12:23:10.190675    5644 command_runner.go:130] ! I0210 12:11:50.447842       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.190704    5644 command_runner.go:130] ! I0210 12:11:50.447929       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.190704    5644 command_runner.go:130] ! I0210 12:12:00.449885       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.190704    5644 command_runner.go:130] ! I0210 12:12:00.450002       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.190704    5644 command_runner.go:130] ! I0210 12:12:00.450294       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:10.190704    5644 command_runner.go:130] ! I0210 12:12:00.450490       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:10.190787    5644 command_runner.go:130] ! I0210 12:12:00.450618       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.190815    5644 command_runner.go:130] ! I0210 12:12:00.450627       1 main.go:301] handling current node
	I0210 12:23:10.190815    5644 command_runner.go:130] ! I0210 12:12:10.449160       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.190815    5644 command_runner.go:130] ! I0210 12:12:10.449228       1 main.go:301] handling current node
	I0210 12:23:10.190815    5644 command_runner.go:130] ! I0210 12:12:10.449260       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.190815    5644 command_runner.go:130] ! I0210 12:12:10.449282       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.190815    5644 command_runner.go:130] ! I0210 12:12:10.449463       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:10.190815    5644 command_runner.go:130] ! I0210 12:12:10.449474       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:10.190815    5644 command_runner.go:130] ! I0210 12:12:20.447518       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.190815    5644 command_runner.go:130] ! I0210 12:12:20.447655       1 main.go:301] handling current node
	I0210 12:23:10.190815    5644 command_runner.go:130] ! I0210 12:12:20.447676       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.190815    5644 command_runner.go:130] ! I0210 12:12:20.447684       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.190815    5644 command_runner.go:130] ! I0210 12:12:20.448046       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:10.190815    5644 command_runner.go:130] ! I0210 12:12:20.448157       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:10.190815    5644 command_runner.go:130] ! I0210 12:12:30.446585       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.190815    5644 command_runner.go:130] ! I0210 12:12:30.446758       1 main.go:301] handling current node
	I0210 12:23:10.190815    5644 command_runner.go:130] ! I0210 12:12:30.446779       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.190815    5644 command_runner.go:130] ! I0210 12:12:30.446786       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.190815    5644 command_runner.go:130] ! I0210 12:12:30.447218       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:10.190815    5644 command_runner.go:130] ! I0210 12:12:30.447298       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:10.190815    5644 command_runner.go:130] ! I0210 12:12:40.445769       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.190815    5644 command_runner.go:130] ! I0210 12:12:40.445848       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.190815    5644 command_runner.go:130] ! I0210 12:12:40.446043       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:10.190815    5644 command_runner.go:130] ! I0210 12:12:40.446125       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:10.190815    5644 command_runner.go:130] ! I0210 12:12:40.446266       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.190815    5644 command_runner.go:130] ! I0210 12:12:40.446279       1 main.go:301] handling current node
	I0210 12:23:10.190815    5644 command_runner.go:130] ! I0210 12:12:50.446416       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.190815    5644 command_runner.go:130] ! I0210 12:12:50.446515       1 main.go:301] handling current node
	I0210 12:23:10.190815    5644 command_runner.go:130] ! I0210 12:12:50.446540       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.190815    5644 command_runner.go:130] ! I0210 12:12:50.446549       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.190815    5644 command_runner.go:130] ! I0210 12:12:50.447110       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:10.190815    5644 command_runner.go:130] ! I0210 12:12:50.447222       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:10.190815    5644 command_runner.go:130] ! I0210 12:13:00.445595       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.190815    5644 command_runner.go:130] ! I0210 12:13:00.445741       1 main.go:301] handling current node
	I0210 12:23:10.190815    5644 command_runner.go:130] ! I0210 12:13:00.445762       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.190815    5644 command_runner.go:130] ! I0210 12:13:00.445770       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.190815    5644 command_runner.go:130] ! I0210 12:13:00.446069       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:10.191348    5644 command_runner.go:130] ! I0210 12:13:00.446101       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:10.191348    5644 command_runner.go:130] ! I0210 12:13:10.454457       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.191348    5644 command_runner.go:130] ! I0210 12:13:10.454577       1 main.go:301] handling current node
	I0210 12:23:10.191393    5644 command_runner.go:130] ! I0210 12:13:10.454598       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.191393    5644 command_runner.go:130] ! I0210 12:13:10.454605       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.191393    5644 command_runner.go:130] ! I0210 12:13:10.455246       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:10.191393    5644 command_runner.go:130] ! I0210 12:13:10.455360       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:10.191439    5644 command_runner.go:130] ! I0210 12:13:20.446944       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.191439    5644 command_runner.go:130] ! I0210 12:13:20.447287       1 main.go:301] handling current node
	I0210 12:23:10.191439    5644 command_runner.go:130] ! I0210 12:13:20.447395       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.191439    5644 command_runner.go:130] ! I0210 12:13:20.447410       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.191505    5644 command_runner.go:130] ! I0210 12:13:20.447940       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:10.191505    5644 command_runner.go:130] ! I0210 12:13:20.448031       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:10.191505    5644 command_runner.go:130] ! I0210 12:13:30.446279       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.191505    5644 command_runner.go:130] ! I0210 12:13:30.446594       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.191505    5644 command_runner.go:130] ! I0210 12:13:30.446926       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:10.191505    5644 command_runner.go:130] ! I0210 12:13:30.447035       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:10.191597    5644 command_runner.go:130] ! I0210 12:13:30.447189       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.191597    5644 command_runner.go:130] ! I0210 12:13:30.447310       1 main.go:301] handling current node
	I0210 12:23:10.191597    5644 command_runner.go:130] ! I0210 12:13:40.446967       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.191597    5644 command_runner.go:130] ! I0210 12:13:40.447352       1 main.go:301] handling current node
	I0210 12:23:10.191661    5644 command_runner.go:130] ! I0210 12:13:40.447404       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.191661    5644 command_runner.go:130] ! I0210 12:13:40.447743       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.191725    5644 command_runner.go:130] ! I0210 12:13:40.448142       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:10.191725    5644 command_runner.go:130] ! I0210 12:13:40.448255       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:10.191725    5644 command_runner.go:130] ! I0210 12:13:50.446777       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.191725    5644 command_runner.go:130] ! I0210 12:13:50.446915       1 main.go:301] handling current node
	I0210 12:23:10.191790    5644 command_runner.go:130] ! I0210 12:13:50.446936       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.191790    5644 command_runner.go:130] ! I0210 12:13:50.447424       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.191790    5644 command_runner.go:130] ! I0210 12:13:50.447787       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:10.191897    5644 command_runner.go:130] ! I0210 12:13:50.447846       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:10.191897    5644 command_runner.go:130] ! I0210 12:14:00.446345       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.191897    5644 command_runner.go:130] ! I0210 12:14:00.446447       1 main.go:301] handling current node
	I0210 12:23:10.191897    5644 command_runner.go:130] ! I0210 12:14:00.446468       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.191897    5644 command_runner.go:130] ! I0210 12:14:00.446475       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.191897    5644 command_runner.go:130] ! I0210 12:14:00.447158       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:10.191897    5644 command_runner.go:130] ! I0210 12:14:00.447251       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:10.191897    5644 command_runner.go:130] ! I0210 12:14:10.454046       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.191897    5644 command_runner.go:130] ! I0210 12:14:10.454150       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.191897    5644 command_runner.go:130] ! I0210 12:14:10.454908       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:10.191897    5644 command_runner.go:130] ! I0210 12:14:10.454981       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:10.192466    5644 command_runner.go:130] ! I0210 12:14:10.455630       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.192466    5644 command_runner.go:130] ! I0210 12:14:10.455665       1 main.go:301] handling current node
	I0210 12:23:10.192515    5644 command_runner.go:130] ! I0210 12:14:20.447582       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.192515    5644 command_runner.go:130] ! I0210 12:14:20.447632       1 main.go:301] handling current node
	I0210 12:23:10.192515    5644 command_runner.go:130] ! I0210 12:14:20.447652       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.192552    5644 command_runner.go:130] ! I0210 12:14:20.447660       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.192590    5644 command_runner.go:130] ! I0210 12:14:20.447892       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:10.192590    5644 command_runner.go:130] ! I0210 12:14:20.447961       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:10.192626    5644 command_runner.go:130] ! I0210 12:14:30.445562       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.192626    5644 command_runner.go:130] ! I0210 12:14:30.445636       1 main.go:301] handling current node
	I0210 12:23:10.192626    5644 command_runner.go:130] ! I0210 12:14:30.445655       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.192665    5644 command_runner.go:130] ! I0210 12:14:30.445665       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.192665    5644 command_runner.go:130] ! I0210 12:14:30.446340       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:10.192712    5644 command_runner.go:130] ! I0210 12:14:30.446436       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:10.192712    5644 command_runner.go:130] ! I0210 12:14:40.445894       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.195508    5644 command_runner.go:130] ! I0210 12:14:40.445963       1 main.go:301] handling current node
	I0210 12:23:10.195508    5644 command_runner.go:130] ! I0210 12:14:40.446050       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.195508    5644 command_runner.go:130] ! I0210 12:14:40.446062       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.195508    5644 command_runner.go:130] ! I0210 12:14:40.446274       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:10.195508    5644 command_runner.go:130] ! I0210 12:14:40.446298       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:10.195508    5644 command_runner.go:130] ! I0210 12:14:50.446519       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.195508    5644 command_runner.go:130] ! I0210 12:14:50.446627       1 main.go:301] handling current node
	I0210 12:23:10.195508    5644 command_runner.go:130] ! I0210 12:14:50.446648       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.195508    5644 command_runner.go:130] ! I0210 12:14:50.446655       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.195508    5644 command_runner.go:130] ! I0210 12:14:50.447165       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:10.195508    5644 command_runner.go:130] ! I0210 12:14:50.447285       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:10.195508    5644 command_runner.go:130] ! I0210 12:15:00.452587       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.195508    5644 command_runner.go:130] ! I0210 12:15:00.452709       1 main.go:301] handling current node
	I0210 12:23:10.195508    5644 command_runner.go:130] ! I0210 12:15:00.452728       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.195508    5644 command_runner.go:130] ! I0210 12:15:00.452735       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.195508    5644 command_runner.go:130] ! I0210 12:15:00.452961       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:10.195508    5644 command_runner.go:130] ! I0210 12:15:00.452989       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:10.195508    5644 command_runner.go:130] ! I0210 12:15:10.453753       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.195508    5644 command_runner.go:130] ! I0210 12:15:10.453980       1 main.go:301] handling current node
	I0210 12:23:10.195508    5644 command_runner.go:130] ! I0210 12:15:10.455477       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.196055    5644 command_runner.go:130] ! I0210 12:15:10.455590       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.196055    5644 command_runner.go:130] ! I0210 12:15:10.456459       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:10.196055    5644 command_runner.go:130] ! I0210 12:15:10.456484       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:10.196102    5644 command_runner.go:130] ! I0210 12:15:20.445894       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.196102    5644 command_runner.go:130] ! I0210 12:15:20.446019       1 main.go:301] handling current node
	I0210 12:23:10.196102    5644 command_runner.go:130] ! I0210 12:15:20.446055       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.196102    5644 command_runner.go:130] ! I0210 12:15:20.446076       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.196102    5644 command_runner.go:130] ! I0210 12:15:20.446274       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:10.196102    5644 command_runner.go:130] ! I0210 12:15:20.446363       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:10.196102    5644 command_runner.go:130] ! I0210 12:15:30.446394       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.196102    5644 command_runner.go:130] ! I0210 12:15:30.446444       1 main.go:301] handling current node
	I0210 12:23:10.196102    5644 command_runner.go:130] ! I0210 12:15:30.446463       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.196102    5644 command_runner.go:130] ! I0210 12:15:30.446470       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.196102    5644 command_runner.go:130] ! I0210 12:15:30.446861       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:10.196102    5644 command_runner.go:130] ! I0210 12:15:30.446930       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:10.196102    5644 command_runner.go:130] ! I0210 12:15:40.453869       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.196102    5644 command_runner.go:130] ! I0210 12:15:40.454189       1 main.go:301] handling current node
	I0210 12:23:10.196102    5644 command_runner.go:130] ! I0210 12:15:40.454382       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.196102    5644 command_runner.go:130] ! I0210 12:15:40.454457       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.196102    5644 command_runner.go:130] ! I0210 12:15:40.454869       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:10.196102    5644 command_runner.go:130] ! I0210 12:15:40.454895       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:10.196102    5644 command_runner.go:130] ! I0210 12:15:50.446531       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.196102    5644 command_runner.go:130] ! I0210 12:15:50.446662       1 main.go:301] handling current node
	I0210 12:23:10.196102    5644 command_runner.go:130] ! I0210 12:15:50.446685       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.196102    5644 command_runner.go:130] ! I0210 12:15:50.446693       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.196102    5644 command_runner.go:130] ! I0210 12:15:50.447023       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:10.196102    5644 command_runner.go:130] ! I0210 12:15:50.447095       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:10.196102    5644 command_runner.go:130] ! I0210 12:16:00.446838       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.196102    5644 command_runner.go:130] ! I0210 12:16:00.447006       1 main.go:301] handling current node
	I0210 12:23:10.196102    5644 command_runner.go:130] ! I0210 12:16:00.447108       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.196102    5644 command_runner.go:130] ! I0210 12:16:00.447566       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.196102    5644 command_runner.go:130] ! I0210 12:16:00.448114       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:10.196102    5644 command_runner.go:130] ! I0210 12:16:00.448216       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:10.196102    5644 command_runner.go:130] ! I0210 12:16:10.445857       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.196102    5644 command_runner.go:130] ! I0210 12:16:10.445967       1 main.go:301] handling current node
	I0210 12:23:10.196102    5644 command_runner.go:130] ! I0210 12:16:10.445988       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.196102    5644 command_runner.go:130] ! I0210 12:16:10.445996       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.196102    5644 command_runner.go:130] ! I0210 12:16:10.446184       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:10.196102    5644 command_runner.go:130] ! I0210 12:16:10.446207       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:10.196102    5644 command_runner.go:130] ! I0210 12:16:20.453730       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.196102    5644 command_runner.go:130] ! I0210 12:16:20.453928       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.196102    5644 command_runner.go:130] ! I0210 12:16:20.454430       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:10.196102    5644 command_runner.go:130] ! I0210 12:16:20.454520       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:10.196102    5644 command_runner.go:130] ! I0210 12:16:20.454929       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.196102    5644 command_runner.go:130] ! I0210 12:16:20.454975       1 main.go:301] handling current node
	I0210 12:23:10.196102    5644 command_runner.go:130] ! I0210 12:16:30.445927       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.196102    5644 command_runner.go:130] ! I0210 12:16:30.446036       1 main.go:301] handling current node
	I0210 12:23:10.196102    5644 command_runner.go:130] ! I0210 12:16:30.446057       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.196102    5644 command_runner.go:130] ! I0210 12:16:30.446065       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.196102    5644 command_runner.go:130] ! I0210 12:16:30.446315       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:10.196628    5644 command_runner.go:130] ! I0210 12:16:30.446373       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:10.196628    5644 command_runner.go:130] ! I0210 12:16:40.446863       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:10.196628    5644 command_runner.go:130] ! I0210 12:16:40.446966       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:10.196628    5644 command_runner.go:130] ! I0210 12:16:40.447288       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.196628    5644 command_runner.go:130] ! I0210 12:16:40.447365       1 main.go:301] handling current node
	I0210 12:23:10.196710    5644 command_runner.go:130] ! I0210 12:16:40.447383       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.196710    5644 command_runner.go:130] ! I0210 12:16:40.447389       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.196736    5644 command_runner.go:130] ! I0210 12:16:50.447339       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.196759    5644 command_runner.go:130] ! I0210 12:16:50.447453       1 main.go:301] handling current node
	I0210 12:23:10.196759    5644 command_runner.go:130] ! I0210 12:16:50.447476       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.196759    5644 command_runner.go:130] ! I0210 12:16:50.447484       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.196759    5644 command_runner.go:130] ! I0210 12:16:50.448045       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:10.196759    5644 command_runner.go:130] ! I0210 12:16:50.448138       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:10.196759    5644 command_runner.go:130] ! I0210 12:17:00.447665       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.196759    5644 command_runner.go:130] ! I0210 12:17:00.447898       1 main.go:301] handling current node
	I0210 12:23:10.196759    5644 command_runner.go:130] ! I0210 12:17:00.447937       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.196759    5644 command_runner.go:130] ! I0210 12:17:00.448013       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.196759    5644 command_runner.go:130] ! I0210 12:17:00.448741       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:10.196759    5644 command_runner.go:130] ! I0210 12:17:00.448921       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:10.196759    5644 command_runner.go:130] ! I0210 12:17:10.453664       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.196759    5644 command_runner.go:130] ! I0210 12:17:10.453771       1 main.go:301] handling current node
	I0210 12:23:10.196759    5644 command_runner.go:130] ! I0210 12:17:10.453792       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.196759    5644 command_runner.go:130] ! I0210 12:17:10.453831       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.196759    5644 command_runner.go:130] ! I0210 12:17:10.454596       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:10.196759    5644 command_runner.go:130] ! I0210 12:17:10.454619       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:10.196759    5644 command_runner.go:130] ! I0210 12:17:20.453960       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.196759    5644 command_runner.go:130] ! I0210 12:17:20.454001       1 main.go:301] handling current node
	I0210 12:23:10.196759    5644 command_runner.go:130] ! I0210 12:17:20.454018       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.196759    5644 command_runner.go:130] ! I0210 12:17:20.454024       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.196759    5644 command_runner.go:130] ! I0210 12:17:20.454198       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:10.196759    5644 command_runner.go:130] ! I0210 12:17:20.454208       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:10.196759    5644 command_runner.go:130] ! I0210 12:17:30.445717       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.196759    5644 command_runner.go:130] ! I0210 12:17:30.445917       1 main.go:301] handling current node
	I0210 12:23:10.196759    5644 command_runner.go:130] ! I0210 12:17:30.445940       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.196759    5644 command_runner.go:130] ! I0210 12:17:30.445949       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.196759    5644 command_runner.go:130] ! I0210 12:17:40.452548       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.196759    5644 command_runner.go:130] ! I0210 12:17:40.452740       1 main.go:301] handling current node
	I0210 12:23:10.196759    5644 command_runner.go:130] ! I0210 12:17:40.452774       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.196759    5644 command_runner.go:130] ! I0210 12:17:40.452843       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.196759    5644 command_runner.go:130] ! I0210 12:17:40.453042       1 main.go:297] Handling node with IPs: map[172.29.129.10:{}]
	I0210 12:23:10.196759    5644 command_runner.go:130] ! I0210 12:17:40.453135       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.4.0/24] 
	I0210 12:23:10.196759    5644 command_runner.go:130] ! I0210 12:17:40.453247       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.4.0/24 Src: <nil> Gw: 172.29.129.10 Flags: [] Table: 0 Realm: 0} 
	I0210 12:23:10.196759    5644 command_runner.go:130] ! I0210 12:17:50.446275       1 main.go:297] Handling node with IPs: map[172.29.129.10:{}]
	I0210 12:23:10.196759    5644 command_runner.go:130] ! I0210 12:17:50.446319       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.4.0/24] 
	I0210 12:23:10.196759    5644 command_runner.go:130] ! I0210 12:17:50.447189       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.196759    5644 command_runner.go:130] ! I0210 12:17:50.447219       1 main.go:301] handling current node
	I0210 12:23:10.196759    5644 command_runner.go:130] ! I0210 12:17:50.447234       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.196759    5644 command_runner.go:130] ! I0210 12:17:50.447365       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.196759    5644 command_runner.go:130] ! I0210 12:18:00.449743       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.196759    5644 command_runner.go:130] ! I0210 12:18:00.449961       1 main.go:301] handling current node
	I0210 12:23:10.196759    5644 command_runner.go:130] ! I0210 12:18:00.449983       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.196759    5644 command_runner.go:130] ! I0210 12:18:00.449993       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.197290    5644 command_runner.go:130] ! I0210 12:18:00.450437       1 main.go:297] Handling node with IPs: map[172.29.129.10:{}]
	I0210 12:23:10.197290    5644 command_runner.go:130] ! I0210 12:18:00.450512       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.4.0/24] 
	I0210 12:23:10.197290    5644 command_runner.go:130] ! I0210 12:18:10.454513       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.197330    5644 command_runner.go:130] ! I0210 12:18:10.455074       1 main.go:301] handling current node
	I0210 12:23:10.197330    5644 command_runner.go:130] ! I0210 12:18:10.455189       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.197330    5644 command_runner.go:130] ! I0210 12:18:10.455203       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.197375    5644 command_runner.go:130] ! I0210 12:18:10.455514       1 main.go:297] Handling node with IPs: map[172.29.129.10:{}]
	I0210 12:23:10.197375    5644 command_runner.go:130] ! I0210 12:18:10.455628       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.4.0/24] 
	I0210 12:23:10.197397    5644 command_runner.go:130] ! I0210 12:18:20.446904       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.197397    5644 command_runner.go:130] ! I0210 12:18:20.446944       1 main.go:301] handling current node
	I0210 12:23:10.197440    5644 command_runner.go:130] ! I0210 12:18:20.446964       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.197440    5644 command_runner.go:130] ! I0210 12:18:20.446971       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.197440    5644 command_runner.go:130] ! I0210 12:18:20.447447       1 main.go:297] Handling node with IPs: map[172.29.129.10:{}]
	I0210 12:23:10.197440    5644 command_runner.go:130] ! I0210 12:18:20.447539       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.4.0/24] 
	I0210 12:23:10.197440    5644 command_runner.go:130] ! I0210 12:18:30.445669       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.197440    5644 command_runner.go:130] ! I0210 12:18:30.445724       1 main.go:301] handling current node
	I0210 12:23:10.197440    5644 command_runner.go:130] ! I0210 12:18:30.445744       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.197440    5644 command_runner.go:130] ! I0210 12:18:30.445752       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.197440    5644 command_runner.go:130] ! I0210 12:18:30.446236       1 main.go:297] Handling node with IPs: map[172.29.129.10:{}]
	I0210 12:23:10.197440    5644 command_runner.go:130] ! I0210 12:18:30.446332       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.4.0/24] 
	I0210 12:23:10.197440    5644 command_runner.go:130] ! I0210 12:18:40.449074       1 main.go:297] Handling node with IPs: map[172.29.129.10:{}]
	I0210 12:23:10.197440    5644 command_runner.go:130] ! I0210 12:18:40.449128       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.4.0/24] 
	I0210 12:23:10.197440    5644 command_runner.go:130] ! I0210 12:18:40.449535       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.197440    5644 command_runner.go:130] ! I0210 12:18:40.449551       1 main.go:301] handling current node
	I0210 12:23:10.197440    5644 command_runner.go:130] ! I0210 12:18:40.449565       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.197440    5644 command_runner.go:130] ! I0210 12:18:40.449570       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.197440    5644 command_runner.go:130] ! I0210 12:18:50.446047       1 main.go:297] Handling node with IPs: map[172.29.129.10:{}]
	I0210 12:23:10.197440    5644 command_runner.go:130] ! I0210 12:18:50.446175       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.4.0/24] 
	I0210 12:23:10.197440    5644 command_runner.go:130] ! I0210 12:18:50.446614       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.197440    5644 command_runner.go:130] ! I0210 12:18:50.446823       1 main.go:301] handling current node
	I0210 12:23:10.197440    5644 command_runner.go:130] ! I0210 12:18:50.446915       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.197440    5644 command_runner.go:130] ! I0210 12:18:50.447008       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.197440    5644 command_runner.go:130] ! I0210 12:19:00.448621       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.197440    5644 command_runner.go:130] ! I0210 12:19:00.448734       1 main.go:301] handling current node
	I0210 12:23:10.197440    5644 command_runner.go:130] ! I0210 12:19:00.448755       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.197440    5644 command_runner.go:130] ! I0210 12:19:00.448763       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.197440    5644 command_runner.go:130] ! I0210 12:19:00.449245       1 main.go:297] Handling node with IPs: map[172.29.129.10:{}]
	I0210 12:23:10.197440    5644 command_runner.go:130] ! I0210 12:19:00.449348       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.4.0/24] 
	I0210 12:23:10.197440    5644 command_runner.go:130] ! I0210 12:19:10.454264       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.197440    5644 command_runner.go:130] ! I0210 12:19:10.454382       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.197440    5644 command_runner.go:130] ! I0210 12:19:10.454633       1 main.go:297] Handling node with IPs: map[172.29.129.10:{}]
	I0210 12:23:10.197440    5644 command_runner.go:130] ! I0210 12:19:10.454659       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.4.0/24] 
	I0210 12:23:10.197440    5644 command_runner.go:130] ! I0210 12:19:10.454747       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.197440    5644 command_runner.go:130] ! I0210 12:19:10.454768       1 main.go:301] handling current node
	I0210 12:23:10.197440    5644 command_runner.go:130] ! I0210 12:19:20.454254       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.197440    5644 command_runner.go:130] ! I0210 12:19:20.454380       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.197440    5644 command_runner.go:130] ! I0210 12:19:20.454728       1 main.go:297] Handling node with IPs: map[172.29.129.10:{}]
	I0210 12:23:10.197440    5644 command_runner.go:130] ! I0210 12:19:20.454888       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.4.0/24] 
	I0210 12:23:10.197440    5644 command_runner.go:130] ! I0210 12:19:20.455123       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.197440    5644 command_runner.go:130] ! I0210 12:19:20.455136       1 main.go:301] handling current node
	I0210 12:23:10.197440    5644 command_runner.go:130] ! I0210 12:19:30.445655       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:10.197440    5644 command_runner.go:130] ! I0210 12:19:30.445708       1 main.go:301] handling current node
	I0210 12:23:10.197440    5644 command_runner.go:130] ! I0210 12:19:30.445727       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:10.197440    5644 command_runner.go:130] ! I0210 12:19:30.445734       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:10.197440    5644 command_runner.go:130] ! I0210 12:19:30.446565       1 main.go:297] Handling node with IPs: map[172.29.129.10:{}]
	I0210 12:23:10.197440    5644 command_runner.go:130] ! I0210 12:19:30.446658       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.4.0/24] 
	I0210 12:23:10.215323    5644 logs.go:123] Gathering logs for container status ...
	I0210 12:23:10.215323    5644 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 12:23:10.280244    5644 command_runner.go:130] > CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	I0210 12:23:10.280244    5644 command_runner.go:130] > ab1277406daa9       8c811b4aec35f                                                                                         7 seconds ago        Running             busybox                   1                   0f8bbdd2fed29       busybox-58667487b6-8shfg
	I0210 12:23:10.280339    5644 command_runner.go:130] > 9240ce80f94ce       c69fa2e9cbf5f                                                                                         7 seconds ago        Running             coredns                   1                   e58006549b603       coredns-668d6bf9bc-w8rr9
	I0210 12:23:10.280339    5644 command_runner.go:130] > 59ace13383a7f       6e38f40d628db                                                                                         27 seconds ago       Running             storage-provisioner       2                   bd998e6ebeb39       storage-provisioner
	I0210 12:23:10.280339    5644 command_runner.go:130] > efc2d4164d811       d300845f67aeb                                                                                         About a minute ago   Running             kindnet-cni               1                   e5b54589cf0f1       kindnet-c2mb8
	I0210 12:23:10.280416    5644 command_runner.go:130] > e57ea4c7f300b       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   bd998e6ebeb39       storage-provisioner
	I0210 12:23:10.280416    5644 command_runner.go:130] > 6640b4e3d696c       e29f9c7391fd9                                                                                         About a minute ago   Running             kube-proxy                1                   b9afdceca416d       kube-proxy-rrh82
	I0210 12:23:10.280474    5644 command_runner.go:130] > bd1666238ae65       019ee182b58e2                                                                                         About a minute ago   Running             kube-controller-manager   1                   9c3e574a33498       kube-controller-manager-multinode-032400
	I0210 12:23:10.280499    5644 command_runner.go:130] > f368bd8767741       95c0bda56fc4d                                                                                         About a minute ago   Running             kube-apiserver            0                   ae5696c38864a       kube-apiserver-multinode-032400
	I0210 12:23:10.280499    5644 command_runner.go:130] > 2c0b973818252       a9e7e6b294baf                                                                                         About a minute ago   Running             etcd                      0                   016ad4d720680       etcd-multinode-032400
	I0210 12:23:10.280564    5644 command_runner.go:130] > 440b6adf4512a       2b0d6572d062c                                                                                         About a minute ago   Running             kube-scheduler            1                   8059b20f65945       kube-scheduler-multinode-032400
	I0210 12:23:10.280564    5644 command_runner.go:130] > 8d0c6584f2b12       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   19 minutes ago       Exited              busybox                   0                   ed034f1578c96       busybox-58667487b6-8shfg
	I0210 12:23:10.280628    5644 command_runner.go:130] > c5b854dbb9121       c69fa2e9cbf5f                                                                                         23 minutes ago       Exited              coredns                   0                   794995bca6b5b       coredns-668d6bf9bc-w8rr9
	I0210 12:23:10.280628    5644 command_runner.go:130] > 4439940fa5f42       kindest/kindnetd@sha256:56ea59f77258052c4506076525318ffa66817500f68e94a50fdf7d600a280d26              23 minutes ago       Exited              kindnet-cni               0                   26d9e119a02c5       kindnet-c2mb8
	I0210 12:23:10.280628    5644 command_runner.go:130] > 148309413de8d       e29f9c7391fd9                                                                                         23 minutes ago       Exited              kube-proxy                0                   a70f430921ec2       kube-proxy-rrh82
	I0210 12:23:10.280704    5644 command_runner.go:130] > adf520f9b9d78       2b0d6572d062c                                                                                         24 minutes ago       Exited              kube-scheduler            0                   d33433fbce480       kube-scheduler-multinode-032400
	I0210 12:23:10.280704    5644 command_runner.go:130] > 9408ce83d7d38       019ee182b58e2                                                                                         24 minutes ago       Exited              kube-controller-manager   0                   ee16b295f58db       kube-controller-manager-multinode-032400
	I0210 12:23:10.282993    5644 logs.go:123] Gathering logs for kube-scheduler [adf520f9b9d7] ...
	I0210 12:23:10.282993    5644 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adf520f9b9d7"
	I0210 12:23:10.310270    5644 command_runner.go:130] ! I0210 11:59:00.019140       1 serving.go:386] Generated self-signed cert in-memory
	I0210 12:23:10.310345    5644 command_runner.go:130] ! W0210 11:59:02.451878       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0210 12:23:10.310345    5644 command_runner.go:130] ! W0210 11:59:02.452178       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0210 12:23:10.310345    5644 command_runner.go:130] ! W0210 11:59:02.452350       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0210 12:23:10.310345    5644 command_runner.go:130] ! W0210 11:59:02.452478       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0210 12:23:10.310444    5644 command_runner.go:130] ! I0210 11:59:02.632458       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.1"
	I0210 12:23:10.310444    5644 command_runner.go:130] ! I0210 11:59:02.632517       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0210 12:23:10.310444    5644 command_runner.go:130] ! I0210 11:59:02.686485       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0210 12:23:10.310515    5644 command_runner.go:130] ! I0210 11:59:02.686744       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0210 12:23:10.310515    5644 command_runner.go:130] ! I0210 11:59:02.689142       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0210 12:23:10.310515    5644 command_runner.go:130] ! I0210 11:59:02.708240       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0210 12:23:10.310592    5644 command_runner.go:130] ! W0210 11:59:02.715958       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0210 12:23:10.310592    5644 command_runner.go:130] ! W0210 11:59:02.751571       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0210 12:23:10.310659    5644 command_runner.go:130] ! E0210 11:59:02.751658       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0210 12:23:10.310659    5644 command_runner.go:130] ! E0210 11:59:02.717894       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:10.310659    5644 command_runner.go:130] ! W0210 11:59:02.766153       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0210 12:23:10.310761    5644 command_runner.go:130] ! E0210 11:59:02.768039       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:10.310761    5644 command_runner.go:130] ! W0210 11:59:02.768257       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0210 12:23:10.310829    5644 command_runner.go:130] ! E0210 11:59:02.768346       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:10.310900    5644 command_runner.go:130] ! W0210 11:59:02.766789       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0210 12:23:10.310900    5644 command_runner.go:130] ! E0210 11:59:02.768584       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:10.310965    5644 command_runner.go:130] ! W0210 11:59:02.766885       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0210 12:23:10.310965    5644 command_runner.go:130] ! E0210 11:59:02.768838       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:10.311037    5644 command_runner.go:130] ! W0210 11:59:02.769507       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0210 12:23:10.311037    5644 command_runner.go:130] ! E0210 11:59:02.778960       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:10.311100    5644 command_runner.go:130] ! W0210 11:59:02.769773       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0210 12:23:10.311169    5644 command_runner.go:130] ! E0210 11:59:02.779013       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:10.311169    5644 command_runner.go:130] ! W0210 11:59:02.767082       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0210 12:23:10.311243    5644 command_runner.go:130] ! E0210 11:59:02.779037       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:10.311260    5644 command_runner.go:130] ! W0210 11:59:02.767143       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0210 12:23:10.311330    5644 command_runner.go:130] ! E0210 11:59:02.779057       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:10.311330    5644 command_runner.go:130] ! W0210 11:59:02.767174       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0210 12:23:10.311400    5644 command_runner.go:130] ! E0210 11:59:02.779079       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:10.311400    5644 command_runner.go:130] ! W0210 11:59:02.767205       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0210 12:23:10.311471    5644 command_runner.go:130] ! E0210 11:59:02.779095       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:10.311471    5644 command_runner.go:130] ! W0210 11:59:02.767318       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	I0210 12:23:10.311543    5644 command_runner.go:130] ! E0210 11:59:02.779525       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:10.311543    5644 command_runner.go:130] ! W0210 11:59:02.769947       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0210 12:23:10.311615    5644 command_runner.go:130] ! E0210 11:59:02.779843       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:10.311692    5644 command_runner.go:130] ! W0210 11:59:02.769992       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0210 12:23:10.311692    5644 command_runner.go:130] ! E0210 11:59:02.779885       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:10.311757    5644 command_runner.go:130] ! W0210 11:59:02.767047       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0210 12:23:10.311757    5644 command_runner.go:130] ! E0210 11:59:02.779962       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:10.311829    5644 command_runner.go:130] ! W0210 11:59:03.612263       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0210 12:23:10.311892    5644 command_runner.go:130] ! E0210 11:59:03.612405       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:10.311892    5644 command_runner.go:130] ! W0210 11:59:03.698062       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0210 12:23:10.311962    5644 command_runner.go:130] ! E0210 11:59:03.698491       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:10.311962    5644 command_runner.go:130] ! W0210 11:59:03.766764       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0210 12:23:10.312027    5644 command_runner.go:130] ! E0210 11:59:03.767296       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:10.312027    5644 command_runner.go:130] ! W0210 11:59:03.769299       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0210 12:23:10.312093    5644 command_runner.go:130] ! E0210 11:59:03.769340       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:10.312093    5644 command_runner.go:130] ! W0210 11:59:03.811212       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0210 12:23:10.312160    5644 command_runner.go:130] ! E0210 11:59:03.811686       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:10.312160    5644 command_runner.go:130] ! W0210 11:59:03.864096       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0210 12:23:10.312251    5644 command_runner.go:130] ! E0210 11:59:03.864216       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:10.312321    5644 command_runner.go:130] ! W0210 11:59:03.954246       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0210 12:23:10.312321    5644 command_runner.go:130] ! E0210 11:59:03.955266       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0210 12:23:10.312388    5644 command_runner.go:130] ! W0210 11:59:03.968978       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0210 12:23:10.312452    5644 command_runner.go:130] ! E0210 11:59:03.969083       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:10.312452    5644 command_runner.go:130] ! W0210 11:59:04.075142       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0210 12:23:10.312521    5644 command_runner.go:130] ! E0210 11:59:04.075319       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:10.312521    5644 command_runner.go:130] ! W0210 11:59:04.157608       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	I0210 12:23:10.312583    5644 command_runner.go:130] ! E0210 11:59:04.157748       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:10.312583    5644 command_runner.go:130] ! W0210 11:59:04.210028       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0210 12:23:10.312583    5644 command_runner.go:130] ! E0210 11:59:04.210075       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:10.312583    5644 command_runner.go:130] ! W0210 11:59:04.217612       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0210 12:23:10.312583    5644 command_runner.go:130] ! E0210 11:59:04.217750       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:10.312583    5644 command_runner.go:130] ! W0210 11:59:04.256700       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0210 12:23:10.312583    5644 command_runner.go:130] ! E0210 11:59:04.256904       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:10.312583    5644 command_runner.go:130] ! W0210 11:59:04.322264       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0210 12:23:10.312583    5644 command_runner.go:130] ! E0210 11:59:04.322526       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:10.312583    5644 command_runner.go:130] ! W0210 11:59:04.330202       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0210 12:23:10.312583    5644 command_runner.go:130] ! E0210 11:59:04.330584       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:10.312583    5644 command_runner.go:130] ! W0210 11:59:04.340076       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0210 12:23:10.312583    5644 command_runner.go:130] ! E0210 11:59:04.340159       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:10.312583    5644 command_runner.go:130] ! W0210 11:59:05.486363       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0210 12:23:10.312583    5644 command_runner.go:130] ! E0210 11:59:05.486509       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:10.312583    5644 command_runner.go:130] ! W0210 11:59:05.818248       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0210 12:23:10.312583    5644 command_runner.go:130] ! E0210 11:59:05.818617       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:10.312583    5644 command_runner.go:130] ! W0210 11:59:06.039223       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0210 12:23:10.312583    5644 command_runner.go:130] ! E0210 11:59:06.039458       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:10.312583    5644 command_runner.go:130] ! W0210 11:59:06.087664       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0210 12:23:10.313229    5644 command_runner.go:130] ! E0210 11:59:06.087837       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:10.313229    5644 command_runner.go:130] ! I0210 11:59:07.187201       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0210 12:23:10.313229    5644 command_runner.go:130] ! I0210 12:19:35.481531       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0210 12:23:10.313229    5644 command_runner.go:130] ! I0210 12:19:35.493085       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0210 12:23:10.313229    5644 command_runner.go:130] ! I0210 12:19:35.493424       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0210 12:23:10.313229    5644 command_runner.go:130] ! E0210 12:19:35.644426       1 run.go:72] "command failed" err="finished without leader elect"
	I0210 12:23:10.326226    5644 logs.go:123] Gathering logs for kube-controller-manager [bd1666238ae6] ...
	I0210 12:23:10.326226    5644 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd1666238ae6"
	I0210 12:23:10.354224    5644 command_runner.go:130] ! I0210 12:21:56.136957       1 serving.go:386] Generated self-signed cert in-memory
	I0210 12:23:10.354224    5644 command_runner.go:130] ! I0210 12:21:57.522140       1 controllermanager.go:185] "Starting" version="v1.32.1"
	I0210 12:23:10.354543    5644 command_runner.go:130] ! I0210 12:21:57.522494       1 controllermanager.go:187] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0210 12:23:10.354543    5644 command_runner.go:130] ! I0210 12:21:57.526750       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0210 12:23:10.354543    5644 command_runner.go:130] ! I0210 12:21:57.527225       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0210 12:23:10.354543    5644 command_runner.go:130] ! I0210 12:21:57.527482       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0210 12:23:10.354543    5644 command_runner.go:130] ! I0210 12:21:57.527780       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0210 12:23:10.354543    5644 command_runner.go:130] ! I0210 12:22:00.130437       1 controllermanager.go:765] "Started controller" controller="serviceaccount-token-controller"
	I0210 12:23:10.354543    5644 command_runner.go:130] ! I0210 12:22:00.131309       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0210 12:23:10.354543    5644 command_runner.go:130] ! I0210 12:22:00.141220       1 controllermanager.go:765] "Started controller" controller="deployment-controller"
	I0210 12:23:10.354543    5644 command_runner.go:130] ! I0210 12:22:00.141440       1 deployment_controller.go:173] "Starting controller" logger="deployment-controller" controller="deployment"
	I0210 12:23:10.354543    5644 command_runner.go:130] ! I0210 12:22:00.141453       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0210 12:23:10.354672    5644 command_runner.go:130] ! I0210 12:22:00.144469       1 controllermanager.go:765] "Started controller" controller="replicaset-controller"
	I0210 12:23:10.354672    5644 command_runner.go:130] ! I0210 12:22:00.144719       1 replica_set.go:217] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0210 12:23:10.354794    5644 command_runner.go:130] ! I0210 12:22:00.144731       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0210 12:23:10.354794    5644 command_runner.go:130] ! I0210 12:22:00.152448       1 controllermanager.go:765] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0210 12:23:10.354875    5644 command_runner.go:130] ! I0210 12:22:00.152587       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0210 12:23:10.354875    5644 command_runner.go:130] ! I0210 12:22:00.152599       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0210 12:23:10.355414    5644 command_runner.go:130] ! I0210 12:22:00.158456       1 controllermanager.go:765] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0210 12:23:10.355414    5644 command_runner.go:130] ! I0210 12:22:00.158611       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0210 12:23:10.355414    5644 command_runner.go:130] ! I0210 12:22:00.162098       1 controllermanager.go:765] "Started controller" controller="bootstrap-signer-controller"
	I0210 12:23:10.355414    5644 command_runner.go:130] ! I0210 12:22:00.162345       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0210 12:23:10.355414    5644 command_runner.go:130] ! I0210 12:22:00.162310       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0210 12:23:10.355521    5644 command_runner.go:130] ! I0210 12:22:00.234708       1 shared_informer.go:320] Caches are synced for tokens
	I0210 12:23:10.355521    5644 command_runner.go:130] ! I0210 12:22:00.279835       1 controllermanager.go:765] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0210 12:23:10.355521    5644 command_runner.go:130] ! I0210 12:22:00.279920       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0210 12:23:10.355521    5644 command_runner.go:130] ! I0210 12:22:00.284387       1 controllermanager.go:765] "Started controller" controller="serviceaccount-controller"
	I0210 12:23:10.355521    5644 command_runner.go:130] ! I0210 12:22:00.284535       1 serviceaccounts_controller.go:114] "Starting service account controller" logger="serviceaccount-controller"
	I0210 12:23:10.355599    5644 command_runner.go:130] ! I0210 12:22:00.284562       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0210 12:23:10.355599    5644 command_runner.go:130] ! I0210 12:22:00.327944       1 namespace_controller.go:202] "Starting namespace controller" logger="namespace-controller"
	I0210 12:23:10.355599    5644 command_runner.go:130] ! I0210 12:22:00.330591       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0210 12:23:10.355599    5644 command_runner.go:130] ! I0210 12:22:00.327092       1 controllermanager.go:765] "Started controller" controller="namespace-controller"
	I0210 12:23:10.355599    5644 command_runner.go:130] ! I0210 12:22:00.346573       1 controllermanager.go:765] "Started controller" controller="disruption-controller"
	I0210 12:23:10.355599    5644 command_runner.go:130] ! I0210 12:22:00.346887       1 disruption.go:452] "Sending events to api server." logger="disruption-controller"
	I0210 12:23:10.355599    5644 command_runner.go:130] ! I0210 12:22:00.347031       1 disruption.go:463] "Starting disruption controller" logger="disruption-controller"
	I0210 12:23:10.355599    5644 command_runner.go:130] ! I0210 12:22:00.347049       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0210 12:23:10.355599    5644 command_runner.go:130] ! I0210 12:22:00.351852       1 controllermanager.go:765] "Started controller" controller="ttl-controller"
	I0210 12:23:10.355599    5644 command_runner.go:130] ! I0210 12:22:00.351879       1 controllermanager.go:723] "Skipping a cloud provider controller" controller="service-lb-controller"
	I0210 12:23:10.355599    5644 command_runner.go:130] ! I0210 12:22:00.351888       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="volumeattributesclass-protection-controller" requiredFeatureGates=["VolumeAttributesClass"]
	I0210 12:23:10.355599    5644 command_runner.go:130] ! I0210 12:22:00.354359       1 ttl_controller.go:127] "Starting TTL controller" logger="ttl-controller"
	I0210 12:23:10.355599    5644 command_runner.go:130] ! I0210 12:22:00.354950       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0210 12:23:10.355599    5644 command_runner.go:130] ! I0210 12:22:00.356835       1 controllermanager.go:765] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0210 12:23:10.355599    5644 command_runner.go:130] ! I0210 12:22:00.356898       1 publisher.go:107] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0210 12:23:10.355599    5644 command_runner.go:130] ! I0210 12:22:00.357416       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0210 12:23:10.355599    5644 command_runner.go:130] ! I0210 12:22:00.366037       1 controllermanager.go:765] "Started controller" controller="ephemeral-volume-controller"
	I0210 12:23:10.355599    5644 command_runner.go:130] ! I0210 12:22:00.367715       1 controller.go:173] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0210 12:23:10.355599    5644 command_runner.go:130] ! I0210 12:22:00.367737       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0210 12:23:10.355599    5644 command_runner.go:130] ! I0210 12:22:00.403903       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0210 12:23:10.355599    5644 command_runner.go:130] ! I0210 12:22:00.403962       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0210 12:23:10.355599    5644 command_runner.go:130] ! I0210 12:22:00.403986       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0210 12:23:10.355599    5644 command_runner.go:130] ! W0210 12:22:00.404002       1 shared_informer.go:597] resyncPeriod 20h28m18.826536572s is smaller than resyncCheckPeriod 23h57m31.932623877s and the informer has already started. Changing it to 23h57m31.932623877s
	I0210 12:23:10.355599    5644 command_runner.go:130] ! I0210 12:22:00.404054       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0210 12:23:10.355599    5644 command_runner.go:130] ! I0210 12:22:00.404070       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0210 12:23:10.355599    5644 command_runner.go:130] ! I0210 12:22:00.404083       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0210 12:23:10.355599    5644 command_runner.go:130] ! I0210 12:22:00.404215       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0210 12:23:10.355599    5644 command_runner.go:130] ! I0210 12:22:00.404325       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0210 12:23:10.355599    5644 command_runner.go:130] ! I0210 12:22:00.404361       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0210 12:23:10.355599    5644 command_runner.go:130] ! W0210 12:22:00.404375       1 shared_informer.go:597] resyncPeriod 19h58m52.828542411s is smaller than resyncCheckPeriod 23h57m31.932623877s and the informer has already started. Changing it to 23h57m31.932623877s
	I0210 12:23:10.355599    5644 command_runner.go:130] ! I0210 12:22:00.404428       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0210 12:23:10.356136    5644 command_runner.go:130] ! I0210 12:22:00.404501       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0210 12:23:10.356136    5644 command_runner.go:130] ! I0210 12:22:00.404548       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0210 12:23:10.356222    5644 command_runner.go:130] ! I0210 12:22:00.404581       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0210 12:23:10.356222    5644 command_runner.go:130] ! I0210 12:22:00.404616       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0210 12:23:10.356287    5644 command_runner.go:130] ! I0210 12:22:00.405026       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0210 12:23:10.356287    5644 command_runner.go:130] ! I0210 12:22:00.405085       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0210 12:23:10.356287    5644 command_runner.go:130] ! I0210 12:22:00.405102       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0210 12:23:10.356287    5644 command_runner.go:130] ! I0210 12:22:00.405117       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0210 12:23:10.356373    5644 command_runner.go:130] ! I0210 12:22:00.405133       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0210 12:23:10.356373    5644 command_runner.go:130] ! I0210 12:22:00.405155       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0210 12:23:10.356373    5644 command_runner.go:130] ! I0210 12:22:00.407446       1 controllermanager.go:765] "Started controller" controller="resourcequota-controller"
	I0210 12:23:10.356373    5644 command_runner.go:130] ! I0210 12:22:00.407747       1 resource_quota_controller.go:300] "Starting resource quota controller" logger="resourcequota-controller"
	I0210 12:23:10.356373    5644 command_runner.go:130] ! I0210 12:22:00.407814       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0210 12:23:10.356373    5644 command_runner.go:130] ! I0210 12:22:00.408146       1 resource_quota_monitor.go:308] "QuotaMonitor running" logger="resourcequota-controller"
	I0210 12:23:10.356373    5644 command_runner.go:130] ! I0210 12:22:00.416214       1 controllermanager.go:765] "Started controller" controller="cronjob-controller"
	I0210 12:23:10.356373    5644 command_runner.go:130] ! I0210 12:22:00.416425       1 controllermanager.go:723] "Skipping a cloud provider controller" controller="node-route-controller"
	I0210 12:23:10.356373    5644 command_runner.go:130] ! I0210 12:22:00.417001       1 cronjob_controllerv2.go:145] "Starting cronjob controller v2" logger="cronjob-controller"
	I0210 12:23:10.356373    5644 command_runner.go:130] ! I0210 12:22:00.418614       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0210 12:23:10.356373    5644 command_runner.go:130] ! I0210 12:22:00.448143       1 controllermanager.go:765] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0210 12:23:10.356373    5644 command_runner.go:130] ! I0210 12:22:00.448205       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0210 12:23:10.356373    5644 command_runner.go:130] ! I0210 12:22:00.453507       1 pvc_protection_controller.go:168] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0210 12:23:10.356373    5644 command_runner.go:130] ! I0210 12:22:00.453526       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0210 12:23:10.356373    5644 command_runner.go:130] ! I0210 12:22:00.457427       1 controllermanager.go:765] "Started controller" controller="pod-garbage-collector-controller"
	I0210 12:23:10.356373    5644 command_runner.go:130] ! I0210 12:22:00.457525       1 gc_controller.go:99] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0210 12:23:10.356373    5644 command_runner.go:130] ! I0210 12:22:00.457536       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0210 12:23:10.356373    5644 command_runner.go:130] ! I0210 12:22:00.461217       1 controllermanager.go:765] "Started controller" controller="job-controller"
	I0210 12:23:10.356373    5644 command_runner.go:130] ! I0210 12:22:00.461528       1 job_controller.go:243] "Starting job controller" logger="job-controller"
	I0210 12:23:10.356373    5644 command_runner.go:130] ! I0210 12:22:00.461540       1 shared_informer.go:313] Waiting for caches to sync for job
	I0210 12:23:10.356373    5644 command_runner.go:130] ! I0210 12:22:00.473609       1 controllermanager.go:765] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0210 12:23:10.356373    5644 command_runner.go:130] ! I0210 12:22:00.473750       1 horizontal.go:201] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0210 12:23:10.356373    5644 command_runner.go:130] ! I0210 12:22:00.476529       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0210 12:23:10.356373    5644 command_runner.go:130] ! I0210 12:22:00.478245       1 controllermanager.go:765] "Started controller" controller="persistentvolume-expander-controller"
	I0210 12:23:10.356373    5644 command_runner.go:130] ! I0210 12:22:00.478384       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0210 12:23:10.356373    5644 command_runner.go:130] ! I0210 12:22:00.478413       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0210 12:23:10.356373    5644 command_runner.go:130] ! I0210 12:22:00.486564       1 controllermanager.go:765] "Started controller" controller="ttl-after-finished-controller"
	I0210 12:23:10.356373    5644 command_runner.go:130] ! I0210 12:22:00.490692       1 ttlafterfinished_controller.go:112] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0210 12:23:10.356373    5644 command_runner.go:130] ! I0210 12:22:00.490721       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0210 12:23:10.356373    5644 command_runner.go:130] ! I0210 12:22:00.491067       1 controllermanager.go:765] "Started controller" controller="endpointslice-controller"
	I0210 12:23:10.356373    5644 command_runner.go:130] ! I0210 12:22:00.491429       1 endpointslice_controller.go:281] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0210 12:23:10.356373    5644 command_runner.go:130] ! I0210 12:22:00.492232       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0210 12:23:10.356373    5644 command_runner.go:130] ! I0210 12:22:00.495646       1 controllermanager.go:765] "Started controller" controller="endpointslice-mirroring-controller"
	I0210 12:23:10.356373    5644 command_runner.go:130] ! I0210 12:22:00.500509       1 endpointslicemirroring_controller.go:227] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0210 12:23:10.356373    5644 command_runner.go:130] ! I0210 12:22:00.500524       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0210 12:23:10.356373    5644 command_runner.go:130] ! I0210 12:22:00.515593       1 controllermanager.go:765] "Started controller" controller="garbage-collector-controller"
	I0210 12:23:10.356373    5644 command_runner.go:130] ! I0210 12:22:00.515770       1 garbagecollector.go:144] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0210 12:23:10.356373    5644 command_runner.go:130] ! I0210 12:22:00.515782       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0210 12:23:10.356373    5644 command_runner.go:130] ! I0210 12:22:00.515950       1 graph_builder.go:351] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0210 12:23:10.356373    5644 command_runner.go:130] ! I0210 12:22:00.525570       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0210 12:23:10.356911    5644 command_runner.go:130] ! I0210 12:22:00.525594       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0210 12:23:10.357015    5644 command_runner.go:130] ! I0210 12:22:00.525618       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0210 12:23:10.357048    5644 command_runner.go:130] ! I0210 12:22:00.525997       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0210 12:23:10.357070    5644 command_runner.go:130] ! I0210 12:22:00.526011       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0210 12:23:10.357070    5644 command_runner.go:130] ! I0210 12:22:00.526038       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0210 12:23:10.357120    5644 command_runner.go:130] ! I0210 12:22:00.526889       1 controllermanager.go:765] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0210 12:23:10.357120    5644 command_runner.go:130] ! I0210 12:22:00.526935       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0210 12:23:10.357120    5644 command_runner.go:130] ! I0210 12:22:00.526945       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0210 12:23:10.357120    5644 command_runner.go:130] ! I0210 12:22:00.526972       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0210 12:23:10.357190    5644 command_runner.go:130] ! I0210 12:22:00.526980       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0210 12:23:10.357190    5644 command_runner.go:130] ! I0210 12:22:00.527008       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0210 12:23:10.357190    5644 command_runner.go:130] ! I0210 12:22:00.527135       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0210 12:23:10.357260    5644 command_runner.go:130] ! W0210 12:22:00.695736       1 type.go:183] The watchlist request for nodes ended with an error, falling back to the standard LIST semantics, err = nodes is forbidden: User "system:serviceaccount:kube-system:node-controller" cannot watch resource "nodes" in API group "" at the cluster scope
	I0210 12:23:10.357301    5644 command_runner.go:130] ! I0210 12:22:00.710455       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0210 12:23:10.357301    5644 command_runner.go:130] ! I0210 12:22:00.710510       1 controllermanager.go:765] "Started controller" controller="node-ipam-controller"
	I0210 12:23:10.357380    5644 command_runner.go:130] ! I0210 12:22:00.710723       1 node_ipam_controller.go:141] "Starting ipam controller" logger="node-ipam-controller"
	I0210 12:23:10.357380    5644 command_runner.go:130] ! I0210 12:22:00.710737       1 shared_informer.go:313] Waiting for caches to sync for node
	I0210 12:23:10.357411    5644 command_runner.go:130] ! I0210 12:22:00.739126       1 node_lifecycle_controller.go:432] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0210 12:23:10.357411    5644 command_runner.go:130] ! I0210 12:22:00.739307       1 controllermanager.go:765] "Started controller" controller="node-lifecycle-controller"
	I0210 12:23:10.357411    5644 command_runner.go:130] ! I0210 12:22:00.739552       1 node_lifecycle_controller.go:466] "Sending events to api server" logger="node-lifecycle-controller"
	I0210 12:23:10.357411    5644 command_runner.go:130] ! I0210 12:22:00.739769       1 node_lifecycle_controller.go:477] "Starting node controller" logger="node-lifecycle-controller"
	I0210 12:23:10.357411    5644 command_runner.go:130] ! I0210 12:22:00.739879       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0210 12:23:10.357482    5644 command_runner.go:130] ! I0210 12:22:00.790336       1 controllermanager.go:765] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0210 12:23:10.357482    5644 command_runner.go:130] ! I0210 12:22:00.790542       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0210 12:23:10.357482    5644 command_runner.go:130] ! I0210 12:22:00.790764       1 attach_detach_controller.go:338] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0210 12:23:10.357548    5644 command_runner.go:130] ! I0210 12:22:00.790827       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0210 12:23:10.357548    5644 command_runner.go:130] ! I0210 12:22:00.837132       1 controllermanager.go:765] "Started controller" controller="endpoints-controller"
	I0210 12:23:10.357548    5644 command_runner.go:130] ! I0210 12:22:00.837610       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="selinux-warning-controller" requiredFeatureGates=["SELinuxChangePolicy"]
	I0210 12:23:10.357548    5644 command_runner.go:130] ! I0210 12:22:00.838001       1 endpoints_controller.go:182] "Starting endpoint controller" logger="endpoints-controller"
	I0210 12:23:10.357548    5644 command_runner.go:130] ! I0210 12:22:00.838149       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0210 12:23:10.357626    5644 command_runner.go:130] ! I0210 12:22:00.889036       1 controllermanager.go:765] "Started controller" controller="daemonset-controller"
	I0210 12:23:10.357626    5644 command_runner.go:130] ! I0210 12:22:00.889446       1 daemon_controller.go:294] "Starting daemon sets controller" logger="daemonset-controller"
	I0210 12:23:10.357626    5644 command_runner.go:130] ! I0210 12:22:00.889702       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0210 12:23:10.357626    5644 command_runner.go:130] ! I0210 12:22:00.947566       1 controllermanager.go:765] "Started controller" controller="token-cleaner-controller"
	I0210 12:23:10.357689    5644 command_runner.go:130] ! I0210 12:22:00.947979       1 tokencleaner.go:117] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0210 12:23:10.357689    5644 command_runner.go:130] ! I0210 12:22:00.948130       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0210 12:23:10.357689    5644 command_runner.go:130] ! I0210 12:22:00.948247       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0210 12:23:10.357760    5644 command_runner.go:130] ! I0210 12:22:00.998978       1 controllermanager.go:765] "Started controller" controller="persistentvolume-binder-controller"
	I0210 12:23:10.357760    5644 command_runner.go:130] ! I0210 12:22:00.999002       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="kube-apiserver-serving-clustertrustbundle-publisher-controller" requiredFeatureGates=["ClusterTrustBundle"]
	I0210 12:23:10.357760    5644 command_runner.go:130] ! I0210 12:22:00.999105       1 pv_controller_base.go:308] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0210 12:23:10.357823    5644 command_runner.go:130] ! I0210 12:22:00.999117       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0210 12:23:10.357823    5644 command_runner.go:130] ! I0210 12:22:01.040388       1 controllermanager.go:765] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0210 12:23:10.357823    5644 command_runner.go:130] ! I0210 12:22:01.040661       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0210 12:23:10.357901    5644 command_runner.go:130] ! I0210 12:22:01.041004       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0210 12:23:10.357901    5644 command_runner.go:130] ! I0210 12:22:01.087635       1 controllermanager.go:765] "Started controller" controller="taint-eviction-controller"
	I0210 12:23:10.357901    5644 command_runner.go:130] ! I0210 12:22:01.088431       1 controllermanager.go:723] "Skipping a cloud provider controller" controller="cloud-node-lifecycle-controller"
	I0210 12:23:10.357963    5644 command_runner.go:130] ! I0210 12:22:01.088403       1 taint_eviction.go:281] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0210 12:23:10.357963    5644 command_runner.go:130] ! I0210 12:22:01.088651       1 taint_eviction.go:287] "Sending events to api server" logger="taint-eviction-controller"
	I0210 12:23:10.357963    5644 command_runner.go:130] ! I0210 12:22:01.088700       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0210 12:23:10.357963    5644 command_runner.go:130] ! I0210 12:22:01.140802       1 controllermanager.go:765] "Started controller" controller="clusterrole-aggregation-controller"
	I0210 12:23:10.358031    5644 command_runner.go:130] ! I0210 12:22:01.140881       1 clusterroleaggregation_controller.go:194] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0210 12:23:10.358031    5644 command_runner.go:130] ! I0210 12:22:01.140893       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0210 12:23:10.358093    5644 command_runner.go:130] ! I0210 12:22:01.188353       1 controllermanager.go:765] "Started controller" controller="persistentvolume-protection-controller"
	I0210 12:23:10.358093    5644 command_runner.go:130] ! I0210 12:22:01.188708       1 controllermanager.go:743] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0210 12:23:10.358093    5644 command_runner.go:130] ! I0210 12:22:01.188662       1 pv_protection_controller.go:81] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0210 12:23:10.358169    5644 command_runner.go:130] ! I0210 12:22:01.189570       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0210 12:23:10.358169    5644 command_runner.go:130] ! I0210 12:22:01.238308       1 controllermanager.go:765] "Started controller" controller="replicationcontroller-controller"
	I0210 12:23:10.358169    5644 command_runner.go:130] ! I0210 12:22:01.239287       1 replica_set.go:217] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0210 12:23:10.358228    5644 command_runner.go:130] ! I0210 12:22:01.239614       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0210 12:23:10.358228    5644 command_runner.go:130] ! I0210 12:22:01.290486       1 controllermanager.go:765] "Started controller" controller="statefulset-controller"
	I0210 12:23:10.358228    5644 command_runner.go:130] ! I0210 12:22:01.297980       1 stateful_set.go:166] "Starting stateful set controller" logger="statefulset-controller"
	I0210 12:23:10.358228    5644 command_runner.go:130] ! I0210 12:22:01.298004       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0210 12:23:10.358294    5644 command_runner.go:130] ! I0210 12:22:01.330472       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0210 12:23:10.358294    5644 command_runner.go:130] ! I0210 12:22:01.360391       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0210 12:23:10.358294    5644 command_runner.go:130] ! I0210 12:22:01.379524       1 shared_informer.go:320] Caches are synced for expand
	I0210 12:23:10.358356    5644 command_runner.go:130] ! I0210 12:22:01.412039       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0210 12:23:10.358356    5644 command_runner.go:130] ! I0210 12:22:01.427926       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-032400\" does not exist"
	I0210 12:23:10.358356    5644 command_runner.go:130] ! I0210 12:22:01.429792       1 shared_informer.go:320] Caches are synced for cronjob
	I0210 12:23:10.358430    5644 command_runner.go:130] ! I0210 12:22:01.431083       1 shared_informer.go:320] Caches are synced for namespace
	I0210 12:23:10.358430    5644 command_runner.go:130] ! I0210 12:22:01.433127       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-032400-m02\" does not exist"
	I0210 12:23:10.358430    5644 command_runner.go:130] ! I0210 12:22:01.438586       1 shared_informer.go:320] Caches are synced for endpoint
	I0210 12:23:10.358491    5644 command_runner.go:130] ! I0210 12:22:01.455792       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-032400-m03\" does not exist"
	I0210 12:23:10.358532    5644 command_runner.go:130] ! I0210 12:22:01.443963       1 shared_informer.go:320] Caches are synced for deployment
	I0210 12:23:10.358532    5644 command_runner.go:130] ! I0210 12:22:01.458494       1 shared_informer.go:320] Caches are synced for GC
	I0210 12:23:10.358581    5644 command_runner.go:130] ! I0210 12:22:01.458605       1 shared_informer.go:320] Caches are synced for crt configmap
	I0210 12:23:10.358581    5644 command_runner.go:130] ! I0210 12:22:01.462564       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0210 12:23:10.358581    5644 command_runner.go:130] ! I0210 12:22:01.463137       1 shared_informer.go:320] Caches are synced for job
	I0210 12:23:10.358581    5644 command_runner.go:130] ! I0210 12:22:01.470663       1 shared_informer.go:320] Caches are synced for garbage collector
	I0210 12:23:10.358646    5644 command_runner.go:130] ! I0210 12:22:01.454359       1 shared_informer.go:320] Caches are synced for PVC protection
	I0210 12:23:10.358646    5644 command_runner.go:130] ! I0210 12:22:01.454660       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0210 12:23:10.358646    5644 command_runner.go:130] ! I0210 12:22:01.454672       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0210 12:23:10.358646    5644 command_runner.go:130] ! I0210 12:22:01.454682       1 shared_informer.go:320] Caches are synced for disruption
	I0210 12:23:10.358646    5644 command_runner.go:130] ! I0210 12:22:01.455335       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0210 12:23:10.358734    5644 command_runner.go:130] ! I0210 12:22:01.455353       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0210 12:23:10.358734    5644 command_runner.go:130] ! I0210 12:22:01.455645       1 shared_informer.go:320] Caches are synced for TTL
	I0210 12:23:10.358764    5644 command_runner.go:130] ! I0210 12:22:01.455857       1 shared_informer.go:320] Caches are synced for taint
	I0210 12:23:10.358764    5644 command_runner.go:130] ! I0210 12:22:01.479260       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0210 12:23:10.358812    5644 command_runner.go:130] ! I0210 12:22:01.455957       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0210 12:23:10.358812    5644 command_runner.go:130] ! I0210 12:22:01.480860       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0210 12:23:10.358812    5644 command_runner.go:130] ! I0210 12:22:01.471787       1 shared_informer.go:320] Caches are synced for ephemeral
	I0210 12:23:10.358881    5644 command_runner.go:130] ! I0210 12:22:01.488921       1 shared_informer.go:320] Caches are synced for HPA
	I0210 12:23:10.358881    5644 command_runner.go:130] ! I0210 12:22:01.489141       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0210 12:23:10.358881    5644 command_runner.go:130] ! I0210 12:22:01.489425       1 shared_informer.go:320] Caches are synced for service account
	I0210 12:23:10.358946    5644 command_runner.go:130] ! I0210 12:22:01.489837       1 shared_informer.go:320] Caches are synced for PV protection
	I0210 12:23:10.358946    5644 command_runner.go:130] ! I0210 12:22:01.490060       1 shared_informer.go:320] Caches are synced for daemon sets
	I0210 12:23:10.358946    5644 command_runner.go:130] ! I0210 12:22:01.492366       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0210 12:23:10.358946    5644 command_runner.go:130] ! I0210 12:22:01.492536       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0210 12:23:10.359021    5644 command_runner.go:130] ! I0210 12:22:01.492675       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-032400-m02"
	I0210 12:23:10.359021    5644 command_runner.go:130] ! I0210 12:22:01.492787       1 shared_informer.go:320] Caches are synced for attach detach
	I0210 12:23:10.359021    5644 command_runner.go:130] ! I0210 12:22:01.498224       1 shared_informer.go:320] Caches are synced for stateful set
	I0210 12:23:10.359096    5644 command_runner.go:130] ! I0210 12:22:01.499494       1 shared_informer.go:320] Caches are synced for persistent volume
	I0210 12:23:10.359096    5644 command_runner.go:130] ! I0210 12:22:01.515907       1 shared_informer.go:320] Caches are synced for garbage collector
	I0210 12:23:10.359096    5644 command_runner.go:130] ! I0210 12:22:01.518475       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0210 12:23:10.359152    5644 command_runner.go:130] ! I0210 12:22:01.518619       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0210 12:23:10.359152    5644 command_runner.go:130] ! I0210 12:22:01.517754       1 shared_informer.go:320] Caches are synced for node
	I0210 12:23:10.359152    5644 command_runner.go:130] ! I0210 12:22:01.519209       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0210 12:23:10.359152    5644 command_runner.go:130] ! I0210 12:22:01.519352       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0210 12:23:10.359152    5644 command_runner.go:130] ! I0210 12:22:01.517867       1 shared_informer.go:320] Caches are synced for resource quota
	I0210 12:23:10.359234    5644 command_runner.go:130] ! I0210 12:22:01.521228       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0210 12:23:10.359234    5644 command_runner.go:130] ! I0210 12:22:01.521505       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0210 12:23:10.359234    5644 command_runner.go:130] ! I0210 12:22:01.521662       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400"
	I0210 12:23:10.359295    5644 command_runner.go:130] ! I0210 12:22:01.521756       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:23:10.359295    5644 command_runner.go:130] ! I0210 12:22:01.521924       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:10.359295    5644 command_runner.go:130] ! I0210 12:22:01.522649       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-032400-m02"
	I0210 12:23:10.359360    5644 command_runner.go:130] ! I0210 12:22:01.522926       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-032400-m03"
	I0210 12:23:10.359360    5644 command_runner.go:130] ! I0210 12:22:01.523055       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-032400"
	I0210 12:23:10.359425    5644 command_runner.go:130] ! I0210 12:22:01.522650       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:10.359425    5644 command_runner.go:130] ! I0210 12:22:01.523304       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0210 12:23:10.359425    5644 command_runner.go:130] ! I0210 12:22:01.526544       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0210 12:23:10.359501    5644 command_runner.go:130] ! I0210 12:22:01.526740       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0210 12:23:10.359501    5644 command_runner.go:130] ! I0210 12:22:01.527233       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0210 12:23:10.359501    5644 command_runner.go:130] ! I0210 12:22:01.527235       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0210 12:23:10.359561    5644 command_runner.go:130] ! I0210 12:22:01.531258       1 shared_informer.go:320] Caches are synced for resource quota
	I0210 12:23:10.359561    5644 command_runner.go:130] ! I0210 12:22:01.620608       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:10.359561    5644 command_runner.go:130] ! I0210 12:22:01.660535       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="183.150017ms"
	I0210 12:23:10.359561    5644 command_runner.go:130] ! I0210 12:22:01.660786       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="196.91µs"
	I0210 12:23:10.359637    5644 command_runner.go:130] ! I0210 12:22:01.669840       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="192.074947ms"
	I0210 12:23:10.359637    5644 command_runner.go:130] ! I0210 12:22:01.679112       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="47.103µs"
	I0210 12:23:10.359697    5644 command_runner.go:130] ! I0210 12:22:11.608842       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400"
	I0210 12:23:10.359697    5644 command_runner.go:130] ! I0210 12:22:49.026601       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-032400-m02"
	I0210 12:23:10.359697    5644 command_runner.go:130] ! I0210 12:22:49.027936       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400"
	I0210 12:23:10.359773    5644 command_runner.go:130] ! I0210 12:22:49.051398       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400"
	I0210 12:23:10.359773    5644 command_runner.go:130] ! I0210 12:22:51.552649       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:23:10.359773    5644 command_runner.go:130] ! I0210 12:22:51.561524       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400"
	I0210 12:23:10.359773    5644 command_runner.go:130] ! I0210 12:22:51.579437       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:23:10.359834    5644 command_runner.go:130] ! I0210 12:22:51.629083       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="17.615623ms"
	I0210 12:23:10.359899    5644 command_runner.go:130] ! I0210 12:22:51.629955       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="714.433µs"
	I0210 12:23:10.359899    5644 command_runner.go:130] ! I0210 12:22:56.656809       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:23:10.359899    5644 command_runner.go:130] ! I0210 12:23:04.379320       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="10.532877ms"
	I0210 12:23:10.359958    5644 command_runner.go:130] ! I0210 12:23:04.379580       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="104.602µs"
	I0210 12:23:10.359958    5644 command_runner.go:130] ! I0210 12:23:04.418725       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="58.001µs"
	I0210 12:23:10.359958    5644 command_runner.go:130] ! I0210 12:23:04.463938       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="16.341175ms"
	I0210 12:23:10.360038    5644 command_runner.go:130] ! I0210 12:23:04.464695       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="51.6µs"
	I0210 12:23:10.383067    5644 logs.go:123] Gathering logs for Docker ...
	I0210 12:23:10.383067    5644 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0210 12:23:10.406064    5644 command_runner.go:130] > Feb 10 12:20:33 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0210 12:23:10.406064    5644 command_runner.go:130] > Feb 10 12:20:33 minikube cri-dockerd[223]: time="2025-02-10T12:20:33Z" level=info msg="Starting cri-dockerd 0.3.15 (c1c566e)"
	I0210 12:23:10.406064    5644 command_runner.go:130] > Feb 10 12:20:33 minikube cri-dockerd[223]: time="2025-02-10T12:20:33Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0210 12:23:10.406064    5644 command_runner.go:130] > Feb 10 12:20:33 minikube cri-dockerd[223]: time="2025-02-10T12:20:33Z" level=info msg="Start docker client with request timeout 0s"
	I0210 12:23:10.406828    5644 command_runner.go:130] > Feb 10 12:20:33 minikube cri-dockerd[223]: time="2025-02-10T12:20:33Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0210 12:23:10.406828    5644 command_runner.go:130] > Feb 10 12:20:33 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0210 12:23:10.406828    5644 command_runner.go:130] > Feb 10 12:20:33 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0210 12:23:10.406828    5644 command_runner.go:130] > Feb 10 12:20:33 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0210 12:23:10.406828    5644 command_runner.go:130] > Feb 10 12:20:36 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 1.
	I0210 12:23:10.406933    5644 command_runner.go:130] > Feb 10 12:20:36 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0210 12:23:10.406933    5644 command_runner.go:130] > Feb 10 12:20:36 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0210 12:23:10.406960    5644 command_runner.go:130] > Feb 10 12:20:36 minikube cri-dockerd[417]: time="2025-02-10T12:20:36Z" level=info msg="Starting cri-dockerd 0.3.15 (c1c566e)"
	I0210 12:23:10.406960    5644 command_runner.go:130] > Feb 10 12:20:36 minikube cri-dockerd[417]: time="2025-02-10T12:20:36Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0210 12:23:10.407252    5644 command_runner.go:130] > Feb 10 12:20:36 minikube cri-dockerd[417]: time="2025-02-10T12:20:36Z" level=info msg="Start docker client with request timeout 0s"
	I0210 12:23:10.407308    5644 command_runner.go:130] > Feb 10 12:20:36 minikube cri-dockerd[417]: time="2025-02-10T12:20:36Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0210 12:23:10.407308    5644 command_runner.go:130] > Feb 10 12:20:36 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0210 12:23:10.407308    5644 command_runner.go:130] > Feb 10 12:20:36 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0210 12:23:10.407371    5644 command_runner.go:130] > Feb 10 12:20:36 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0210 12:23:10.407371    5644 command_runner.go:130] > Feb 10 12:20:38 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 2.
	I0210 12:23:10.407371    5644 command_runner.go:130] > Feb 10 12:20:38 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0210 12:23:10.407371    5644 command_runner.go:130] > Feb 10 12:20:38 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0210 12:23:10.407431    5644 command_runner.go:130] > Feb 10 12:20:38 minikube cri-dockerd[425]: time="2025-02-10T12:20:38Z" level=info msg="Starting cri-dockerd 0.3.15 (c1c566e)"
	I0210 12:23:10.407431    5644 command_runner.go:130] > Feb 10 12:20:38 minikube cri-dockerd[425]: time="2025-02-10T12:20:38Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0210 12:23:10.407480    5644 command_runner.go:130] > Feb 10 12:20:38 minikube cri-dockerd[425]: time="2025-02-10T12:20:38Z" level=info msg="Start docker client with request timeout 0s"
	I0210 12:23:10.407480    5644 command_runner.go:130] > Feb 10 12:20:38 minikube cri-dockerd[425]: time="2025-02-10T12:20:38Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0210 12:23:10.407513    5644 command_runner.go:130] > Feb 10 12:20:38 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0210 12:23:10.407550    5644 command_runner.go:130] > Feb 10 12:20:38 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0210 12:23:10.407550    5644 command_runner.go:130] > Feb 10 12:20:38 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0210 12:23:10.407550    5644 command_runner.go:130] > Feb 10 12:20:40 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 3.
	I0210 12:23:10.407550    5644 command_runner.go:130] > Feb 10 12:20:40 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0210 12:23:10.407550    5644 command_runner.go:130] > Feb 10 12:20:40 minikube systemd[1]: cri-docker.service: Start request repeated too quickly.
	I0210 12:23:10.407550    5644 command_runner.go:130] > Feb 10 12:20:40 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0210 12:23:10.407550    5644 command_runner.go:130] > Feb 10 12:20:40 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0210 12:23:10.407550    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 systemd[1]: Starting Docker Application Container Engine...
	I0210 12:23:10.407550    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[652]: time="2025-02-10T12:21:19.226981799Z" level=info msg="Starting up"
	I0210 12:23:10.407550    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[652]: time="2025-02-10T12:21:19.228905904Z" level=info msg="containerd not running, starting managed containerd"
	I0210 12:23:10.407550    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[652]: time="2025-02-10T12:21:19.229983406Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=658
	I0210 12:23:10.407550    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.261668386Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	I0210 12:23:10.407550    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.289760856Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0210 12:23:10.407550    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.289873057Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0210 12:23:10.407550    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.289938357Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0210 12:23:10.407550    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.289955257Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0210 12:23:10.408090    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.290688059Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0210 12:23:10.408090    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.290855359Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0210 12:23:10.408179    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.291046360Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0210 12:23:10.408179    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.291150260Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0210 12:23:10.408243    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.291171360Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0210 12:23:10.408243    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.291182560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0210 12:23:10.408243    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.291676861Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0210 12:23:10.408307    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.292369263Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0210 12:23:10.408370    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.300517383Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0210 12:23:10.408370    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.300550484Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0210 12:23:10.408498    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.300790784Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0210 12:23:10.408498    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.300846284Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0210 12:23:10.408498    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.301486786Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0210 12:23:10.408498    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.301530786Z" level=info msg="metadata content store policy set" policy=shared
	I0210 12:23:10.408498    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.306800699Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0210 12:23:10.408498    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.306938800Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0210 12:23:10.409130    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.306962400Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0210 12:23:10.409130    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.306982400Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0210 12:23:10.409130    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.306998000Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0210 12:23:10.409130    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.307070900Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0210 12:23:10.409130    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.307354201Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0210 12:23:10.409130    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.307779102Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0210 12:23:10.409130    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.307803302Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0210 12:23:10.409130    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.307819902Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0210 12:23:10.409130    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.307835502Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0210 12:23:10.409130    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.307854902Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0210 12:23:10.409130    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.307868302Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0210 12:23:10.409130    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.307886902Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0210 12:23:10.409130    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.307903802Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0210 12:23:10.409130    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.307918302Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0210 12:23:10.409130    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.307933302Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0210 12:23:10.409130    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.307946902Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0210 12:23:10.409130    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.307973202Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0210 12:23:10.409130    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.307988502Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0210 12:23:10.409130    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308002102Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0210 12:23:10.409130    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308018302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0210 12:23:10.409130    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308031702Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0210 12:23:10.409130    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308046102Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0210 12:23:10.409130    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308058902Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0210 12:23:10.409130    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308073102Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0210 12:23:10.409130    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308088402Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0210 12:23:10.409130    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308111803Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0210 12:23:10.409130    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308139203Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0210 12:23:10.409130    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308154703Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0210 12:23:10.409130    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308168203Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0210 12:23:10.409130    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308185103Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0210 12:23:10.409130    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308206703Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0210 12:23:10.409130    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308220903Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0210 12:23:10.409130    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308233503Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0210 12:23:10.409130    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308287903Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0210 12:23:10.409130    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308326803Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0210 12:23:10.409130    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308340203Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0210 12:23:10.409130    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308354603Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0210 12:23:10.409130    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308366403Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0210 12:23:10.409130    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308381203Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0210 12:23:10.409130    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308392603Z" level=info msg="NRI interface is disabled by configuration."
	I0210 12:23:10.409130    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308672504Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0210 12:23:10.409130    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308811104Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0210 12:23:10.409130    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308872804Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0210 12:23:10.409130    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308911105Z" level=info msg="containerd successfully booted in 0.050730s"
	I0210 12:23:10.409130    5644 command_runner.go:130] > Feb 10 12:21:20 multinode-032400 dockerd[652]: time="2025-02-10T12:21:20.282476810Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0210 12:23:10.409130    5644 command_runner.go:130] > Feb 10 12:21:20 multinode-032400 dockerd[652]: time="2025-02-10T12:21:20.530993194Z" level=info msg="Loading containers: start."
	I0210 12:23:10.409130    5644 command_runner.go:130] > Feb 10 12:21:20 multinode-032400 dockerd[652]: time="2025-02-10T12:21:20.796529619Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0210 12:23:10.409130    5644 command_runner.go:130] > Feb 10 12:21:20 multinode-032400 dockerd[652]: time="2025-02-10T12:21:20.946848197Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0210 12:23:10.410122    5644 command_runner.go:130] > Feb 10 12:21:21 multinode-032400 dockerd[652]: time="2025-02-10T12:21:21.063713732Z" level=info msg="Loading containers: done."
	I0210 12:23:10.410122    5644 command_runner.go:130] > Feb 10 12:21:21 multinode-032400 dockerd[652]: time="2025-02-10T12:21:21.090121636Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	I0210 12:23:10.410122    5644 command_runner.go:130] > Feb 10 12:21:21 multinode-032400 dockerd[652]: time="2025-02-10T12:21:21.090236272Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	I0210 12:23:10.410122    5644 command_runner.go:130] > Feb 10 12:21:21 multinode-032400 dockerd[652]: time="2025-02-10T12:21:21.090266381Z" level=info msg="Docker daemon" commit=92a8393 containerd-snapshotter=false storage-driver=overlay2 version=27.4.0
	I0210 12:23:10.410122    5644 command_runner.go:130] > Feb 10 12:21:21 multinode-032400 dockerd[652]: time="2025-02-10T12:21:21.090811448Z" level=info msg="Daemon has completed initialization"
	I0210 12:23:10.410122    5644 command_runner.go:130] > Feb 10 12:21:21 multinode-032400 dockerd[652]: time="2025-02-10T12:21:21.131876651Z" level=info msg="API listen on /var/run/docker.sock"
	I0210 12:23:10.410122    5644 command_runner.go:130] > Feb 10 12:21:21 multinode-032400 dockerd[652]: time="2025-02-10T12:21:21.132103020Z" level=info msg="API listen on [::]:2376"
	I0210 12:23:10.410122    5644 command_runner.go:130] > Feb 10 12:21:21 multinode-032400 systemd[1]: Started Docker Application Container Engine.
	I0210 12:23:10.410122    5644 command_runner.go:130] > Feb 10 12:21:45 multinode-032400 dockerd[652]: time="2025-02-10T12:21:45.024556788Z" level=info msg="Processing signal 'terminated'"
	I0210 12:23:10.410122    5644 command_runner.go:130] > Feb 10 12:21:45 multinode-032400 dockerd[652]: time="2025-02-10T12:21:45.027219616Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0210 12:23:10.410122    5644 command_runner.go:130] > Feb 10 12:21:45 multinode-032400 systemd[1]: Stopping Docker Application Container Engine...
	I0210 12:23:10.410122    5644 command_runner.go:130] > Feb 10 12:21:45 multinode-032400 dockerd[652]: time="2025-02-10T12:21:45.028493777Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0210 12:23:10.410122    5644 command_runner.go:130] > Feb 10 12:21:45 multinode-032400 dockerd[652]: time="2025-02-10T12:21:45.028923098Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0210 12:23:10.410122    5644 command_runner.go:130] > Feb 10 12:21:45 multinode-032400 dockerd[652]: time="2025-02-10T12:21:45.029499825Z" level=info msg="Daemon shutdown complete"
	I0210 12:23:10.410122    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 systemd[1]: docker.service: Deactivated successfully.
	I0210 12:23:10.410122    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 systemd[1]: Stopped Docker Application Container Engine.
	I0210 12:23:10.410122    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 systemd[1]: Starting Docker Application Container Engine...
	I0210 12:23:10.410122    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1101]: time="2025-02-10T12:21:46.084081094Z" level=info msg="Starting up"
	I0210 12:23:10.410122    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1101]: time="2025-02-10T12:21:46.084976538Z" level=info msg="containerd not running, starting managed containerd"
	I0210 12:23:10.410122    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1101]: time="2025-02-10T12:21:46.085890382Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1108
	I0210 12:23:10.410122    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.115367801Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	I0210 12:23:10.410122    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.141577962Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0210 12:23:10.410122    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.141694568Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0210 12:23:10.410122    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.141841575Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0210 12:23:10.410122    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.141861576Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0210 12:23:10.410122    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.141895578Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0210 12:23:10.410122    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.141908978Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0210 12:23:10.410122    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.142072686Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0210 12:23:10.410122    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.142222293Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0210 12:23:10.410122    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.142244195Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0210 12:23:10.410122    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.142261595Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0210 12:23:10.410122    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.142290097Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0210 12:23:10.410122    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.142407302Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0210 12:23:10.410122    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.145701161Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0210 12:23:10.410122    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.145822967Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0210 12:23:10.410122    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.145984775Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0210 12:23:10.410122    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.146081579Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0210 12:23:10.410122    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.146115481Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0210 12:23:10.410122    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.146134282Z" level=info msg="metadata content store policy set" policy=shared
	I0210 12:23:10.410122    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.146552002Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0210 12:23:10.410122    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.146601004Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0210 12:23:10.410122    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.146617705Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0210 12:23:10.411121    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.146633006Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0210 12:23:10.411121    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.146647807Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0210 12:23:10.411121    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.146697109Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0210 12:23:10.411121    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147110429Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0210 12:23:10.411121    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147324539Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0210 12:23:10.411121    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147423444Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0210 12:23:10.411121    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147441845Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0210 12:23:10.411121    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147456345Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0210 12:23:10.411121    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147470646Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0210 12:23:10.411121    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147499048Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0210 12:23:10.411121    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147516448Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0210 12:23:10.411121    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147532049Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0210 12:23:10.411121    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147546750Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0210 12:23:10.411121    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147559350Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0210 12:23:10.411121    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147573151Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0210 12:23:10.411121    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147593252Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0210 12:23:10.411121    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147608153Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0210 12:23:10.411121    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147634954Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0210 12:23:10.411121    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147654755Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0210 12:23:10.411121    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147668856Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0210 12:23:10.411121    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147683556Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0210 12:23:10.411121    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147697257Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0210 12:23:10.411121    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147710658Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0210 12:23:10.411121    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147724858Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0210 12:23:10.411121    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147802262Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0210 12:23:10.411121    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147821763Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0210 12:23:10.411121    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147834964Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0210 12:23:10.411121    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147859465Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0210 12:23:10.411121    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147878466Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0210 12:23:10.411121    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147900267Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0210 12:23:10.411121    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147914067Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0210 12:23:10.411121    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147927668Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0210 12:23:10.411121    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.148050374Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0210 12:23:10.411121    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.148087376Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0210 12:23:10.411121    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.148100476Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0210 12:23:10.412121    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.148113477Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0210 12:23:10.412121    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.148124578Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0210 12:23:10.412121    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.148138778Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0210 12:23:10.412121    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.148151679Z" level=info msg="NRI interface is disabled by configuration."
	I0210 12:23:10.412121    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.148991719Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0210 12:23:10.412121    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.149071923Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0210 12:23:10.412121    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.149146027Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0210 12:23:10.412121    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.149657651Z" level=info msg="containerd successfully booted in 0.035320s"
	I0210 12:23:10.412121    5644 command_runner.go:130] > Feb 10 12:21:47 multinode-032400 dockerd[1101]: time="2025-02-10T12:21:47.124814897Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0210 12:23:10.412121    5644 command_runner.go:130] > Feb 10 12:21:47 multinode-032400 dockerd[1101]: time="2025-02-10T12:21:47.155572178Z" level=info msg="Loading containers: start."
	I0210 12:23:10.412121    5644 command_runner.go:130] > Feb 10 12:21:47 multinode-032400 dockerd[1101]: time="2025-02-10T12:21:47.380096187Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0210 12:23:10.412121    5644 command_runner.go:130] > Feb 10 12:21:47 multinode-032400 dockerd[1101]: time="2025-02-10T12:21:47.494116276Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0210 12:23:10.412121    5644 command_runner.go:130] > Feb 10 12:21:47 multinode-032400 dockerd[1101]: time="2025-02-10T12:21:47.609502830Z" level=info msg="Loading containers: done."
	I0210 12:23:10.412121    5644 command_runner.go:130] > Feb 10 12:21:47 multinode-032400 dockerd[1101]: time="2025-02-10T12:21:47.634336526Z" level=info msg="Docker daemon" commit=92a8393 containerd-snapshotter=false storage-driver=overlay2 version=27.4.0
	I0210 12:23:10.412121    5644 command_runner.go:130] > Feb 10 12:21:47 multinode-032400 dockerd[1101]: time="2025-02-10T12:21:47.634493434Z" level=info msg="Daemon has completed initialization"
	I0210 12:23:10.412121    5644 command_runner.go:130] > Feb 10 12:21:47 multinode-032400 dockerd[1101]: time="2025-02-10T12:21:47.668508371Z" level=info msg="API listen on /var/run/docker.sock"
	I0210 12:23:10.412121    5644 command_runner.go:130] > Feb 10 12:21:47 multinode-032400 dockerd[1101]: time="2025-02-10T12:21:47.668715581Z" level=info msg="API listen on [::]:2376"
	I0210 12:23:10.412121    5644 command_runner.go:130] > Feb 10 12:21:47 multinode-032400 systemd[1]: Started Docker Application Container Engine.
	I0210 12:23:10.412121    5644 command_runner.go:130] > Feb 10 12:21:48 multinode-032400 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0210 12:23:10.412121    5644 command_runner.go:130] > Feb 10 12:21:48 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:21:48Z" level=info msg="Starting cri-dockerd 0.3.15 (c1c566e)"
	I0210 12:23:10.412121    5644 command_runner.go:130] > Feb 10 12:21:48 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:21:48Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0210 12:23:10.412121    5644 command_runner.go:130] > Feb 10 12:21:48 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:21:48Z" level=info msg="Start docker client with request timeout 0s"
	I0210 12:23:10.412121    5644 command_runner.go:130] > Feb 10 12:21:48 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:21:48Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I0210 12:23:10.412121    5644 command_runner.go:130] > Feb 10 12:21:48 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:21:48Z" level=info msg="Loaded network plugin cni"
	I0210 12:23:10.412121    5644 command_runner.go:130] > Feb 10 12:21:48 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:21:48Z" level=info msg="Docker cri networking managed by network plugin cni"
	I0210 12:23:10.412121    5644 command_runner.go:130] > Feb 10 12:21:48 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:21:48Z" level=info msg="Setting cgroupDriver cgroupfs"
	I0210 12:23:10.412121    5644 command_runner.go:130] > Feb 10 12:21:48 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:21:48Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I0210 12:23:10.412121    5644 command_runner.go:130] > Feb 10 12:21:48 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:21:48Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I0210 12:23:10.412121    5644 command_runner.go:130] > Feb 10 12:21:48 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:21:48Z" level=info msg="Start cri-dockerd grpc backend"
	I0210 12:23:10.412121    5644 command_runner.go:130] > Feb 10 12:21:48 multinode-032400 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I0210 12:23:10.412121    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:21:53Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-58667487b6-8shfg_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"ed034f1578c961d966543fd6901ac487893b2d9c55235293f852b6eba2ffef59\""
	I0210 12:23:10.412121    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:21:53Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-668d6bf9bc-w8rr9_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"794995bca6b5b46f3dd1b76ae2e6fa45046bd172772508f61b870824d72e297b\""
	I0210 12:23:10.412121    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:53.688319673Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0210 12:23:10.412121    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:53.688604987Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0210 12:23:10.412121    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:53.688649189Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:10.412121    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:53.689336722Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:10.412121    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:53.785048930Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0210 12:23:10.412121    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:53.785211338Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0210 12:23:10.412121    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:53.785249040Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:10.412121    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:53.787201934Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:10.412121    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:21:53Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8059b20f65945591b4ecc2d3aa8b6e119909c5a5c01922ce471ced5e88f22c37/resolv.conf as [nameserver 172.29.128.1]"
	I0210 12:23:10.412121    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:53.859964137Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0210 12:23:10.412121    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:53.860819978Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0210 12:23:10.412121    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:53.861045089Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:10.413122    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:53.861827326Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:10.413122    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:53.866236838Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0210 12:23:10.413122    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:53.866716362Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0210 12:23:10.413122    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:53.867048178Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:10.413122    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:53.870617949Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:10.413122    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:21:53Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/016ad4d720680495a67c18e1390ee8683611cb3b95ee6ded4cb744a3ca3655d5/resolv.conf as [nameserver 172.29.128.1]"
	I0210 12:23:10.413122    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:21:54Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ae5696c38864ac99a03d829d566b6a832f69523032ff0af02300ad95789380ce/resolv.conf as [nameserver 172.29.128.1]"
	I0210 12:23:10.413122    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:21:54Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/9c3e574a334980f77de3f0fd8bd1af8a3597c32a3c5f9d94fec925b6f3c76d4e/resolv.conf as [nameserver 172.29.128.1]"
	I0210 12:23:10.413122    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:54.054858919Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0210 12:23:10.413122    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:54.055041728Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0210 12:23:10.413122    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:54.055266639Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:10.413122    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:54.055571653Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:10.413122    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:54.351555902Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0210 12:23:10.413122    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:54.351618605Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0210 12:23:10.413122    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:54.351631706Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:10.413122    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:54.351796314Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:10.413122    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:54.356626447Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0210 12:23:10.413122    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:54.356728951Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0210 12:23:10.413122    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:54.356756153Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:10.413122    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:54.357270278Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:10.413122    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:54.400696468Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0210 12:23:10.413122    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:54.400993282Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0210 12:23:10.413122    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:54.401148890Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:10.413122    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:54.401585911Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:10.413122    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:21:58Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	I0210 12:23:10.413122    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:59.586724531Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0210 12:23:10.413122    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:59.586851637Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0210 12:23:10.413122    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:59.586897839Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:10.413122    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:59.587096549Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:10.413122    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:59.622779367Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0210 12:23:10.413122    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:59.622857870Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0210 12:23:10.413122    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:59.622884072Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:10.413122    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:59.623098482Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:10.413122    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:59.638867841Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0210 12:23:10.413122    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:59.639329463Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0210 12:23:10.413122    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:59.639489271Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:10.413122    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:59.639867989Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:10.414123    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:21:59Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b9afdceca416df5c16e84b3e0c78f25ca1fa77413c28fe48e1fe1aceabb91c44/resolv.conf as [nameserver 172.29.128.1]"
	I0210 12:23:10.414123    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:21:59Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/bd998e6ebeb39ea743489a0e4d48c282d6e8da289cd8341c4c5099d9836e6f73/resolv.conf as [nameserver 172.29.128.1]"
	I0210 12:23:10.414123    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:59.937150501Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0210 12:23:10.414123    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:59.937256006Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0210 12:23:10.414123    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:59.937275107Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:10.414123    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:59.937381912Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:10.414123    5644 command_runner.go:130] > Feb 10 12:22:00 multinode-032400 dockerd[1108]: time="2025-02-10T12:22:00.025525655Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0210 12:23:10.414123    5644 command_runner.go:130] > Feb 10 12:22:00 multinode-032400 dockerd[1108]: time="2025-02-10T12:22:00.025702264Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0210 12:23:10.414123    5644 command_runner.go:130] > Feb 10 12:22:00 multinode-032400 dockerd[1108]: time="2025-02-10T12:22:00.025767267Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:10.414123    5644 command_runner.go:130] > Feb 10 12:22:00 multinode-032400 dockerd[1108]: time="2025-02-10T12:22:00.026050381Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:10.414123    5644 command_runner.go:130] > Feb 10 12:22:00 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:22:00Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e5b54589cf0f1effd7254987c6ce12359f402596b5494fa5c8f0bf296c219b89/resolv.conf as [nameserver 172.29.128.1]"
	I0210 12:23:10.414123    5644 command_runner.go:130] > Feb 10 12:22:00 multinode-032400 dockerd[1108]: time="2025-02-10T12:22:00.385763898Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0210 12:23:10.414123    5644 command_runner.go:130] > Feb 10 12:22:00 multinode-032400 dockerd[1108]: time="2025-02-10T12:22:00.385836401Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0210 12:23:10.414123    5644 command_runner.go:130] > Feb 10 12:22:00 multinode-032400 dockerd[1108]: time="2025-02-10T12:22:00.385859502Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:10.414123    5644 command_runner.go:130] > Feb 10 12:22:00 multinode-032400 dockerd[1108]: time="2025-02-10T12:22:00.385961307Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:10.414123    5644 command_runner.go:130] > Feb 10 12:22:30 multinode-032400 dockerd[1101]: time="2025-02-10T12:22:30.686630853Z" level=info msg="ignoring event" container=e57ea4c7f300b864e801bda292b196e3a270d6bd3eb9cfac6bf5f66a858f9f7c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0210 12:23:10.414123    5644 command_runner.go:130] > Feb 10 12:22:30 multinode-032400 dockerd[1108]: time="2025-02-10T12:22:30.687686798Z" level=info msg="shim disconnected" id=e57ea4c7f300b864e801bda292b196e3a270d6bd3eb9cfac6bf5f66a858f9f7c namespace=moby
	I0210 12:23:10.414123    5644 command_runner.go:130] > Feb 10 12:22:30 multinode-032400 dockerd[1108]: time="2025-02-10T12:22:30.687806803Z" level=warning msg="cleaning up after shim disconnected" id=e57ea4c7f300b864e801bda292b196e3a270d6bd3eb9cfac6bf5f66a858f9f7c namespace=moby
	I0210 12:23:10.414123    5644 command_runner.go:130] > Feb 10 12:22:30 multinode-032400 dockerd[1108]: time="2025-02-10T12:22:30.687819404Z" level=info msg="cleaning up dead shim" namespace=moby
	I0210 12:23:10.414123    5644 command_runner.go:130] > Feb 10 12:22:44 multinode-032400 dockerd[1108]: time="2025-02-10T12:22:44.147917374Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0210 12:23:10.414123    5644 command_runner.go:130] > Feb 10 12:22:44 multinode-032400 dockerd[1108]: time="2025-02-10T12:22:44.148327693Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0210 12:23:10.414123    5644 command_runner.go:130] > Feb 10 12:22:44 multinode-032400 dockerd[1108]: time="2025-02-10T12:22:44.148398496Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:10.414123    5644 command_runner.go:130] > Feb 10 12:22:44 multinode-032400 dockerd[1108]: time="2025-02-10T12:22:44.150441890Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:10.414123    5644 command_runner.go:130] > Feb 10 12:23:03 multinode-032400 dockerd[1108]: time="2025-02-10T12:23:03.123856000Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0210 12:23:10.414123    5644 command_runner.go:130] > Feb 10 12:23:03 multinode-032400 dockerd[1108]: time="2025-02-10T12:23:03.127018550Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0210 12:23:10.414123    5644 command_runner.go:130] > Feb 10 12:23:03 multinode-032400 dockerd[1108]: time="2025-02-10T12:23:03.127103354Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:10.414123    5644 command_runner.go:130] > Feb 10 12:23:03 multinode-032400 dockerd[1108]: time="2025-02-10T12:23:03.127439770Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:10.414123    5644 command_runner.go:130] > Feb 10 12:23:03 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:23:03Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e58006549b603113870cd02660dd94559d10ecb0913a1bdd2c81910c3d60a706/resolv.conf as [nameserver 172.29.128.1]"
	I0210 12:23:10.414123    5644 command_runner.go:130] > Feb 10 12:23:03 multinode-032400 dockerd[1108]: time="2025-02-10T12:23:03.402609649Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0210 12:23:10.414123    5644 command_runner.go:130] > Feb 10 12:23:03 multinode-032400 dockerd[1108]: time="2025-02-10T12:23:03.402766755Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0210 12:23:10.414123    5644 command_runner.go:130] > Feb 10 12:23:03 multinode-032400 dockerd[1108]: time="2025-02-10T12:23:03.402783356Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:10.414123    5644 command_runner.go:130] > Feb 10 12:23:03 multinode-032400 dockerd[1108]: time="2025-02-10T12:23:03.402879760Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:10.414123    5644 command_runner.go:130] > Feb 10 12:23:03 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:23:03Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0f8bbdd2fed29af0f92968c554c574d411a6dcf8a8d801926012379ff2a258af/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	I0210 12:23:10.414123    5644 command_runner.go:130] > Feb 10 12:23:03 multinode-032400 dockerd[1108]: time="2025-02-10T12:23:03.617663803Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0210 12:23:10.414123    5644 command_runner.go:130] > Feb 10 12:23:03 multinode-032400 dockerd[1108]: time="2025-02-10T12:23:03.618733746Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0210 12:23:10.414123    5644 command_runner.go:130] > Feb 10 12:23:03 multinode-032400 dockerd[1108]: time="2025-02-10T12:23:03.619050158Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:10.414123    5644 command_runner.go:130] > Feb 10 12:23:03 multinode-032400 dockerd[1108]: time="2025-02-10T12:23:03.619308269Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:10.414123    5644 command_runner.go:130] > Feb 10 12:23:03 multinode-032400 dockerd[1108]: time="2025-02-10T12:23:03.840758177Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0210 12:23:10.414123    5644 command_runner.go:130] > Feb 10 12:23:03 multinode-032400 dockerd[1108]: time="2025-02-10T12:23:03.840943685Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0210 12:23:10.414123    5644 command_runner.go:130] > Feb 10 12:23:03 multinode-032400 dockerd[1108]: time="2025-02-10T12:23:03.840973286Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:10.414123    5644 command_runner.go:130] > Feb 10 12:23:03 multinode-032400 dockerd[1108]: time="2025-02-10T12:23:03.848920902Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
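	The cri-dockerd entries above repeatedly log "Will attempt to re-write config file .../resolv.conf as [nameserver 172.29.128.1]", i.e. each pod sandbox's resolv.conf is rewritten to point at the VM's DNS. As a rough illustration only (not cri-dockerd's actual code; the nameserver value is taken from the log lines above and the path below is hypothetical), a minimal Go sketch of such a rewrite:

	package main

	import (
		"fmt"
		"os"
	)

	// rewriteResolvConf overwrites a container's resolv.conf with a single
	// nameserver entry, roughly what the "Will attempt to re-write config
	// file ..." lines above describe. Illustrative sketch only; cri-dockerd's
	// real implementation handles search domains and options as well.
	func rewriteResolvConf(path, nameserver string) error {
		content := fmt.Sprintf("nameserver %s\n", nameserver)
		return os.WriteFile(path, []byte(content), 0644)
	}

	func main() {
		// Hypothetical path; real container paths appear in the log lines above.
		if err := rewriteResolvConf("/tmp/resolv.conf.example", "172.29.128.1"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}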
	I0210 12:23:10.443126    5644 logs.go:123] Gathering logs for etcd [2c0b97381825] ...
	I0210 12:23:10.443126    5644 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c0b97381825"
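	The gathering step above tails the last 400 lines of the etcd container's logs; the harness runs this inside the VM over SSH. A minimal local sketch of the same collection step (assuming a reachable docker CLI; the container ID is copied from the Run line above), with the captured output continuing below:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Tail a container's recent logs, as the harness does for etcd above.
		// CombinedOutput captures both stdout and stderr streams.
		out, err := exec.Command("docker", "logs", "--tail", "400", "2c0b97381825").CombinedOutput()
		if err != nil {
			fmt.Println("docker logs failed:", err)
		}
		fmt.Print(string(out))
	}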
	I0210 12:23:10.472132    5644 command_runner.go:130] ! {"level":"warn","ts":"2025-02-10T12:21:54.704341Z","caller":"embed/config.go:689","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0210 12:23:10.472344    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.704447Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://172.29.129.181:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://172.29.129.181:2380","--initial-cluster=multinode-032400=https://172.29.129.181:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://172.29.129.181:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://172.29.129.181:2380","--name=multinode-032400","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	I0210 12:23:10.472344    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.704520Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I0210 12:23:10.472344    5644 command_runner.go:130] ! {"level":"warn","ts":"2025-02-10T12:21:54.704892Z","caller":"embed/config.go:689","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0210 12:23:10.472344    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.704933Z","caller":"embed/etcd.go:128","msg":"configuring peer listeners","listen-peer-urls":["https://172.29.129.181:2380"]}
	I0210 12:23:10.472344    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.704972Z","caller":"embed/etcd.go:497","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0210 12:23:10.472344    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.708617Z","caller":"embed/etcd.go:136","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://172.29.129.181:2379"]}
	I0210 12:23:10.472344    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.709796Z","caller":"embed/etcd.go:311","msg":"starting an etcd server","etcd-version":"3.5.16","git-sha":"f20bbad","go-version":"go1.22.7","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"multinode-032400","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://172.29.129.181:2380"],"listen-peer-urls":["https://172.29.129.181:2380"],"advertise-client-urls":["https://172.29.129.181:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.29.129.181:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	I0210 12:23:10.472344    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.729354Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"16.974017ms"}
	I0210 12:23:10.472344    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.755049Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	I0210 12:23:10.472344    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.785036Z","caller":"etcdserver/raft.go:540","msg":"restarting local member","cluster-id":"939f48ba2d0ec869","local-member-id":"9ecc865dcee1fe8f","commit-index":2031}
	I0210 12:23:10.472344    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.786308Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9ecc865dcee1fe8f switched to configuration voters=()"}
	I0210 12:23:10.472344    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.786538Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9ecc865dcee1fe8f became follower at term 2"}
	I0210 12:23:10.472344    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.786684Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 9ecc865dcee1fe8f [peers: [], term: 2, commit: 2031, applied: 0, lastindex: 2031, lastterm: 2]"}
	I0210 12:23:10.472344    5644 command_runner.go:130] ! {"level":"warn","ts":"2025-02-10T12:21:54.799505Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	I0210 12:23:10.472344    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.805220Z","caller":"mvcc/kvstore.go:346","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":1385}
	I0210 12:23:10.472344    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.819723Z","caller":"mvcc/kvstore.go:423","msg":"kvstore restored","current-rev":1757}
	I0210 12:23:10.472344    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.831867Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I0210 12:23:10.472882    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.839898Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"9ecc865dcee1fe8f","timeout":"7s"}
	I0210 12:23:10.472882    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.841678Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"9ecc865dcee1fe8f"}
	I0210 12:23:10.472882    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.841933Z","caller":"etcdserver/server.go:873","msg":"starting etcd server","local-member-id":"9ecc865dcee1fe8f","local-server-version":"3.5.16","cluster-version":"to_be_decided"}
	I0210 12:23:10.472973    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.842749Z","caller":"etcdserver/server.go:773","msg":"starting initial election tick advance","election-ticks":10}
	I0210 12:23:10.473002    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.844230Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	I0210 12:23:10.473002    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.846545Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0210 12:23:10.473057    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.847568Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"9ecc865dcee1fe8f","initial-advertise-peer-urls":["https://172.29.129.181:2380"],"listen-peer-urls":["https://172.29.129.181:2380"],"advertise-client-urls":["https://172.29.129.181:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.29.129.181:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I0210 12:23:10.473121    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.848293Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I0210 12:23:10.473121    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.848857Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I0210 12:23:10.473121    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.848872Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I0210 12:23:10.473190    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.849451Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I0210 12:23:10.473190    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.848623Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9ecc865dcee1fe8f switched to configuration voters=(11442668490702585487)"}
	I0210 12:23:10.473253    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.849568Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"939f48ba2d0ec869","local-member-id":"9ecc865dcee1fe8f","added-peer-id":"9ecc865dcee1fe8f","added-peer-peer-urls":["https://172.29.136.201:2380"]}
	I0210 12:23:10.473253    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.849668Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"939f48ba2d0ec869","local-member-id":"9ecc865dcee1fe8f","cluster-version":"3.5"}
	I0210 12:23:10.473253    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.849697Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	I0210 12:23:10.473331    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.848659Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"172.29.129.181:2380"}
	I0210 12:23:10.473389    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.850838Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"172.29.129.181:2380"}
	I0210 12:23:10.473389    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:56.088224Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9ecc865dcee1fe8f is starting a new election at term 2"}
	I0210 12:23:10.473389    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:56.088502Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9ecc865dcee1fe8f became pre-candidate at term 2"}
	I0210 12:23:10.473389    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:56.088614Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9ecc865dcee1fe8f received MsgPreVoteResp from 9ecc865dcee1fe8f at term 2"}
	I0210 12:23:10.473470    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:56.088706Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9ecc865dcee1fe8f became candidate at term 3"}
	I0210 12:23:10.473506    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:56.088790Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9ecc865dcee1fe8f received MsgVoteResp from 9ecc865dcee1fe8f at term 3"}
	I0210 12:23:10.473506    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:56.088924Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9ecc865dcee1fe8f became leader at term 3"}
	I0210 12:23:10.473570    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:56.089002Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9ecc865dcee1fe8f elected leader 9ecc865dcee1fe8f at term 3"}
	I0210 12:23:10.473570    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:56.095452Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"9ecc865dcee1fe8f","local-member-attributes":"{Name:multinode-032400 ClientURLs:[https://172.29.129.181:2379]}","request-path":"/0/members/9ecc865dcee1fe8f/attributes","cluster-id":"939f48ba2d0ec869","publish-timeout":"7s"}
	I0210 12:23:10.473638    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:56.095630Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0210 12:23:10.473638    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:56.098215Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I0210 12:23:10.473700    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:56.098453Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I0210 12:23:10.473700    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:56.095689Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0210 12:23:10.473700    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:56.109624Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	I0210 12:23:10.473700    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:56.115942Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	I0210 12:23:10.473770    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:56.120127Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	I0210 12:23:10.473770    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:56.126374Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.29.129.181:2379"}
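
The etcd lines above record a clean single-member restart: the WAL is replayed up to commit index 2031, the member comes back as a follower at term 2, wins a pre-vote election to become leader at term 3, and only then starts serving client traffic on 2379. When sifting dumps like this by hand gets tedious, the zap JSON payloads can be decoded mechanically. A minimal Go sketch, assuming only the field names visible in the entries above (the prefix-stripping and warn filter are illustrative, not minikube code):

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
		"strings"
	)

	// zapEntry mirrors the fields visible in the etcd log lines above.
	type zapEntry struct {
		Level  string `json:"level"`
		TS     string `json:"ts"`
		Caller string `json:"caller"`
		Msg    string `json:"msg"`
	}

	func main() {
		sc := bufio.NewScanner(os.Stdin)
		sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // some entries are very long
		for sc.Scan() {
			line := sc.Text()
			// Drop the "I0210 ... command_runner.go:130] !" prefix; keep the JSON tail.
			i := strings.Index(line, "{")
			if i < 0 {
				continue
			}
			var e zapEntry
			if err := json.Unmarshal([]byte(line[i:]), &e); err != nil {
				continue // not a zap JSON entry
			}
			if e.Level == "warn" || e.Level == "error" {
				fmt.Printf("%s %s %s: %s\n", e.TS, e.Level, e.Caller, e.Msg)
			}
		}
	}

Fed the block above, this would surface only the two "Running http and grpc server on single port" warnings and the unsigned-simple-token warning.
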
	I0210 12:23:10.481954    5644 logs.go:123] Gathering logs for coredns [9240ce80f94c] ...
	I0210 12:23:10.481954    5644 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9240ce80f94c"
	I0210 12:23:10.511959    5644 command_runner.go:130] > .:53
	I0210 12:23:10.511959    5644 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = fef4eccd4948b7d76fb3b866fead119cfbf3f792b566bc1c23dd6fceadf676c5da93bdd173179090b2c74bc875a74fc76ef5e51dcb6a910d6a8b189470a4fe6b
	I0210 12:23:10.511959    5644 command_runner.go:130] > CoreDNS-1.11.3
	I0210 12:23:10.511959    5644 command_runner.go:130] > linux/amd64, go1.21.11, a6338e9
	I0210 12:23:10.511959    5644 command_runner.go:130] > [INFO] 127.0.0.1:39941 - 15725 "HINFO IN 3724374125237206977.4573805755432620257. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.053165792s
	I0210 12:23:10.512310    5644 logs.go:123] Gathering logs for kube-proxy [6640b4e3d696] ...
	I0210 12:23:10.512381    5644 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6640b4e3d696"
	I0210 12:23:10.545893    5644 command_runner.go:130] ! I0210 12:22:00.934266       1 server_linux.go:66] "Using iptables proxy"
	I0210 12:23:10.546186    5644 command_runner.go:130] ! E0210 12:22:01.015806       1 proxier.go:733] "Error cleaning up nftables rules" err=<
	I0210 12:23:10.546186    5644 command_runner.go:130] ! 	could not run nftables command: /dev/stdin:1:1-24: Error: Could not process rule: Operation not supported
	I0210 12:23:10.546186    5644 command_runner.go:130] ! 	add table ip kube-proxy
	I0210 12:23:10.546186    5644 command_runner.go:130] ! 	^^^^^^^^^^^^^^^^^^^^^^^^
	I0210 12:23:10.546186    5644 command_runner.go:130] !  >
	I0210 12:23:10.546263    5644 command_runner.go:130] ! E0210 12:22:01.068390       1 proxier.go:733] "Error cleaning up nftables rules" err=<
	I0210 12:23:10.546263    5644 command_runner.go:130] ! 	could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
	I0210 12:23:10.546308    5644 command_runner.go:130] ! 	add table ip6 kube-proxy
	I0210 12:23:10.546308    5644 command_runner.go:130] ! 	^^^^^^^^^^^^^^^^^^^^^^^^^
	I0210 12:23:10.546308    5644 command_runner.go:130] !  >
	I0210 12:23:10.546308    5644 command_runner.go:130] ! I0210 12:22:01.115566       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["172.29.129.181"]
	I0210 12:23:10.546308    5644 command_runner.go:130] ! E0210 12:22:01.115738       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0210 12:23:10.546389    5644 command_runner.go:130] ! I0210 12:22:01.217636       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0210 12:23:10.546419    5644 command_runner.go:130] ! I0210 12:22:01.217732       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0210 12:23:10.546419    5644 command_runner.go:130] ! I0210 12:22:01.217759       1 server_linux.go:170] "Using iptables Proxier"
	I0210 12:23:10.546419    5644 command_runner.go:130] ! I0210 12:22:01.222523       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0210 12:23:10.546493    5644 command_runner.go:130] ! I0210 12:22:01.224100       1 server.go:497] "Version info" version="v1.32.1"
	I0210 12:23:10.546493    5644 command_runner.go:130] ! I0210 12:22:01.224411       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0210 12:23:10.546522    5644 command_runner.go:130] ! I0210 12:22:01.230640       1 config.go:199] "Starting service config controller"
	I0210 12:23:10.546522    5644 command_runner.go:130] ! I0210 12:22:01.232948       1 config.go:105] "Starting endpoint slice config controller"
	I0210 12:23:10.546522    5644 command_runner.go:130] ! I0210 12:22:01.233483       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0210 12:23:10.546592    5644 command_runner.go:130] ! I0210 12:22:01.233538       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0210 12:23:10.546623    5644 command_runner.go:130] ! I0210 12:22:01.243389       1 config.go:329] "Starting node config controller"
	I0210 12:23:10.546623    5644 command_runner.go:130] ! I0210 12:22:01.243415       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0210 12:23:10.546623    5644 command_runner.go:130] ! I0210 12:22:01.335845       1 shared_informer.go:320] Caches are synced for service config
	I0210 12:23:10.546686    5644 command_runner.go:130] ! I0210 12:22:01.335895       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0210 12:23:10.546686    5644 command_runner.go:130] ! I0210 12:22:01.345506       1 shared_informer.go:320] Caches are synced for node config
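
The two kube-proxy errors above ("Error cleaning up nftables rules ... Operation not supported") come from the best-effort nftables cleanup failing on a guest kernel without nft support; the following lines show kube-proxy carrying on with the iptables Proxier, so they are noise rather than a failure. They also illustrate klog's multi-line form: the entry ends in err=<, continuation lines are tab-indented, and a lone > closes the block. A minimal Go sketch that folds such blocks back into single entries (the folding logic is an assumption from the shape above, not part of minikube; it expects the klog text after the minikube prefixes have been stripped):

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	// foldKlog joins klog multi-line blocks of the form `err=<`, tab-indented
	// continuation lines, and a closing ` >` into one logical entry each.
	func foldKlog(lines []string) []string {
		var out []string
		var cur strings.Builder
		in := false
		for _, l := range lines {
			switch {
			case strings.HasSuffix(l, "err=<"):
				in = true
				cur.WriteString(l)
			case in && strings.TrimSpace(l) == ">":
				cur.WriteString(" >")
				out = append(out, cur.String())
				cur.Reset()
				in = false
			case in:
				cur.WriteString(" " + strings.TrimSpace(l))
			default:
				out = append(out, l)
			}
		}
		return out
	}

	func main() {
		var lines []string
		sc := bufio.NewScanner(os.Stdin)
		for sc.Scan() {
			lines = append(lines, sc.Text())
		}
		for _, l := range foldKlog(lines) {
			fmt.Println(l)
		}
	}
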
	I0210 12:23:10.549890    5644 logs.go:123] Gathering logs for kube-controller-manager [9408ce83d7d3] ...
	I0210 12:23:10.549890    5644 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9408ce83d7d3"
	I0210 12:23:10.577900    5644 command_runner.go:130] ! I0210 11:58:59.087911       1 serving.go:386] Generated self-signed cert in-memory
	I0210 12:23:10.578025    5644 command_runner.go:130] ! I0210 11:59:00.079684       1 controllermanager.go:185] "Starting" version="v1.32.1"
	I0210 12:23:10.578025    5644 command_runner.go:130] ! I0210 11:59:00.079828       1 controllermanager.go:187] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0210 12:23:10.578025    5644 command_runner.go:130] ! I0210 11:59:00.082257       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0210 12:23:10.578107    5644 command_runner.go:130] ! I0210 11:59:00.082445       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0210 12:23:10.578107    5644 command_runner.go:130] ! I0210 11:59:00.082714       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0210 12:23:10.578107    5644 command_runner.go:130] ! I0210 11:59:00.083168       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0210 12:23:10.578107    5644 command_runner.go:130] ! I0210 11:59:07.525093       1 controllermanager.go:765] "Started controller" controller="serviceaccount-token-controller"
	I0210 12:23:10.578179    5644 command_runner.go:130] ! I0210 11:59:07.525455       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0210 12:23:10.578179    5644 command_runner.go:130] ! I0210 11:59:07.550577       1 controllermanager.go:765] "Started controller" controller="ttl-controller"
	I0210 12:23:10.578239    5644 command_runner.go:130] ! I0210 11:59:07.550894       1 ttl_controller.go:127] "Starting TTL controller" logger="ttl-controller"
	I0210 12:23:10.578263    5644 command_runner.go:130] ! I0210 11:59:07.550923       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0210 12:23:10.578263    5644 command_runner.go:130] ! I0210 11:59:07.575286       1 controllermanager.go:765] "Started controller" controller="token-cleaner-controller"
	I0210 12:23:10.578263    5644 command_runner.go:130] ! I0210 11:59:07.575386       1 tokencleaner.go:117] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0210 12:23:10.578315    5644 command_runner.go:130] ! I0210 11:59:07.575519       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0210 12:23:10.578367    5644 command_runner.go:130] ! I0210 11:59:07.575529       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0210 12:23:10.578395    5644 command_runner.go:130] ! I0210 11:59:07.608411       1 controllermanager.go:765] "Started controller" controller="ephemeral-volume-controller"
	I0210 12:23:10.578395    5644 command_runner.go:130] ! I0210 11:59:07.608435       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0210 12:23:10.579100    5644 command_runner.go:130] ! I0210 11:59:07.608574       1 controller.go:173] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0210 12:23:10.579100    5644 command_runner.go:130] ! I0210 11:59:07.608594       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0210 12:23:10.579100    5644 command_runner.go:130] ! I0210 11:59:07.626624       1 shared_informer.go:320] Caches are synced for tokens
	I0210 12:23:10.579100    5644 command_runner.go:130] ! I0210 11:59:07.632106       1 controllermanager.go:765] "Started controller" controller="job-controller"
	I0210 12:23:10.579175    5644 command_runner.go:130] ! I0210 11:59:07.632319       1 job_controller.go:243] "Starting job controller" logger="job-controller"
	I0210 12:23:10.579175    5644 command_runner.go:130] ! I0210 11:59:07.632332       1 shared_informer.go:313] Waiting for caches to sync for job
	I0210 12:23:10.579230    5644 command_runner.go:130] ! I0210 11:59:07.694202       1 controllermanager.go:765] "Started controller" controller="cronjob-controller"
	I0210 12:23:10.579230    5644 command_runner.go:130] ! I0210 11:59:07.694994       1 cronjob_controllerv2.go:145] "Starting cronjob controller v2" logger="cronjob-controller"
	I0210 12:23:10.579257    5644 command_runner.go:130] ! I0210 11:59:07.697650       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0210 12:23:10.579257    5644 command_runner.go:130] ! I0210 11:59:07.765406       1 controllermanager.go:765] "Started controller" controller="deployment-controller"
	I0210 12:23:10.579257    5644 command_runner.go:130] ! I0210 11:59:07.765979       1 deployment_controller.go:173] "Starting controller" logger="deployment-controller" controller="deployment"
	I0210 12:23:10.579257    5644 command_runner.go:130] ! I0210 11:59:07.765997       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0210 12:23:10.579322    5644 command_runner.go:130] ! I0210 11:59:07.782342       1 controllermanager.go:765] "Started controller" controller="replicaset-controller"
	I0210 12:23:10.579346    5644 command_runner.go:130] ! I0210 11:59:07.782670       1 replica_set.go:217] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0210 12:23:10.579346    5644 command_runner.go:130] ! I0210 11:59:07.782685       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0210 12:23:10.579346    5644 command_runner.go:130] ! I0210 11:59:07.850466       1 controllermanager.go:765] "Started controller" controller="disruption-controller"
	I0210 12:23:10.579406    5644 command_runner.go:130] ! I0210 11:59:07.850651       1 controllermanager.go:723] "Skipping a cloud provider controller" controller="node-route-controller"
	I0210 12:23:10.579406    5644 command_runner.go:130] ! I0210 11:59:07.850629       1 disruption.go:452] "Sending events to api server." logger="disruption-controller"
	I0210 12:23:10.579406    5644 command_runner.go:130] ! I0210 11:59:07.850833       1 disruption.go:463] "Starting disruption controller" logger="disruption-controller"
	I0210 12:23:10.579406    5644 command_runner.go:130] ! I0210 11:59:07.850844       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0210 12:23:10.579473    5644 command_runner.go:130] ! I0210 11:59:07.880892       1 controllermanager.go:765] "Started controller" controller="persistentvolume-binder-controller"
	I0210 12:23:10.579473    5644 command_runner.go:130] ! I0210 11:59:07.881116       1 pv_controller_base.go:308] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0210 12:23:10.579473    5644 command_runner.go:130] ! I0210 11:59:07.881129       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0210 12:23:10.579473    5644 command_runner.go:130] ! I0210 11:59:07.930262       1 controllermanager.go:765] "Started controller" controller="clusterrole-aggregation-controller"
	I0210 12:23:10.579534    5644 command_runner.go:130] ! I0210 11:59:07.930372       1 clusterroleaggregation_controller.go:194] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0210 12:23:10.579534    5644 command_runner.go:130] ! I0210 11:59:07.930897       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0210 12:23:10.579534    5644 command_runner.go:130] ! I0210 11:59:07.945659       1 controllermanager.go:765] "Started controller" controller="pod-garbage-collector-controller"
	I0210 12:23:10.579593    5644 command_runner.go:130] ! I0210 11:59:07.946579       1 gc_controller.go:99] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0210 12:23:10.579593    5644 command_runner.go:130] ! I0210 11:59:07.946751       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0210 12:23:10.579593    5644 command_runner.go:130] ! I0210 11:59:07.997690       1 controllermanager.go:765] "Started controller" controller="namespace-controller"
	I0210 12:23:10.579653    5644 command_runner.go:130] ! I0210 11:59:07.998189       1 controllermanager.go:723] "Skipping a cloud provider controller" controller="cloud-node-lifecycle-controller"
	I0210 12:23:10.579653    5644 command_runner.go:130] ! I0210 11:59:07.997759       1 namespace_controller.go:202] "Starting namespace controller" logger="namespace-controller"
	I0210 12:23:10.579653    5644 command_runner.go:130] ! I0210 11:59:07.998323       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0210 12:23:10.579653    5644 command_runner.go:130] ! I0210 11:59:08.135040       1 controllermanager.go:765] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0210 12:23:10.579653    5644 command_runner.go:130] ! I0210 11:59:08.135118       1 pvc_protection_controller.go:168] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0210 12:23:10.579653    5644 command_runner.go:130] ! I0210 11:59:08.135130       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0210 12:23:10.579749    5644 command_runner.go:130] ! I0210 11:59:08.290937       1 controllermanager.go:765] "Started controller" controller="persistentvolume-protection-controller"
	I0210 12:23:10.579749    5644 command_runner.go:130] ! I0210 11:59:08.291080       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="kube-apiserver-serving-clustertrustbundle-publisher-controller" requiredFeatureGates=["ClusterTrustBundle"]
	I0210 12:23:10.579749    5644 command_runner.go:130] ! I0210 11:59:08.293569       1 pv_protection_controller.go:81] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0210 12:23:10.579815    5644 command_runner.go:130] ! I0210 11:59:08.293594       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0210 12:23:10.579815    5644 command_runner.go:130] ! I0210 11:59:08.435030       1 controllermanager.go:765] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0210 12:23:10.579815    5644 command_runner.go:130] ! I0210 11:59:08.435146       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0210 12:23:10.579815    5644 command_runner.go:130] ! I0210 11:59:08.435984       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0210 12:23:10.579875    5644 command_runner.go:130] ! I0210 11:59:08.742172       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0210 12:23:10.579875    5644 command_runner.go:130] ! I0210 11:59:08.742257       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0210 12:23:10.579875    5644 command_runner.go:130] ! I0210 11:59:08.742274       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0210 12:23:10.579935    5644 command_runner.go:130] ! I0210 11:59:08.742293       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0210 12:23:10.579935    5644 command_runner.go:130] ! I0210 11:59:08.742308       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0210 12:23:10.579935    5644 command_runner.go:130] ! I0210 11:59:08.742326       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0210 12:23:10.580000    5644 command_runner.go:130] ! I0210 11:59:08.742346       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0210 12:23:10.580032    5644 command_runner.go:130] ! I0210 11:59:08.742463       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0210 12:23:10.580032    5644 command_runner.go:130] ! I0210 11:59:08.742481       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0210 12:23:10.580102    5644 command_runner.go:130] ! I0210 11:59:08.742527       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0210 12:23:10.580102    5644 command_runner.go:130] ! I0210 11:59:08.742551       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0210 12:23:10.580159    5644 command_runner.go:130] ! I0210 11:59:08.742568       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0210 12:23:10.580192    5644 command_runner.go:130] ! I0210 11:59:08.742584       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0210 12:23:10.580213    5644 command_runner.go:130] ! W0210 11:59:08.742597       1 shared_informer.go:597] resyncPeriod 20h8m15.80202588s is smaller than resyncCheckPeriod 22h17m54.981661418s and the informer has already started. Changing it to 22h17m54.981661418s
	I0210 12:23:10.580213    5644 command_runner.go:130] ! I0210 11:59:08.742631       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0210 12:23:10.580213    5644 command_runner.go:130] ! I0210 11:59:08.742652       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0210 12:23:10.580277    5644 command_runner.go:130] ! I0210 11:59:08.742674       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0210 12:23:10.580334    5644 command_runner.go:130] ! W0210 11:59:08.742683       1 shared_informer.go:597] resyncPeriod 18h34m58.865598394s is smaller than resyncCheckPeriod 22h17m54.981661418s and the informer has already started. Changing it to 22h17m54.981661418s
	I0210 12:23:10.581072    5644 command_runner.go:130] ! I0210 11:59:08.742710       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0210 12:23:10.581131    5644 command_runner.go:130] ! I0210 11:59:08.742733       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0210 12:23:10.581131    5644 command_runner.go:130] ! I0210 11:59:08.742757       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0210 12:23:10.581131    5644 command_runner.go:130] ! I0210 11:59:08.742786       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0210 12:23:10.581205    5644 command_runner.go:130] ! I0210 11:59:08.742950       1 controllermanager.go:765] "Started controller" controller="resourcequota-controller"
	I0210 12:23:10.581205    5644 command_runner.go:130] ! I0210 11:59:08.743011       1 resource_quota_controller.go:300] "Starting resource quota controller" logger="resourcequota-controller"
	I0210 12:23:10.581243    5644 command_runner.go:130] ! I0210 11:59:08.743022       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0210 12:23:10.581283    5644 command_runner.go:130] ! I0210 11:59:08.743050       1 resource_quota_monitor.go:308] "QuotaMonitor running" logger="resourcequota-controller"
	I0210 12:23:10.581283    5644 command_runner.go:130] ! I0210 11:59:08.897782       1 controllermanager.go:765] "Started controller" controller="daemonset-controller"
	I0210 12:23:10.581327    5644 command_runner.go:130] ! I0210 11:59:08.898567       1 daemon_controller.go:294] "Starting daemon sets controller" logger="daemonset-controller"
	I0210 12:23:10.581327    5644 command_runner.go:130] ! I0210 11:59:08.898750       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0210 12:23:10.581327    5644 command_runner.go:130] ! W0210 11:59:09.538965       1 type.go:183] The watchlist request for nodes ended with an error, falling back to the standard LIST semantics, err = nodes is forbidden: User "system:serviceaccount:kube-system:node-controller" cannot watch resource "nodes" in API group "" at the cluster scope
	I0210 12:23:10.581393    5644 command_runner.go:130] ! I0210 11:59:09.557948       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0210 12:23:10.581393    5644 command_runner.go:130] ! I0210 11:59:09.558013       1 controllermanager.go:765] "Started controller" controller="node-ipam-controller"
	I0210 12:23:10.581393    5644 command_runner.go:130] ! I0210 11:59:09.558024       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0210 12:23:10.581462    5644 command_runner.go:130] ! I0210 11:59:09.558263       1 node_ipam_controller.go:141] "Starting ipam controller" logger="node-ipam-controller"
	I0210 12:23:10.581462    5644 command_runner.go:130] ! I0210 11:59:09.558274       1 shared_informer.go:313] Waiting for caches to sync for node
	I0210 12:23:10.581462    5644 command_runner.go:130] ! I0210 11:59:09.587543       1 controllermanager.go:765] "Started controller" controller="serviceaccount-controller"
	I0210 12:23:10.581462    5644 command_runner.go:130] ! I0210 11:59:09.587843       1 serviceaccounts_controller.go:114] "Starting service account controller" logger="serviceaccount-controller"
	I0210 12:23:10.581527    5644 command_runner.go:130] ! I0210 11:59:09.587861       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0210 12:23:10.581527    5644 command_runner.go:130] ! I0210 11:59:09.635254       1 garbagecollector.go:144] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0210 12:23:10.581527    5644 command_runner.go:130] ! I0210 11:59:09.635299       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0210 12:23:10.581527    5644 command_runner.go:130] ! I0210 11:59:09.635329       1 graph_builder.go:351] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0210 12:23:10.581592    5644 command_runner.go:130] ! I0210 11:59:09.636160       1 controllermanager.go:765] "Started controller" controller="garbage-collector-controller"
	I0210 12:23:10.581592    5644 command_runner.go:130] ! I0210 11:59:09.814593       1 controllermanager.go:765] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0210 12:23:10.581592    5644 command_runner.go:130] ! I0210 11:59:09.814752       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0210 12:23:10.581592    5644 command_runner.go:130] ! I0210 11:59:09.814770       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0210 12:23:10.581657    5644 command_runner.go:130] ! I0210 11:59:09.817088       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0210 12:23:10.581657    5644 command_runner.go:130] ! I0210 11:59:09.817114       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0210 12:23:10.581657    5644 command_runner.go:130] ! I0210 11:59:09.817159       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0210 12:23:10.581657    5644 command_runner.go:130] ! I0210 11:59:09.817166       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0210 12:23:10.581657    5644 command_runner.go:130] ! I0210 11:59:09.817276       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0210 12:23:10.581657    5644 command_runner.go:130] ! I0210 11:59:09.817288       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0210 12:23:10.581763    5644 command_runner.go:130] ! I0210 11:59:09.817325       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0210 12:23:10.581763    5644 command_runner.go:130] ! I0210 11:59:09.817457       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0210 12:23:10.581763    5644 command_runner.go:130] ! I0210 11:59:09.817598       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0210 12:23:10.581830    5644 command_runner.go:130] ! I0210 11:59:09.817777       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0210 12:23:10.581830    5644 command_runner.go:130] ! I0210 11:59:09.873976       1 controllermanager.go:765] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0210 12:23:10.581830    5644 command_runner.go:130] ! I0210 11:59:09.874097       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0210 12:23:10.581830    5644 command_runner.go:130] ! I0210 11:59:09.874114       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0210 12:23:10.581830    5644 command_runner.go:130] ! I0210 11:59:10.010350       1 controllermanager.go:765] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0210 12:23:10.581908    5644 command_runner.go:130] ! I0210 11:59:10.010713       1 controllermanager.go:743] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0210 12:23:10.581976    5644 command_runner.go:130] ! I0210 11:59:10.010555       1 attach_detach_controller.go:338] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0210 12:23:10.581976    5644 command_runner.go:130] ! I0210 11:59:10.010999       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0210 12:23:10.581976    5644 command_runner.go:130] ! I0210 11:59:10.148245       1 controllermanager.go:765] "Started controller" controller="endpoints-controller"
	I0210 12:23:10.582041    5644 command_runner.go:130] ! I0210 11:59:10.148336       1 endpoints_controller.go:182] "Starting endpoint controller" logger="endpoints-controller"
	I0210 12:23:10.582041    5644 command_runner.go:130] ! I0210 11:59:10.148619       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0210 12:23:10.582041    5644 command_runner.go:130] ! I0210 11:59:10.294135       1 controllermanager.go:765] "Started controller" controller="statefulset-controller"
	I0210 12:23:10.582041    5644 command_runner.go:130] ! I0210 11:59:10.294378       1 stateful_set.go:166] "Starting stateful set controller" logger="statefulset-controller"
	I0210 12:23:10.582106    5644 command_runner.go:130] ! I0210 11:59:10.294395       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0210 12:23:10.582106    5644 command_runner.go:130] ! I0210 11:59:10.455757       1 controllermanager.go:765] "Started controller" controller="persistentvolume-expander-controller"
	I0210 12:23:10.582106    5644 command_runner.go:130] ! I0210 11:59:10.456357       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0210 12:23:10.582106    5644 command_runner.go:130] ! I0210 11:59:10.456388       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0210 12:23:10.582106    5644 command_runner.go:130] ! I0210 11:59:10.617918       1 controllermanager.go:765] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0210 12:23:10.582171    5644 command_runner.go:130] ! I0210 11:59:10.618004       1 publisher.go:107] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0210 12:23:10.582171    5644 command_runner.go:130] ! I0210 11:59:10.618017       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0210 12:23:10.582171    5644 command_runner.go:130] ! I0210 11:59:10.630001       1 controllermanager.go:765] "Started controller" controller="taint-eviction-controller"
	I0210 12:23:10.582238    5644 command_runner.go:130] ! I0210 11:59:10.630344       1 taint_eviction.go:281] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0210 12:23:10.582238    5644 command_runner.go:130] ! I0210 11:59:10.630739       1 taint_eviction.go:287] "Sending events to api server" logger="taint-eviction-controller"
	I0210 12:23:10.582276    5644 command_runner.go:130] ! I0210 11:59:10.630915       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0210 12:23:10.582276    5644 command_runner.go:130] ! I0210 11:59:10.683156       1 node_lifecycle_controller.go:432] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0210 12:23:10.582317    5644 command_runner.go:130] ! I0210 11:59:10.683264       1 controllermanager.go:765] "Started controller" controller="node-lifecycle-controller"
	I0210 12:23:10.582317    5644 command_runner.go:130] ! I0210 11:59:10.683357       1 node_lifecycle_controller.go:466] "Sending events to api server" logger="node-lifecycle-controller"
	I0210 12:23:10.582342    5644 command_runner.go:130] ! I0210 11:59:10.683709       1 node_lifecycle_controller.go:477] "Starting node controller" logger="node-lifecycle-controller"
	I0210 12:23:10.582370    5644 command_runner.go:130] ! I0210 11:59:10.683833       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0210 12:23:10.582403    5644 command_runner.go:130] ! I0210 11:59:10.764503       1 controllermanager.go:765] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0210 12:23:10.582435    5644 command_runner.go:130] ! I0210 11:59:10.764626       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0210 12:23:10.582435    5644 command_runner.go:130] ! I0210 11:59:10.893425       1 controllermanager.go:765] "Started controller" controller="bootstrap-signer-controller"
	I0210 12:23:10.582463    5644 command_runner.go:130] ! I0210 11:59:10.893535       1 controllermanager.go:723] "Skipping a cloud provider controller" controller="service-lb-controller"
	I0210 12:23:10.582463    5644 command_runner.go:130] ! I0210 11:59:10.893547       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="volumeattributesclass-protection-controller" requiredFeatureGates=["VolumeAttributesClass"]
	I0210 12:23:10.582463    5644 command_runner.go:130] ! I0210 11:59:10.893637       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0210 12:23:10.582463    5644 command_runner.go:130] ! I0210 11:59:11.207689       1 controllermanager.go:765] "Started controller" controller="ttl-after-finished-controller"
	I0210 12:23:10.582463    5644 command_runner.go:130] ! I0210 11:59:11.207720       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0210 12:23:10.582463    5644 command_runner.go:130] ! I0210 11:59:11.208285       1 ttlafterfinished_controller.go:112] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0210 12:23:10.582463    5644 command_runner.go:130] ! I0210 11:59:11.208325       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0210 12:23:10.582463    5644 command_runner.go:130] ! I0210 11:59:11.268236       1 controllermanager.go:765] "Started controller" controller="endpointslice-mirroring-controller"
	I0210 12:23:10.582463    5644 command_runner.go:130] ! I0210 11:59:11.268441       1 endpointslicemirroring_controller.go:227] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0210 12:23:10.582463    5644 command_runner.go:130] ! I0210 11:59:11.268458       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0210 12:23:10.582463    5644 command_runner.go:130] ! I0210 11:59:11.834451       1 controllermanager.go:765] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0210 12:23:10.582463    5644 command_runner.go:130] ! I0210 11:59:11.839072       1 horizontal.go:201] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0210 12:23:10.582463    5644 command_runner.go:130] ! I0210 11:59:11.839109       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0210 12:23:10.582463    5644 command_runner.go:130] ! I0210 11:59:11.954065       1 controllermanager.go:765] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0210 12:23:10.582463    5644 command_runner.go:130] ! I0210 11:59:11.954564       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="selinux-warning-controller" requiredFeatureGates=["SELinuxChangePolicy"]
	I0210 12:23:10.582463    5644 command_runner.go:130] ! I0210 11:59:11.954191       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0210 12:23:10.582463    5644 command_runner.go:130] ! I0210 11:59:11.971728       1 controllermanager.go:765] "Started controller" controller="endpointslice-controller"
	I0210 12:23:10.582463    5644 command_runner.go:130] ! I0210 11:59:11.972266       1 endpointslice_controller.go:281] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0210 12:23:10.582463    5644 command_runner.go:130] ! I0210 11:59:11.972442       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0210 12:23:10.582463    5644 command_runner.go:130] ! I0210 11:59:11.988553       1 controllermanager.go:765] "Started controller" controller="replicationcontroller-controller"
	I0210 12:23:10.582463    5644 command_runner.go:130] ! I0210 11:59:11.989935       1 replica_set.go:217] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0210 12:23:10.582463    5644 command_runner.go:130] ! I0210 11:59:11.990037       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0210 12:23:10.582463    5644 command_runner.go:130] ! I0210 11:59:12.002658       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0210 12:23:10.582463    5644 command_runner.go:130] ! I0210 11:59:12.026212       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0210 12:23:10.582463    5644 command_runner.go:130] ! I0210 11:59:12.053411       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-032400\" does not exist"
	I0210 12:23:10.582463    5644 command_runner.go:130] ! I0210 11:59:12.059575       1 shared_informer.go:320] Caches are synced for node
	I0210 12:23:10.582463    5644 command_runner.go:130] ! I0210 11:59:12.059677       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0210 12:23:10.582463    5644 command_runner.go:130] ! I0210 11:59:12.060669       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0210 12:23:10.582463    5644 command_runner.go:130] ! I0210 11:59:12.060694       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0210 12:23:10.582463    5644 command_runner.go:130] ! I0210 11:59:12.060736       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0210 12:23:10.582463    5644 command_runner.go:130] ! I0210 11:59:12.075788       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0210 12:23:10.582463    5644 command_runner.go:130] ! I0210 11:59:12.090277       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0210 12:23:10.582463    5644 command_runner.go:130] ! I0210 11:59:12.093866       1 shared_informer.go:320] Caches are synced for PV protection
	I0210 12:23:10.582463    5644 command_runner.go:130] ! I0210 11:59:12.094251       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-032400" podCIDRs=["10.244.0.0/24"]
	I0210 12:23:10.582463    5644 command_runner.go:130] ! I0210 11:59:12.094298       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400"
	I0210 12:23:10.582463    5644 command_runner.go:130] ! I0210 11:59:12.094445       1 shared_informer.go:320] Caches are synced for stateful set
	I0210 12:23:10.582463    5644 command_runner.go:130] ! I0210 11:59:12.094647       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400"
	I0210 12:23:10.583000    5644 command_runner.go:130] ! I0210 11:59:12.094787       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0210 12:23:10.583000    5644 command_runner.go:130] ! I0210 11:59:12.098777       1 shared_informer.go:320] Caches are synced for cronjob
	I0210 12:23:10.583000    5644 command_runner.go:130] ! I0210 11:59:12.099001       1 shared_informer.go:320] Caches are synced for namespace
	I0210 12:23:10.583044    5644 command_runner.go:130] ! I0210 11:59:12.099016       1 shared_informer.go:320] Caches are synced for daemon sets
	I0210 12:23:10.583044    5644 command_runner.go:130] ! I0210 11:59:12.103407       1 shared_informer.go:320] Caches are synced for resource quota
	I0210 12:23:10.583044    5644 command_runner.go:130] ! I0210 11:59:12.108852       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0210 12:23:10.583044    5644 command_runner.go:130] ! I0210 11:59:12.108917       1 shared_informer.go:320] Caches are synced for ephemeral
	I0210 12:23:10.583044    5644 command_runner.go:130] ! I0210 11:59:12.111199       1 shared_informer.go:320] Caches are synced for attach detach
	I0210 12:23:10.583101    5644 command_runner.go:130] ! I0210 11:59:12.115876       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0210 12:23:10.583101    5644 command_runner.go:130] ! I0210 11:59:12.117732       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0210 12:23:10.583101    5644 command_runner.go:130] ! I0210 11:59:12.117858       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0210 12:23:10.583101    5644 command_runner.go:130] ! I0210 11:59:12.117925       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0210 12:23:10.583101    5644 command_runner.go:130] ! I0210 11:59:12.118059       1 shared_informer.go:320] Caches are synced for crt configmap
	I0210 12:23:10.583101    5644 command_runner.go:130] ! I0210 11:59:12.127026       1 shared_informer.go:320] Caches are synced for garbage collector
	I0210 12:23:10.583165    5644 command_runner.go:130] ! I0210 11:59:12.132202       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0210 12:23:10.583165    5644 command_runner.go:130] ! I0210 11:59:12.132293       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0210 12:23:10.583189    5644 command_runner.go:130] ! I0210 11:59:12.132357       1 shared_informer.go:320] Caches are synced for job
	I0210 12:23:10.583189    5644 command_runner.go:130] ! I0210 11:59:12.136457       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0210 12:23:10.583189    5644 command_runner.go:130] ! I0210 11:59:12.136477       1 shared_informer.go:320] Caches are synced for PVC protection
	I0210 12:23:10.583189    5644 command_runner.go:130] ! I0210 11:59:12.136864       1 shared_informer.go:320] Caches are synced for garbage collector
	I0210 12:23:10.583256    5644 command_runner.go:130] ! I0210 11:59:12.137022       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0210 12:23:10.583256    5644 command_runner.go:130] ! I0210 11:59:12.137034       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0210 12:23:10.583256    5644 command_runner.go:130] ! I0210 11:59:12.140123       1 shared_informer.go:320] Caches are synced for HPA
	I0210 12:23:10.583256    5644 command_runner.go:130] ! I0210 11:59:12.143611       1 shared_informer.go:320] Caches are synced for resource quota
	I0210 12:23:10.583256    5644 command_runner.go:130] ! I0210 11:59:12.146959       1 shared_informer.go:320] Caches are synced for GC
	I0210 12:23:10.583324    5644 command_runner.go:130] ! I0210 11:59:12.149917       1 shared_informer.go:320] Caches are synced for endpoint
	I0210 12:23:10.583324    5644 command_runner.go:130] ! I0210 11:59:12.151583       1 shared_informer.go:320] Caches are synced for TTL
	I0210 12:23:10.583324    5644 command_runner.go:130] ! I0210 11:59:12.151756       1 shared_informer.go:320] Caches are synced for disruption
	I0210 12:23:10.583324    5644 command_runner.go:130] ! I0210 11:59:12.155408       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0210 12:23:10.583389    5644 command_runner.go:130] ! I0210 11:59:12.156838       1 shared_informer.go:320] Caches are synced for expand
	I0210 12:23:10.583389    5644 command_runner.go:130] ! I0210 11:59:12.166263       1 shared_informer.go:320] Caches are synced for deployment
	I0210 12:23:10.583389    5644 command_runner.go:130] ! I0210 11:59:12.169607       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0210 12:23:10.583389    5644 command_runner.go:130] ! I0210 11:59:12.173266       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0210 12:23:10.583389    5644 command_runner.go:130] ! I0210 11:59:12.183228       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0210 12:23:10.583457    5644 command_runner.go:130] ! I0210 11:59:12.183461       1 shared_informer.go:320] Caches are synced for persistent volume
	I0210 12:23:10.583457    5644 command_runner.go:130] ! I0210 11:59:12.184165       1 shared_informer.go:320] Caches are synced for taint
	I0210 12:23:10.583457    5644 command_runner.go:130] ! I0210 11:59:12.184514       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0210 12:23:10.583457    5644 command_runner.go:130] ! I0210 11:59:12.185265       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-032400"
	I0210 12:23:10.583521    5644 command_runner.go:130] ! I0210 11:59:12.186883       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I0210 12:23:10.583521    5644 command_runner.go:130] ! I0210 11:59:12.189882       1 shared_informer.go:320] Caches are synced for service account
	I0210 12:23:10.583521    5644 command_runner.go:130] ! I0210 11:59:12.964659       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400"
	I0210 12:23:10.583583    5644 command_runner.go:130] ! I0210 11:59:14.306836       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="1.342470129s"
	I0210 12:23:10.583583    5644 command_runner.go:130] ! I0210 11:59:14.421918       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="114.771421ms"
	I0210 12:23:10.583583    5644 command_runner.go:130] ! I0210 11:59:14.422243       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="109.5µs"
	I0210 12:23:10.583648    5644 command_runner.go:130] ! I0210 11:59:14.423300       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="45.7µs"
	I0210 12:23:10.583648    5644 command_runner.go:130] ! I0210 11:59:15.150166       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="328.244339ms"
	I0210 12:23:10.583648    5644 command_runner.go:130] ! I0210 11:59:15.175057       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="24.827249ms"
	I0210 12:23:10.583708    5644 command_runner.go:130] ! I0210 11:59:15.175285       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="52.7µs"
	I0210 12:23:10.583730    5644 command_runner.go:130] ! I0210 11:59:38.469109       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400"
	I0210 12:23:10.583757    5644 command_runner.go:130] ! I0210 11:59:41.029106       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400"
	I0210 12:23:10.583757    5644 command_runner.go:130] ! I0210 11:59:41.056002       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400"
	I0210 12:23:10.583793    5644 command_runner.go:130] ! I0210 11:59:41.223446       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="83.5µs"
	I0210 12:23:10.583793    5644 command_runner.go:130] ! I0210 11:59:42.192695       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0210 12:23:10.583832    5644 command_runner.go:130] ! I0210 11:59:43.176439       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="220.4µs"
	I0210 12:23:10.583865    5644 command_runner.go:130] ! I0210 11:59:45.142362       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="156.401µs"
	I0210 12:23:10.583865    5644 command_runner.go:130] ! I0210 11:59:46.978311       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="109.784549ms"
	I0210 12:23:10.583903    5644 command_runner.go:130] ! I0210 11:59:46.978923       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="160.001µs"
	I0210 12:23:10.583903    5644 command_runner.go:130] ! I0210 12:02:24.351602       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-032400-m02\" does not exist"
	I0210 12:23:10.583959    5644 command_runner.go:130] ! I0210 12:02:24.372872       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-032400-m02" podCIDRs=["10.244.1.0/24"]
	I0210 12:23:10.583959    5644 command_runner.go:130] ! I0210 12:02:24.372982       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:23:10.583959    5644 command_runner.go:130] ! I0210 12:02:24.373016       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:23:10.583959    5644 command_runner.go:130] ! I0210 12:02:24.386686       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:23:10.584038    5644 command_runner.go:130] ! I0210 12:02:24.500791       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:23:10.584038    5644 command_runner.go:130] ! I0210 12:02:25.042334       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:23:10.584038    5644 command_runner.go:130] ! I0210 12:02:27.223123       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-032400-m02"
	I0210 12:23:10.584038    5644 command_runner.go:130] ! I0210 12:02:27.269202       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:23:10.584106    5644 command_runner.go:130] ! I0210 12:02:34.686149       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:23:10.584106    5644 command_runner.go:130] ! I0210 12:02:55.584256       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:23:10.584172    5644 command_runner.go:130] ! I0210 12:02:58.900478       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-032400-m02"
	I0210 12:23:10.584172    5644 command_runner.go:130] ! I0210 12:02:58.901463       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:23:10.584242    5644 command_runner.go:130] ! I0210 12:02:58.923096       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:23:10.584242    5644 command_runner.go:130] ! I0210 12:03:02.254436       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:23:10.584242    5644 command_runner.go:130] ! I0210 12:03:23.965178       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="118.072754ms"
	I0210 12:23:10.584242    5644 command_runner.go:130] ! I0210 12:03:23.989777       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="24.546974ms"
	I0210 12:23:10.584320    5644 command_runner.go:130] ! I0210 12:03:23.990705       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="86.6µs"
	I0210 12:23:10.584320    5644 command_runner.go:130] ! I0210 12:03:23.998308       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="47.1µs"
	I0210 12:23:10.584320    5644 command_runner.go:130] ! I0210 12:03:26.707538       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="12.348343ms"
	I0210 12:23:10.584320    5644 command_runner.go:130] ! I0210 12:03:26.708146       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="79.501µs"
	I0210 12:23:10.584390    5644 command_runner.go:130] ! I0210 12:03:27.077137       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="9.878814ms"
	I0210 12:23:10.584390    5644 command_runner.go:130] ! I0210 12:03:27.077509       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="53.8µs"
	I0210 12:23:10.584390    5644 command_runner.go:130] ! I0210 12:03:43.387780       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400"
	I0210 12:23:10.584462    5644 command_runner.go:130] ! I0210 12:03:56.589176       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:23:10.584462    5644 command_runner.go:130] ! I0210 12:07:05.733007       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-032400-m03\" does not exist"
	I0210 12:23:10.584462    5644 command_runner.go:130] ! I0210 12:07:05.733621       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-032400-m02"
	I0210 12:23:10.584462    5644 command_runner.go:130] ! I0210 12:07:05.776872       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-032400-m03" podCIDRs=["10.244.2.0/24"]
	I0210 12:23:10.584530    5644 command_runner.go:130] ! I0210 12:07:05.777009       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:10.584530    5644 command_runner.go:130] ! E0210 12:07:05.833973       1 range_allocator.go:433] "Failed to update node PodCIDR after multiple attempts" err="failed to patch node CIDR: Node \"multinode-032400-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.3.0/24\", \"10.244.2.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="multinode-032400-m03" podCIDRs=["10.244.3.0/24"]
	I0210 12:23:10.584596    5644 command_runner.go:130] ! E0210 12:07:05.834115       1 range_allocator.go:439] "CIDR assignment for node failed. Releasing allocated CIDR" err="failed to patch node CIDR: Node \"multinode-032400-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.3.0/24\", \"10.244.2.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="multinode-032400-m03"
	I0210 12:23:10.584664    5644 command_runner.go:130] ! E0210 12:07:05.834184       1 range_allocator.go:252] "Unhandled Error" err="error syncing 'multinode-032400-m03': failed to patch node CIDR: Node \"multinode-032400-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.3.0/24\", \"10.244.2.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid], requeuing" logger="UnhandledError"
	I0210 12:23:10.584664    5644 command_runner.go:130] ! I0210 12:07:05.834211       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:10.584664    5644 command_runner.go:130] ! I0210 12:07:05.839673       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:10.584664    5644 command_runner.go:130] ! I0210 12:07:06.048438       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:10.584664    5644 command_runner.go:130] ! I0210 12:07:06.603626       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:10.584737    5644 command_runner.go:130] ! I0210 12:07:07.285160       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-032400-m03"
	I0210 12:23:10.584737    5644 command_runner.go:130] ! I0210 12:07:07.401415       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:10.584737    5644 command_runner.go:130] ! I0210 12:07:15.795765       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:10.584801    5644 command_runner.go:130] ! I0210 12:07:34.465645       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:10.584801    5644 command_runner.go:130] ! I0210 12:07:34.466343       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-032400-m02"
	I0210 12:23:10.584801    5644 command_runner.go:130] ! I0210 12:07:34.484609       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:10.584801    5644 command_runner.go:130] ! I0210 12:07:36.177851       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:10.584865    5644 command_runner.go:130] ! I0210 12:07:37.325936       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:10.584897    5644 command_runner.go:130] ! I0210 12:08:11.294432       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:23:10.584897    5644 command_runner.go:130] ! I0210 12:09:09.390735       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400"
	I0210 12:23:10.584930    5644 command_runner.go:130] ! I0210 12:10:40.526492       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:10.584953    5644 command_runner.go:130] ! I0210 12:13:17.755688       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:23:10.584974    5644 command_runner.go:130] ! I0210 12:14:15.383603       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400"
	I0210 12:23:10.585007    5644 command_runner.go:130] ! I0210 12:15:17.429501       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:10.585045    5644 command_runner.go:130] ! I0210 12:15:17.429973       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-032400-m02"
	I0210 12:23:10.585077    5644 command_runner.go:130] ! I0210 12:15:17.456736       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:10.585077    5644 command_runner.go:130] ! I0210 12:15:22.504479       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:10.585107    5644 command_runner.go:130] ! I0210 12:17:20.800246       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:10.585134    5644 command_runner.go:130] ! I0210 12:17:20.822743       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:10.585134    5644 command_runner.go:130] ! I0210 12:17:25.699359       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-032400-m02"
	I0210 12:23:10.585134    5644 command_runner.go:130] ! I0210 12:17:31.139874       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-032400-m03\" does not exist"
	I0210 12:23:10.585134    5644 command_runner.go:130] ! I0210 12:17:31.140321       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-032400-m02"
	I0210 12:23:10.585134    5644 command_runner.go:130] ! I0210 12:17:31.172716       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-032400-m03" podCIDRs=["10.244.4.0/24"]
	I0210 12:23:10.585134    5644 command_runner.go:130] ! I0210 12:17:31.172758       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:10.585134    5644 command_runner.go:130] ! I0210 12:17:31.172780       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:10.585134    5644 command_runner.go:130] ! I0210 12:17:31.210555       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:10.585134    5644 command_runner.go:130] ! I0210 12:17:31.721125       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:10.585134    5644 command_runner.go:130] ! I0210 12:17:32.574669       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:10.585134    5644 command_runner.go:130] ! I0210 12:17:41.246655       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:10.585134    5644 command_runner.go:130] ! I0210 12:17:46.096472       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-032400-m02"
	I0210 12:23:10.585134    5644 command_runner.go:130] ! I0210 12:17:46.097093       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:10.585134    5644 command_runner.go:130] ! I0210 12:17:46.112921       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:10.585134    5644 command_runner.go:130] ! I0210 12:17:47.511367       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:10.585134    5644 command_runner.go:130] ! I0210 12:18:23.730002       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:23:10.585134    5644 command_runner.go:130] ! I0210 12:19:22.675608       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400"
	I0210 12:23:10.585134    5644 command_runner.go:130] ! I0210 12:19:27.542457       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-032400-m02"
	I0210 12:23:10.585134    5644 command_runner.go:130] ! I0210 12:19:27.543090       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:10.585134    5644 command_runner.go:130] ! I0210 12:19:27.562043       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:10.585134    5644 command_runner.go:130] ! I0210 12:19:32.704197       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
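The kube-controller-manager log above is dominated by routine node-ipam-controller syncs, but the E-level entries at 12:07:05 record a real fault: after multinode-032400-m03 re-registered, the allocator tried to patch 10.244.3.0/24 onto a node that already held 10.244.2.0/24, which the API server rejects (at most one PodCIDR per IP family, and podCIDR may only change from empty to a valid value); the released CIDR is later reassigned as 10.244.4.0/24 at 12:17:31. As an illustrative check against a live copy of this cluster (not part of the recorded run), the CIDR actually bound to the node can be read with:

    kubectl get node multinode-032400-m03 -o jsonpath='{.spec.podCIDRs}'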
	I0210 12:23:10.606799    5644 logs.go:123] Gathering logs for dmesg ...
	I0210 12:23:10.606799    5644 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 12:23:10.628398    5644 command_runner.go:130] > [Feb10 12:20] You have booted with nomodeset. This means your GPU drivers are DISABLED
	I0210 12:23:10.628398    5644 command_runner.go:130] > [  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	I0210 12:23:10.628398    5644 command_runner.go:130] > [  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	I0210 12:23:10.628398    5644 command_runner.go:130] > [  +0.108726] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	I0210 12:23:10.628918    5644 command_runner.go:130] > [  +0.024202] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	I0210 12:23:10.628949    5644 command_runner.go:130] > [  +0.000005] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	I0210 12:23:10.628949    5644 command_runner.go:130] > [  +0.000001] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	I0210 12:23:10.629009    5644 command_runner.go:130] > [  +0.062099] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	I0210 12:23:10.629009    5644 command_runner.go:130] > [  +0.027667] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
	I0210 12:23:10.629009    5644 command_runner.go:130] >               * this clock source is slow. Consider trying other clock sources
	I0210 12:23:10.629009    5644 command_runner.go:130] > [  +6.593821] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I0210 12:23:10.629009    5644 command_runner.go:130] > [  +1.312003] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	I0210 12:23:10.629009    5644 command_runner.go:130] > [  +1.333341] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	I0210 12:23:10.629009    5644 command_runner.go:130] > [  +7.447705] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	I0210 12:23:10.629009    5644 command_runner.go:130] > [  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	I0210 12:23:10.629009    5644 command_runner.go:130] > [  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	I0210 12:23:10.629009    5644 command_runner.go:130] > [Feb10 12:21] systemd-fstab-generator[632]: Ignoring "noauto" option for root device
	I0210 12:23:10.629009    5644 command_runner.go:130] > [  +0.179509] systemd-fstab-generator[644]: Ignoring "noauto" option for root device
	I0210 12:23:10.629009    5644 command_runner.go:130] > [ +24.780658] systemd-fstab-generator[1027]: Ignoring "noauto" option for root device
	I0210 12:23:10.629009    5644 command_runner.go:130] > [  +0.095136] kauditd_printk_skb: 75 callbacks suppressed
	I0210 12:23:10.629009    5644 command_runner.go:130] > [  +0.491986] systemd-fstab-generator[1067]: Ignoring "noauto" option for root device
	I0210 12:23:10.629009    5644 command_runner.go:130] > [  +0.201276] systemd-fstab-generator[1079]: Ignoring "noauto" option for root device
	I0210 12:23:10.629009    5644 command_runner.go:130] > [  +0.245181] systemd-fstab-generator[1093]: Ignoring "noauto" option for root device
	I0210 12:23:10.629009    5644 command_runner.go:130] > [  +2.938213] systemd-fstab-generator[1334]: Ignoring "noauto" option for root device
	I0210 12:23:10.629009    5644 command_runner.go:130] > [  +0.187151] systemd-fstab-generator[1346]: Ignoring "noauto" option for root device
	I0210 12:23:10.629009    5644 command_runner.go:130] > [  +0.177538] systemd-fstab-generator[1358]: Ignoring "noauto" option for root device
	I0210 12:23:10.629009    5644 command_runner.go:130] > [  +0.257474] systemd-fstab-generator[1373]: Ignoring "noauto" option for root device
	I0210 12:23:10.629009    5644 command_runner.go:130] > [  +0.873551] systemd-fstab-generator[1498]: Ignoring "noauto" option for root device
	I0210 12:23:10.629009    5644 command_runner.go:130] > [  +0.096158] kauditd_printk_skb: 206 callbacks suppressed
	I0210 12:23:10.629009    5644 command_runner.go:130] > [  +3.180590] systemd-fstab-generator[1641]: Ignoring "noauto" option for root device
	I0210 12:23:10.629009    5644 command_runner.go:130] > [  +1.822530] kauditd_printk_skb: 69 callbacks suppressed
	I0210 12:23:10.629009    5644 command_runner.go:130] > [  +5.251804] kauditd_printk_skb: 5 callbacks suppressed
	I0210 12:23:10.629009    5644 command_runner.go:130] > [Feb10 12:22] systemd-fstab-generator[2523]: Ignoring "noauto" option for root device
	I0210 12:23:10.629009    5644 command_runner.go:130] > [ +27.335254] kauditd_printk_skb: 72 callbacks suppressed
	I0210 12:23:10.630799    5644 logs.go:123] Gathering logs for describe nodes ...
	I0210 12:23:10.630799    5644 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0210 12:23:10.813774    5644 command_runner.go:130] > Name:               multinode-032400
	I0210 12:23:10.813774    5644 command_runner.go:130] > Roles:              control-plane
	I0210 12:23:10.813848    5644 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0210 12:23:10.813848    5644 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0210 12:23:10.813848    5644 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0210 12:23:10.813848    5644 command_runner.go:130] >                     kubernetes.io/hostname=multinode-032400
	I0210 12:23:10.813848    5644 command_runner.go:130] >                     kubernetes.io/os=linux
	I0210 12:23:10.813848    5644 command_runner.go:130] >                     minikube.k8s.io/commit=a597502568cd649748018b4cfeb698a4b8b36160
	I0210 12:23:10.813848    5644 command_runner.go:130] >                     minikube.k8s.io/name=multinode-032400
	I0210 12:23:10.813910    5644 command_runner.go:130] >                     minikube.k8s.io/primary=true
	I0210 12:23:10.813910    5644 command_runner.go:130] >                     minikube.k8s.io/updated_at=2025_02_10T11_59_08_0700
	I0210 12:23:10.813910    5644 command_runner.go:130] >                     minikube.k8s.io/version=v1.35.0
	I0210 12:23:10.813910    5644 command_runner.go:130] >                     node-role.kubernetes.io/control-plane=
	I0210 12:23:10.813910    5644 command_runner.go:130] >                     node.kubernetes.io/exclude-from-external-load-balancers=
	I0210 12:23:10.813968    5644 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0210 12:23:10.813968    5644 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0210 12:23:10.813968    5644 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0210 12:23:10.814031    5644 command_runner.go:130] > CreationTimestamp:  Mon, 10 Feb 2025 11:59:02 +0000
	I0210 12:23:10.814031    5644 command_runner.go:130] > Taints:             <none>
	I0210 12:23:10.814031    5644 command_runner.go:130] > Unschedulable:      false
	I0210 12:23:10.814031    5644 command_runner.go:130] > Lease:
	I0210 12:23:10.814031    5644 command_runner.go:130] >   HolderIdentity:  multinode-032400
	I0210 12:23:10.814031    5644 command_runner.go:130] >   AcquireTime:     <unset>
	I0210 12:23:10.814031    5644 command_runner.go:130] >   RenewTime:       Mon, 10 Feb 2025 12:23:09 +0000
	I0210 12:23:10.814092    5644 command_runner.go:130] > Conditions:
	I0210 12:23:10.814092    5644 command_runner.go:130] >   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	I0210 12:23:10.814092    5644 command_runner.go:130] >   ----             ------  -----------------                 ------------------                ------                       -------
	I0210 12:23:10.814092    5644 command_runner.go:130] >   MemoryPressure   False   Mon, 10 Feb 2025 12:22:48 +0000   Mon, 10 Feb 2025 11:59:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	I0210 12:23:10.814151    5644 command_runner.go:130] >   DiskPressure     False   Mon, 10 Feb 2025 12:22:48 +0000   Mon, 10 Feb 2025 11:59:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	I0210 12:23:10.814188    5644 command_runner.go:130] >   PIDPressure      False   Mon, 10 Feb 2025 12:22:48 +0000   Mon, 10 Feb 2025 11:59:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	I0210 12:23:10.814202    5644 command_runner.go:130] >   Ready            True    Mon, 10 Feb 2025 12:22:48 +0000   Mon, 10 Feb 2025 12:22:48 +0000   KubeletReady                 kubelet is posting ready status
	I0210 12:23:10.814202    5644 command_runner.go:130] > Addresses:
	I0210 12:23:10.814202    5644 command_runner.go:130] >   InternalIP:  172.29.129.181
	I0210 12:23:10.814251    5644 command_runner.go:130] >   Hostname:    multinode-032400
	I0210 12:23:10.814284    5644 command_runner.go:130] > Capacity:
	I0210 12:23:10.814296    5644 command_runner.go:130] >   cpu:                2
	I0210 12:23:10.814296    5644 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0210 12:23:10.814296    5644 command_runner.go:130] >   hugepages-2Mi:      0
	I0210 12:23:10.814296    5644 command_runner.go:130] >   memory:             2164264Ki
	I0210 12:23:10.814296    5644 command_runner.go:130] >   pods:               110
	I0210 12:23:10.814296    5644 command_runner.go:130] > Allocatable:
	I0210 12:23:10.814296    5644 command_runner.go:130] >   cpu:                2
	I0210 12:23:10.814379    5644 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0210 12:23:10.814379    5644 command_runner.go:130] >   hugepages-2Mi:      0
	I0210 12:23:10.814379    5644 command_runner.go:130] >   memory:             2164264Ki
	I0210 12:23:10.814379    5644 command_runner.go:130] >   pods:               110
	I0210 12:23:10.814379    5644 command_runner.go:130] > System Info:
	I0210 12:23:10.814438    5644 command_runner.go:130] >   Machine ID:                 bd9d6a2450e6476eba748aa8fab044bb
	I0210 12:23:10.814438    5644 command_runner.go:130] >   System UUID:                43aa2284-9342-094e-a67f-da5d9d45fabd
	I0210 12:23:10.814438    5644 command_runner.go:130] >   Boot ID:                    f18b3f97-a44b-4ae9-ad96-af2bf29d6df2
	I0210 12:23:10.814438    5644 command_runner.go:130] >   Kernel Version:             5.10.207
	I0210 12:23:10.814438    5644 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0210 12:23:10.814438    5644 command_runner.go:130] >   Operating System:           linux
	I0210 12:23:10.814500    5644 command_runner.go:130] >   Architecture:               amd64
	I0210 12:23:10.814500    5644 command_runner.go:130] >   Container Runtime Version:  docker://27.4.0
	I0210 12:23:10.814500    5644 command_runner.go:130] >   Kubelet Version:            v1.32.1
	I0210 12:23:10.814500    5644 command_runner.go:130] >   Kube-Proxy Version:         v1.32.1
	I0210 12:23:10.814500    5644 command_runner.go:130] > PodCIDR:                      10.244.0.0/24
	I0210 12:23:10.814559    5644 command_runner.go:130] > PodCIDRs:                     10.244.0.0/24
	I0210 12:23:10.814580    5644 command_runner.go:130] > Non-terminated Pods:          (9 in total)
	I0210 12:23:10.814580    5644 command_runner.go:130] >   Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0210 12:23:10.814606    5644 command_runner.go:130] >   ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	I0210 12:23:10.814606    5644 command_runner.go:130] >   default                     busybox-58667487b6-8shfg                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	I0210 12:23:10.814606    5644 command_runner.go:130] >   kube-system                 coredns-668d6bf9bc-w8rr9                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     23m
	I0210 12:23:10.814606    5644 command_runner.go:130] >   kube-system                 etcd-multinode-032400                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         72s
	I0210 12:23:10.814606    5644 command_runner.go:130] >   kube-system                 kindnet-c2mb8                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      23m
	I0210 12:23:10.814606    5644 command_runner.go:130] >   kube-system                 kube-apiserver-multinode-032400             250m (12%)    0 (0%)      0 (0%)           0 (0%)         72s
	I0210 12:23:10.814606    5644 command_runner.go:130] >   kube-system                 kube-controller-manager-multinode-032400    200m (10%)    0 (0%)      0 (0%)           0 (0%)         24m
	I0210 12:23:10.814606    5644 command_runner.go:130] >   kube-system                 kube-proxy-rrh82                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	I0210 12:23:10.814606    5644 command_runner.go:130] >   kube-system                 kube-scheduler-multinode-032400             100m (5%)     0 (0%)      0 (0%)           0 (0%)         24m
	I0210 12:23:10.814606    5644 command_runner.go:130] >   kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	I0210 12:23:10.814606    5644 command_runner.go:130] > Allocated resources:
	I0210 12:23:10.814606    5644 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0210 12:23:10.814606    5644 command_runner.go:130] >   Resource           Requests     Limits
	I0210 12:23:10.814606    5644 command_runner.go:130] >   --------           --------     ------
	I0210 12:23:10.814606    5644 command_runner.go:130] >   cpu                850m (42%)   100m (5%)
	I0210 12:23:10.814606    5644 command_runner.go:130] >   memory             220Mi (10%)  220Mi (10%)
	I0210 12:23:10.814606    5644 command_runner.go:130] >   ephemeral-storage  0 (0%)       0 (0%)
	I0210 12:23:10.814606    5644 command_runner.go:130] >   hugepages-2Mi      0 (0%)       0 (0%)
	I0210 12:23:10.814606    5644 command_runner.go:130] > Events:
	I0210 12:23:10.814606    5644 command_runner.go:130] >   Type     Reason                   Age                From             Message
	I0210 12:23:10.814606    5644 command_runner.go:130] >   ----     ------                   ----               ----             -------
	I0210 12:23:10.814606    5644 command_runner.go:130] >   Normal   Starting                 23m                kube-proxy       
	I0210 12:23:10.814606    5644 command_runner.go:130] >   Normal   Starting                 69s                kube-proxy       
	I0210 12:23:10.814606    5644 command_runner.go:130] >   Normal   NodeHasSufficientMemory  24m (x8 over 24m)  kubelet          Node multinode-032400 status is now: NodeHasSufficientMemory
	I0210 12:23:10.814606    5644 command_runner.go:130] >   Normal   NodeHasNoDiskPressure    24m (x8 over 24m)  kubelet          Node multinode-032400 status is now: NodeHasNoDiskPressure
	I0210 12:23:10.814606    5644 command_runner.go:130] >   Normal   NodeHasSufficientPID     24m (x7 over 24m)  kubelet          Node multinode-032400 status is now: NodeHasSufficientPID
	I0210 12:23:10.814606    5644 command_runner.go:130] >   Normal   NodeHasSufficientPID     24m (x2 over 24m)  kubelet          Node multinode-032400 status is now: NodeHasSufficientPID
	I0210 12:23:10.814606    5644 command_runner.go:130] >   Normal   NodeHasSufficientMemory  24m (x2 over 24m)  kubelet          Node multinode-032400 status is now: NodeHasSufficientMemory
	I0210 12:23:10.814606    5644 command_runner.go:130] >   Normal   NodeHasNoDiskPressure    24m (x2 over 24m)  kubelet          Node multinode-032400 status is now: NodeHasNoDiskPressure
	I0210 12:23:10.814606    5644 command_runner.go:130] >   Normal   NodeAllocatableEnforced  24m                kubelet          Updated Node Allocatable limit across pods
	I0210 12:23:10.814606    5644 command_runner.go:130] >   Normal   Starting                 24m                kubelet          Starting kubelet.
	I0210 12:23:10.814606    5644 command_runner.go:130] >   Normal   RegisteredNode           23m                node-controller  Node multinode-032400 event: Registered Node multinode-032400 in Controller
	I0210 12:23:10.814606    5644 command_runner.go:130] >   Normal   NodeReady                23m                kubelet          Node multinode-032400 status is now: NodeReady
	I0210 12:23:10.814606    5644 command_runner.go:130] >   Normal   Starting                 78s                kubelet          Starting kubelet.
	I0210 12:23:10.814606    5644 command_runner.go:130] >   Normal   NodeAllocatableEnforced  78s                kubelet          Updated Node Allocatable limit across pods
	I0210 12:23:10.814606    5644 command_runner.go:130] >   Normal   NodeHasSufficientMemory  77s (x8 over 78s)  kubelet          Node multinode-032400 status is now: NodeHasSufficientMemory
	I0210 12:23:10.814606    5644 command_runner.go:130] >   Normal   NodeHasNoDiskPressure    77s (x8 over 78s)  kubelet          Node multinode-032400 status is now: NodeHasNoDiskPressure
	I0210 12:23:10.814606    5644 command_runner.go:130] >   Normal   NodeHasSufficientPID     77s (x7 over 78s)  kubelet          Node multinode-032400 status is now: NodeHasSufficientPID
	I0210 12:23:10.814606    5644 command_runner.go:130] >   Warning  Rebooted                 72s                kubelet          Node multinode-032400 has been rebooted, boot id: f18b3f97-a44b-4ae9-ad96-af2bf29d6df2
	I0210 12:23:10.814606    5644 command_runner.go:130] >   Normal   RegisteredNode           69s                node-controller  Node multinode-032400 event: Registered Node multinode-032400 in Controller
	I0210 12:23:10.814606    5644 command_runner.go:130] > Name:               multinode-032400-m02
	I0210 12:23:10.815133    5644 command_runner.go:130] > Roles:              <none>
	I0210 12:23:10.815133    5644 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0210 12:23:10.815133    5644 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0210 12:23:10.815133    5644 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0210 12:23:10.815133    5644 command_runner.go:130] >                     kubernetes.io/hostname=multinode-032400-m02
	I0210 12:23:10.815133    5644 command_runner.go:130] >                     kubernetes.io/os=linux
	I0210 12:23:10.815203    5644 command_runner.go:130] >                     minikube.k8s.io/commit=a597502568cd649748018b4cfeb698a4b8b36160
	I0210 12:23:10.815203    5644 command_runner.go:130] >                     minikube.k8s.io/name=multinode-032400
	I0210 12:23:10.815203    5644 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0210 12:23:10.815203    5644 command_runner.go:130] >                     minikube.k8s.io/updated_at=2025_02_10T12_02_24_0700
	I0210 12:23:10.815203    5644 command_runner.go:130] >                     minikube.k8s.io/version=v1.35.0
	I0210 12:23:10.815203    5644 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0210 12:23:10.815203    5644 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0210 12:23:10.815203    5644 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0210 12:23:10.815203    5644 command_runner.go:130] > CreationTimestamp:  Mon, 10 Feb 2025 12:02:24 +0000
	I0210 12:23:10.815203    5644 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0210 12:23:10.815203    5644 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0210 12:23:10.815203    5644 command_runner.go:130] > Unschedulable:      false
	I0210 12:23:10.815203    5644 command_runner.go:130] > Lease:
	I0210 12:23:10.815203    5644 command_runner.go:130] >   HolderIdentity:  multinode-032400-m02
	I0210 12:23:10.815203    5644 command_runner.go:130] >   AcquireTime:     <unset>
	I0210 12:23:10.815203    5644 command_runner.go:130] >   RenewTime:       Mon, 10 Feb 2025 12:18:56 +0000
	I0210 12:23:10.815203    5644 command_runner.go:130] > Conditions:
	I0210 12:23:10.815203    5644 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0210 12:23:10.815203    5644 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0210 12:23:10.815203    5644 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 10 Feb 2025 12:18:23 +0000   Mon, 10 Feb 2025 12:22:51 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0210 12:23:10.815203    5644 command_runner.go:130] >   DiskPressure     Unknown   Mon, 10 Feb 2025 12:18:23 +0000   Mon, 10 Feb 2025 12:22:51 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0210 12:23:10.815203    5644 command_runner.go:130] >   PIDPressure      Unknown   Mon, 10 Feb 2025 12:18:23 +0000   Mon, 10 Feb 2025 12:22:51 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0210 12:23:10.815203    5644 command_runner.go:130] >   Ready            Unknown   Mon, 10 Feb 2025 12:18:23 +0000   Mon, 10 Feb 2025 12:22:51 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0210 12:23:10.815203    5644 command_runner.go:130] > Addresses:
	I0210 12:23:10.815203    5644 command_runner.go:130] >   InternalIP:  172.29.143.51
	I0210 12:23:10.815203    5644 command_runner.go:130] >   Hostname:    multinode-032400-m02
	I0210 12:23:10.815203    5644 command_runner.go:130] > Capacity:
	I0210 12:23:10.815203    5644 command_runner.go:130] >   cpu:                2
	I0210 12:23:10.815203    5644 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0210 12:23:10.815203    5644 command_runner.go:130] >   hugepages-2Mi:      0
	I0210 12:23:10.815203    5644 command_runner.go:130] >   memory:             2164264Ki
	I0210 12:23:10.815203    5644 command_runner.go:130] >   pods:               110
	I0210 12:23:10.815203    5644 command_runner.go:130] > Allocatable:
	I0210 12:23:10.815203    5644 command_runner.go:130] >   cpu:                2
	I0210 12:23:10.815203    5644 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0210 12:23:10.815203    5644 command_runner.go:130] >   hugepages-2Mi:      0
	I0210 12:23:10.815203    5644 command_runner.go:130] >   memory:             2164264Ki
	I0210 12:23:10.815203    5644 command_runner.go:130] >   pods:               110
	I0210 12:23:10.815203    5644 command_runner.go:130] > System Info:
	I0210 12:23:10.815203    5644 command_runner.go:130] >   Machine ID:                 21b7ab6b12a24fe4a174e762e13ffd68
	I0210 12:23:10.815203    5644 command_runner.go:130] >   System UUID:                9f935ab2-d180-914c-86d6-dc00ab51b7e9
	I0210 12:23:10.815203    5644 command_runner.go:130] >   Boot ID:                    a25897cf-a5ca-424f-a707-4b03f1b1442d
	I0210 12:23:10.815203    5644 command_runner.go:130] >   Kernel Version:             5.10.207
	I0210 12:23:10.815203    5644 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0210 12:23:10.815203    5644 command_runner.go:130] >   Operating System:           linux
	I0210 12:23:10.815203    5644 command_runner.go:130] >   Architecture:               amd64
	I0210 12:23:10.815203    5644 command_runner.go:130] >   Container Runtime Version:  docker://27.4.0
	I0210 12:23:10.815203    5644 command_runner.go:130] >   Kubelet Version:            v1.32.1
	I0210 12:23:10.815203    5644 command_runner.go:130] >   Kube-Proxy Version:         v1.32.1
	I0210 12:23:10.815203    5644 command_runner.go:130] > PodCIDR:                      10.244.1.0/24
	I0210 12:23:10.815203    5644 command_runner.go:130] > PodCIDRs:                     10.244.1.0/24
	I0210 12:23:10.815203    5644 command_runner.go:130] > Non-terminated Pods:          (3 in total)
	I0210 12:23:10.815203    5644 command_runner.go:130] >   Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0210 12:23:10.815742    5644 command_runner.go:130] >   ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	I0210 12:23:10.815742    5644 command_runner.go:130] >   default                     busybox-58667487b6-4g8jw    0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	I0210 12:23:10.815742    5644 command_runner.go:130] >   kube-system                 kindnet-tv6gk               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      20m
	I0210 12:23:10.815742    5644 command_runner.go:130] >   kube-system                 kube-proxy-xltxj            0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0210 12:23:10.815742    5644 command_runner.go:130] > Allocated resources:
	I0210 12:23:10.815742    5644 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0210 12:23:10.815822    5644 command_runner.go:130] >   Resource           Requests   Limits
	I0210 12:23:10.815822    5644 command_runner.go:130] >   --------           --------   ------
	I0210 12:23:10.815822    5644 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0210 12:23:10.815822    5644 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0210 12:23:10.815822    5644 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0210 12:23:10.815822    5644 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0210 12:23:10.815880    5644 command_runner.go:130] > Events:
	I0210 12:23:10.815880    5644 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0210 12:23:10.815880    5644 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0210 12:23:10.815880    5644 command_runner.go:130] >   Normal  Starting                 20m                kube-proxy       
	I0210 12:23:10.815880    5644 command_runner.go:130] >   Normal  NodeHasSufficientMemory  20m (x2 over 20m)  kubelet          Node multinode-032400-m02 status is now: NodeHasSufficientMemory
	I0210 12:23:10.815937    5644 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    20m (x2 over 20m)  kubelet          Node multinode-032400-m02 status is now: NodeHasNoDiskPressure
	I0210 12:23:10.815937    5644 command_runner.go:130] >   Normal  NodeHasSufficientPID     20m (x2 over 20m)  kubelet          Node multinode-032400-m02 status is now: NodeHasSufficientPID
	I0210 12:23:10.815996    5644 command_runner.go:130] >   Normal  NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	I0210 12:23:10.816017    5644 command_runner.go:130] >   Normal  RegisteredNode           20m                node-controller  Node multinode-032400-m02 event: Registered Node multinode-032400-m02 in Controller
	I0210 12:23:10.816043    5644 command_runner.go:130] >   Normal  NodeReady                20m                kubelet          Node multinode-032400-m02 status is now: NodeReady
	I0210 12:23:10.816043    5644 command_runner.go:130] >   Normal  RegisteredNode           69s                node-controller  Node multinode-032400-m02 event: Registered Node multinode-032400-m02 in Controller
	I0210 12:23:10.816043    5644 command_runner.go:130] >   Normal  NodeNotReady             19s                node-controller  Node multinode-032400-m02 status is now: NodeNotReady
	I0210 12:23:10.816043    5644 command_runner.go:130] > Name:               multinode-032400-m03
	I0210 12:23:10.816043    5644 command_runner.go:130] > Roles:              <none>
	I0210 12:23:10.816043    5644 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0210 12:23:10.816043    5644 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0210 12:23:10.816043    5644 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0210 12:23:10.816043    5644 command_runner.go:130] >                     kubernetes.io/hostname=multinode-032400-m03
	I0210 12:23:10.816043    5644 command_runner.go:130] >                     kubernetes.io/os=linux
	I0210 12:23:10.816043    5644 command_runner.go:130] >                     minikube.k8s.io/commit=a597502568cd649748018b4cfeb698a4b8b36160
	I0210 12:23:10.816043    5644 command_runner.go:130] >                     minikube.k8s.io/name=multinode-032400
	I0210 12:23:10.816043    5644 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0210 12:23:10.816043    5644 command_runner.go:130] >                     minikube.k8s.io/updated_at=2025_02_10T12_17_31_0700
	I0210 12:23:10.816043    5644 command_runner.go:130] >                     minikube.k8s.io/version=v1.35.0
	I0210 12:23:10.816043    5644 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0210 12:23:10.816043    5644 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0210 12:23:10.816043    5644 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0210 12:23:10.816043    5644 command_runner.go:130] > CreationTimestamp:  Mon, 10 Feb 2025 12:17:31 +0000
	I0210 12:23:10.816043    5644 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0210 12:23:10.816043    5644 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0210 12:23:10.816043    5644 command_runner.go:130] > Unschedulable:      false
	I0210 12:23:10.816043    5644 command_runner.go:130] > Lease:
	I0210 12:23:10.816043    5644 command_runner.go:130] >   HolderIdentity:  multinode-032400-m03
	I0210 12:23:10.816043    5644 command_runner.go:130] >   AcquireTime:     <unset>
	I0210 12:23:10.816043    5644 command_runner.go:130] >   RenewTime:       Mon, 10 Feb 2025 12:18:32 +0000
	I0210 12:23:10.816043    5644 command_runner.go:130] > Conditions:
	I0210 12:23:10.816043    5644 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0210 12:23:10.816043    5644 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0210 12:23:10.816043    5644 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 10 Feb 2025 12:17:46 +0000   Mon, 10 Feb 2025 12:19:27 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0210 12:23:10.816043    5644 command_runner.go:130] >   DiskPressure     Unknown   Mon, 10 Feb 2025 12:17:46 +0000   Mon, 10 Feb 2025 12:19:27 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0210 12:23:10.816043    5644 command_runner.go:130] >   PIDPressure      Unknown   Mon, 10 Feb 2025 12:17:46 +0000   Mon, 10 Feb 2025 12:19:27 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0210 12:23:10.816043    5644 command_runner.go:130] >   Ready            Unknown   Mon, 10 Feb 2025 12:17:46 +0000   Mon, 10 Feb 2025 12:19:27 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0210 12:23:10.816043    5644 command_runner.go:130] > Addresses:
	I0210 12:23:10.816043    5644 command_runner.go:130] >   InternalIP:  172.29.129.10
	I0210 12:23:10.816043    5644 command_runner.go:130] >   Hostname:    multinode-032400-m03
	I0210 12:23:10.816043    5644 command_runner.go:130] > Capacity:
	I0210 12:23:10.816043    5644 command_runner.go:130] >   cpu:                2
	I0210 12:23:10.816043    5644 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0210 12:23:10.816043    5644 command_runner.go:130] >   hugepages-2Mi:      0
	I0210 12:23:10.816043    5644 command_runner.go:130] >   memory:             2164264Ki
	I0210 12:23:10.816043    5644 command_runner.go:130] >   pods:               110
	I0210 12:23:10.816043    5644 command_runner.go:130] > Allocatable:
	I0210 12:23:10.816043    5644 command_runner.go:130] >   cpu:                2
	I0210 12:23:10.816043    5644 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0210 12:23:10.816043    5644 command_runner.go:130] >   hugepages-2Mi:      0
	I0210 12:23:10.816043    5644 command_runner.go:130] >   memory:             2164264Ki
	I0210 12:23:10.816043    5644 command_runner.go:130] >   pods:               110
	I0210 12:23:10.816043    5644 command_runner.go:130] > System Info:
	I0210 12:23:10.816043    5644 command_runner.go:130] >   Machine ID:                 96be2fd73058434db5f7c29ebdc5358c
	I0210 12:23:10.816043    5644 command_runner.go:130] >   System UUID:                d4861806-0b5d-3746-b974-5781fc0e801c
	I0210 12:23:10.816043    5644 command_runner.go:130] >   Boot ID:                    e5a245dc-eb44-45ba-a280-4e410deb107b
	I0210 12:23:10.816564    5644 command_runner.go:130] >   Kernel Version:             5.10.207
	I0210 12:23:10.816564    5644 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0210 12:23:10.816564    5644 command_runner.go:130] >   Operating System:           linux
	I0210 12:23:10.816564    5644 command_runner.go:130] >   Architecture:               amd64
	I0210 12:23:10.816564    5644 command_runner.go:130] >   Container Runtime Version:  docker://27.4.0
	I0210 12:23:10.816564    5644 command_runner.go:130] >   Kubelet Version:            v1.32.1
	I0210 12:23:10.816564    5644 command_runner.go:130] >   Kube-Proxy Version:         v1.32.1
	I0210 12:23:10.816564    5644 command_runner.go:130] > PodCIDR:                      10.244.4.0/24
	I0210 12:23:10.816564    5644 command_runner.go:130] > PodCIDRs:                     10.244.4.0/24
	I0210 12:23:10.816564    5644 command_runner.go:130] > Non-terminated Pods:          (2 in total)
	I0210 12:23:10.816671    5644 command_runner.go:130] >   Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0210 12:23:10.816671    5644 command_runner.go:130] >   ---------                   ----                ------------  ----------  ---------------  -------------  ---
	I0210 12:23:10.816671    5644 command_runner.go:130] >   kube-system                 kindnet-jcmlf       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	I0210 12:23:10.816671    5644 command_runner.go:130] >   kube-system                 kube-proxy-tbtqd    0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	I0210 12:23:10.816745    5644 command_runner.go:130] > Allocated resources:
	I0210 12:23:10.816745    5644 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0210 12:23:10.816745    5644 command_runner.go:130] >   Resource           Requests   Limits
	I0210 12:23:10.816805    5644 command_runner.go:130] >   --------           --------   ------
	I0210 12:23:10.816827    5644 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0210 12:23:10.816827    5644 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0210 12:23:10.816854    5644 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0210 12:23:10.816854    5644 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0210 12:23:10.816854    5644 command_runner.go:130] > Events:
	I0210 12:23:10.816889    5644 command_runner.go:130] >   Type    Reason                   Age                    From             Message
	I0210 12:23:10.816889    5644 command_runner.go:130] >   ----    ------                   ----                   ----             -------
	I0210 12:23:10.816889    5644 command_runner.go:130] >   Normal  Starting                 15m                    kube-proxy       
	I0210 12:23:10.816889    5644 command_runner.go:130] >   Normal  Starting                 5m36s                  kube-proxy       
	I0210 12:23:10.816889    5644 command_runner.go:130] >   Normal  NodeAllocatableEnforced  16m                    kubelet          Updated Node Allocatable limit across pods
	I0210 12:23:10.816965    5644 command_runner.go:130] >   Normal  NodeHasSufficientMemory  16m (x2 over 16m)      kubelet          Node multinode-032400-m03 status is now: NodeHasSufficientMemory
	I0210 12:23:10.816965    5644 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    16m (x2 over 16m)      kubelet          Node multinode-032400-m03 status is now: NodeHasNoDiskPressure
	I0210 12:23:10.816965    5644 command_runner.go:130] >   Normal  NodeHasSufficientPID     16m (x2 over 16m)      kubelet          Node multinode-032400-m03 status is now: NodeHasSufficientPID
	I0210 12:23:10.816965    5644 command_runner.go:130] >   Normal  NodeReady                15m                    kubelet          Node multinode-032400-m03 status is now: NodeReady
	I0210 12:23:10.817025    5644 command_runner.go:130] >   Normal  NodeHasSufficientMemory  5m39s (x2 over 5m40s)  kubelet          Node multinode-032400-m03 status is now: NodeHasSufficientMemory
	I0210 12:23:10.817025    5644 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    5m39s (x2 over 5m40s)  kubelet          Node multinode-032400-m03 status is now: NodeHasNoDiskPressure
	I0210 12:23:10.817025    5644 command_runner.go:130] >   Normal  NodeHasSufficientPID     5m39s (x2 over 5m40s)  kubelet          Node multinode-032400-m03 status is now: NodeHasSufficientPID
	I0210 12:23:10.817084    5644 command_runner.go:130] >   Normal  NodeAllocatableEnforced  5m39s                  kubelet          Updated Node Allocatable limit across pods
	I0210 12:23:10.817132    5644 command_runner.go:130] >   Normal  RegisteredNode           5m38s                  node-controller  Node multinode-032400-m03 event: Registered Node multinode-032400-m03 in Controller
	I0210 12:23:10.817132    5644 command_runner.go:130] >   Normal  NodeReady                5m24s                  kubelet          Node multinode-032400-m03 status is now: NodeReady
	I0210 12:23:10.817132    5644 command_runner.go:130] >   Normal  NodeNotReady             3m43s                  node-controller  Node multinode-032400-m03 status is now: NodeNotReady
	I0210 12:23:10.817132    5644 command_runner.go:130] >   Normal  RegisteredNode           69s                    node-controller  Node multinode-032400-m03 event: Registered Node multinode-032400-m03 in Controller
	I0210 12:23:10.827101    5644 logs.go:123] Gathering logs for kube-apiserver [f368bd876774] ...
	I0210 12:23:10.827101    5644 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f368bd876774"
	I0210 12:23:10.857019    5644 command_runner.go:130] ! W0210 12:21:55.142359       1 registry.go:256] calling componentGlobalsRegistry.AddFlags more than once, the registry will be set by the latest flags
	I0210 12:23:10.857019    5644 command_runner.go:130] ! I0210 12:21:55.145301       1 options.go:238] external host was not specified, using 172.29.129.181
	I0210 12:23:10.857019    5644 command_runner.go:130] ! I0210 12:21:55.152669       1 server.go:143] Version: v1.32.1
	I0210 12:23:10.857019    5644 command_runner.go:130] ! I0210 12:21:55.155205       1 server.go:145] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0210 12:23:10.857101    5644 command_runner.go:130] ! I0210 12:21:56.105409       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0210 12:23:10.857101    5644 command_runner.go:130] ! I0210 12:21:56.132590       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0210 12:23:10.857101    5644 command_runner.go:130] ! I0210 12:21:56.143671       1 plugins.go:157] Loaded 13 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I0210 12:23:10.857188    5644 command_runner.go:130] ! I0210 12:21:56.143842       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0210 12:23:10.857188    5644 command_runner.go:130] ! I0210 12:21:56.149478       1 instance.go:233] Using reconciler: lease
	I0210 12:23:10.857276    5644 command_runner.go:130] ! I0210 12:21:56.242968       1 handler.go:286] Adding GroupVersion apiextensions.k8s.io v1 to ResourceManager
	I0210 12:23:10.857306    5644 command_runner.go:130] ! W0210 12:21:56.243233       1 genericapiserver.go:767] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
	I0210 12:23:10.857306    5644 command_runner.go:130] ! I0210 12:21:56.576352       1 handler.go:286] Adding GroupVersion  v1 to ResourceManager
	I0210 12:23:10.857306    5644 command_runner.go:130] ! I0210 12:21:56.576865       1 apis.go:106] API group "internal.apiserver.k8s.io" is not enabled, skipping.
	I0210 12:23:10.857395    5644 command_runner.go:130] ! I0210 12:21:56.980973       1 apis.go:106] API group "storagemigration.k8s.io" is not enabled, skipping.
	I0210 12:23:10.857395    5644 command_runner.go:130] ! I0210 12:21:57.288861       1 apis.go:106] API group "resource.k8s.io" is not enabled, skipping.
	I0210 12:23:10.857395    5644 command_runner.go:130] ! I0210 12:21:57.344145       1 handler.go:286] Adding GroupVersion authentication.k8s.io v1 to ResourceManager
	I0210 12:23:10.857395    5644 command_runner.go:130] ! W0210 12:21:57.344213       1 genericapiserver.go:767] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
	I0210 12:23:10.857395    5644 command_runner.go:130] ! W0210 12:21:57.344222       1 genericapiserver.go:767] Skipping API authentication.k8s.io/v1alpha1 because it has no resources.
	I0210 12:23:10.857467    5644 command_runner.go:130] ! I0210 12:21:57.345004       1 handler.go:286] Adding GroupVersion authorization.k8s.io v1 to ResourceManager
	I0210 12:23:10.857467    5644 command_runner.go:130] ! W0210 12:21:57.345107       1 genericapiserver.go:767] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
	I0210 12:23:10.857467    5644 command_runner.go:130] ! I0210 12:21:57.346842       1 handler.go:286] Adding GroupVersion autoscaling v2 to ResourceManager
	I0210 12:23:10.857611    5644 command_runner.go:130] ! I0210 12:21:57.348477       1 handler.go:286] Adding GroupVersion autoscaling v1 to ResourceManager
	I0210 12:23:10.857611    5644 command_runner.go:130] ! W0210 12:21:57.349989       1 genericapiserver.go:767] Skipping API autoscaling/v2beta1 because it has no resources.
	I0210 12:23:10.857611    5644 command_runner.go:130] ! W0210 12:21:57.349999       1 genericapiserver.go:767] Skipping API autoscaling/v2beta2 because it has no resources.
	I0210 12:23:10.857611    5644 command_runner.go:130] ! I0210 12:21:57.351719       1 handler.go:286] Adding GroupVersion batch v1 to ResourceManager
	I0210 12:23:10.857611    5644 command_runner.go:130] ! W0210 12:21:57.351750       1 genericapiserver.go:767] Skipping API batch/v1beta1 because it has no resources.
	I0210 12:23:10.858103    5644 command_runner.go:130] ! I0210 12:21:57.352799       1 handler.go:286] Adding GroupVersion certificates.k8s.io v1 to ResourceManager
	I0210 12:23:10.858103    5644 command_runner.go:130] ! W0210 12:21:57.352837       1 genericapiserver.go:767] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
	I0210 12:23:10.858103    5644 command_runner.go:130] ! W0210 12:21:57.352843       1 genericapiserver.go:767] Skipping API certificates.k8s.io/v1alpha1 because it has no resources.
	I0210 12:23:10.858103    5644 command_runner.go:130] ! I0210 12:21:57.353578       1 handler.go:286] Adding GroupVersion coordination.k8s.io v1 to ResourceManager
	I0210 12:23:10.858175    5644 command_runner.go:130] ! W0210 12:21:57.353613       1 genericapiserver.go:767] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
	I0210 12:23:10.858175    5644 command_runner.go:130] ! W0210 12:21:57.353620       1 genericapiserver.go:767] Skipping API coordination.k8s.io/v1alpha2 because it has no resources.
	I0210 12:23:10.858175    5644 command_runner.go:130] ! I0210 12:21:57.354314       1 handler.go:286] Adding GroupVersion discovery.k8s.io v1 to ResourceManager
	I0210 12:23:10.858249    5644 command_runner.go:130] ! W0210 12:21:57.354346       1 genericapiserver.go:767] Skipping API discovery.k8s.io/v1beta1 because it has no resources.
	I0210 12:23:10.858249    5644 command_runner.go:130] ! I0210 12:21:57.356000       1 handler.go:286] Adding GroupVersion networking.k8s.io v1 to ResourceManager
	I0210 12:23:10.858249    5644 command_runner.go:130] ! W0210 12:21:57.356105       1 genericapiserver.go:767] Skipping API networking.k8s.io/v1beta1 because it has no resources.
	I0210 12:23:10.858249    5644 command_runner.go:130] ! W0210 12:21:57.356115       1 genericapiserver.go:767] Skipping API networking.k8s.io/v1alpha1 because it has no resources.
	I0210 12:23:10.858249    5644 command_runner.go:130] ! I0210 12:21:57.356604       1 handler.go:286] Adding GroupVersion node.k8s.io v1 to ResourceManager
	I0210 12:23:10.858346    5644 command_runner.go:130] ! W0210 12:21:57.356637       1 genericapiserver.go:767] Skipping API node.k8s.io/v1beta1 because it has no resources.
	I0210 12:23:10.858346    5644 command_runner.go:130] ! W0210 12:21:57.356644       1 genericapiserver.go:767] Skipping API node.k8s.io/v1alpha1 because it has no resources.
	I0210 12:23:10.858346    5644 command_runner.go:130] ! I0210 12:21:57.357607       1 handler.go:286] Adding GroupVersion policy v1 to ResourceManager
	I0210 12:23:10.858346    5644 command_runner.go:130] ! W0210 12:21:57.357643       1 genericapiserver.go:767] Skipping API policy/v1beta1 because it has no resources.
	I0210 12:23:10.858346    5644 command_runner.go:130] ! I0210 12:21:57.359912       1 handler.go:286] Adding GroupVersion rbac.authorization.k8s.io v1 to ResourceManager
	I0210 12:23:10.858419    5644 command_runner.go:130] ! W0210 12:21:57.359944       1 genericapiserver.go:767] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
	I0210 12:23:10.858419    5644 command_runner.go:130] ! W0210 12:21:57.359952       1 genericapiserver.go:767] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
	I0210 12:23:10.858419    5644 command_runner.go:130] ! I0210 12:21:57.360554       1 handler.go:286] Adding GroupVersion scheduling.k8s.io v1 to ResourceManager
	I0210 12:23:10.858482    5644 command_runner.go:130] ! W0210 12:21:57.360628       1 genericapiserver.go:767] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
	I0210 12:23:10.858482    5644 command_runner.go:130] ! W0210 12:21:57.360635       1 genericapiserver.go:767] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
	I0210 12:23:10.858482    5644 command_runner.go:130] ! I0210 12:21:57.363612       1 handler.go:286] Adding GroupVersion storage.k8s.io v1 to ResourceManager
	I0210 12:23:10.858482    5644 command_runner.go:130] ! W0210 12:21:57.363646       1 genericapiserver.go:767] Skipping API storage.k8s.io/v1beta1 because it has no resources.
	I0210 12:23:10.858550    5644 command_runner.go:130] ! W0210 12:21:57.363653       1 genericapiserver.go:767] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
	I0210 12:23:10.858550    5644 command_runner.go:130] ! I0210 12:21:57.365567       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1 to ResourceManager
	I0210 12:23:10.858550    5644 command_runner.go:130] ! W0210 12:21:57.365626       1 genericapiserver.go:767] Skipping API flowcontrol.apiserver.k8s.io/v1beta3 because it has no resources.
	I0210 12:23:10.858550    5644 command_runner.go:130] ! W0210 12:21:57.365637       1 genericapiserver.go:767] Skipping API flowcontrol.apiserver.k8s.io/v1beta2 because it has no resources.
	I0210 12:23:10.858619    5644 command_runner.go:130] ! W0210 12:21:57.365642       1 genericapiserver.go:767] Skipping API flowcontrol.apiserver.k8s.io/v1beta1 because it has no resources.
	I0210 12:23:10.858619    5644 command_runner.go:130] ! I0210 12:21:57.371693       1 handler.go:286] Adding GroupVersion apps v1 to ResourceManager
	I0210 12:23:10.858619    5644 command_runner.go:130] ! W0210 12:21:57.371726       1 genericapiserver.go:767] Skipping API apps/v1beta2 because it has no resources.
	I0210 12:23:10.858619    5644 command_runner.go:130] ! W0210 12:21:57.371732       1 genericapiserver.go:767] Skipping API apps/v1beta1 because it has no resources.
	I0210 12:23:10.858687    5644 command_runner.go:130] ! I0210 12:21:57.374238       1 handler.go:286] Adding GroupVersion admissionregistration.k8s.io v1 to ResourceManager
	I0210 12:23:10.858687    5644 command_runner.go:130] ! W0210 12:21:57.374275       1 genericapiserver.go:767] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
	I0210 12:23:10.858687    5644 command_runner.go:130] ! W0210 12:21:57.374303       1 genericapiserver.go:767] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
	I0210 12:23:10.858687    5644 command_runner.go:130] ! I0210 12:21:57.375143       1 handler.go:286] Adding GroupVersion events.k8s.io v1 to ResourceManager
	I0210 12:23:10.858756    5644 command_runner.go:130] ! W0210 12:21:57.375210       1 genericapiserver.go:767] Skipping API events.k8s.io/v1beta1 because it has no resources.
	I0210 12:23:10.858756    5644 command_runner.go:130] ! I0210 12:21:57.389235       1 handler.go:286] Adding GroupVersion apiregistration.k8s.io v1 to ResourceManager
	I0210 12:23:10.858823    5644 command_runner.go:130] ! W0210 12:21:57.389296       1 genericapiserver.go:767] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
	I0210 12:23:10.858823    5644 command_runner.go:130] ! I0210 12:21:58.039635       1 secure_serving.go:213] Serving securely on [::]:8443
	I0210 12:23:10.858823    5644 command_runner.go:130] ! I0210 12:21:58.039773       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0210 12:23:10.858890    5644 command_runner.go:130] ! I0210 12:21:58.040121       1 dynamic_serving_content.go:135] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0210 12:23:10.858890    5644 command_runner.go:130] ! I0210 12:21:58.040710       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0210 12:23:10.858890    5644 command_runner.go:130] ! I0210 12:21:58.048362       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0210 12:23:10.858890    5644 command_runner.go:130] ! I0210 12:21:58.048918       1 customresource_discovery_controller.go:292] Starting DiscoveryController
	I0210 12:23:10.858957    5644 command_runner.go:130] ! I0210 12:21:58.049825       1 cluster_authentication_trust_controller.go:462] Starting cluster_authentication_trust_controller controller
	I0210 12:23:10.858957    5644 command_runner.go:130] ! I0210 12:21:58.049971       1 shared_informer.go:313] Waiting for caches to sync for cluster_authentication_trust_controller
	I0210 12:23:10.859021    5644 command_runner.go:130] ! I0210 12:21:58.052014       1 local_available_controller.go:156] Starting LocalAvailability controller
	I0210 12:23:10.859021    5644 command_runner.go:130] ! I0210 12:21:58.052237       1 cache.go:32] Waiting for caches to sync for LocalAvailability controller
	I0210 12:23:10.859021    5644 command_runner.go:130] ! I0210 12:21:58.052355       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0210 12:23:10.859021    5644 command_runner.go:130] ! I0210 12:21:58.052595       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0210 12:23:10.859092    5644 command_runner.go:130] ! I0210 12:21:58.052911       1 controller.go:78] Starting OpenAPI AggregationController
	I0210 12:23:10.859092    5644 command_runner.go:130] ! I0210 12:21:58.053131       1 controller.go:119] Starting legacy_token_tracking_controller
	I0210 12:23:10.859092    5644 command_runner.go:130] ! I0210 12:21:58.053221       1 shared_informer.go:313] Waiting for caches to sync for configmaps
	I0210 12:23:10.859092    5644 command_runner.go:130] ! I0210 12:21:58.053335       1 apf_controller.go:377] Starting API Priority and Fairness config controller
	I0210 12:23:10.859092    5644 command_runner.go:130] ! I0210 12:21:58.053483       1 remote_available_controller.go:411] Starting RemoteAvailability controller
	I0210 12:23:10.859157    5644 command_runner.go:130] ! I0210 12:21:58.053515       1 cache.go:32] Waiting for caches to sync for RemoteAvailability controller
	I0210 12:23:10.859157    5644 command_runner.go:130] ! I0210 12:21:58.053696       1 dynamic_serving_content.go:135] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0210 12:23:10.859223    5644 command_runner.go:130] ! I0210 12:21:58.054087       1 system_namespaces_controller.go:66] Starting system namespaces controller
	I0210 12:23:10.859223    5644 command_runner.go:130] ! I0210 12:21:58.054528       1 apiservice_controller.go:100] Starting APIServiceRegistrationController
	I0210 12:23:10.859223    5644 command_runner.go:130] ! I0210 12:21:58.054570       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0210 12:23:10.859223    5644 command_runner.go:130] ! I0210 12:21:58.054742       1 aggregator.go:169] waiting for initial CRD sync...
	I0210 12:23:10.859294    5644 command_runner.go:130] ! I0210 12:21:58.055217       1 controller.go:142] Starting OpenAPI controller
	I0210 12:23:10.859312    5644 command_runner.go:130] ! I0210 12:21:58.055546       1 controller.go:90] Starting OpenAPI V3 controller
	I0210 12:23:10.859312    5644 command_runner.go:130] ! I0210 12:21:58.055757       1 naming_controller.go:294] Starting NamingConditionController
	I0210 12:23:10.859312    5644 command_runner.go:130] ! I0210 12:21:58.056074       1 establishing_controller.go:81] Starting EstablishingController
	I0210 12:23:10.859312    5644 command_runner.go:130] ! I0210 12:21:58.056264       1 nonstructuralschema_controller.go:195] Starting NonStructuralSchemaConditionController
	I0210 12:23:10.859382    5644 command_runner.go:130] ! I0210 12:21:58.056315       1 apiapproval_controller.go:189] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0210 12:23:10.859382    5644 command_runner.go:130] ! I0210 12:21:58.056330       1 crd_finalizer.go:269] Starting CRDFinalizer
	I0210 12:23:10.859382    5644 command_runner.go:130] ! I0210 12:21:58.056364       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0210 12:23:10.859382    5644 command_runner.go:130] ! I0210 12:21:58.056531       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0210 12:23:10.859450    5644 command_runner.go:130] ! I0210 12:21:58.082011       1 crdregistration_controller.go:114] Starting crd-autoregister controller
	I0210 12:23:10.859450    5644 command_runner.go:130] ! I0210 12:21:58.082050       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0210 12:23:10.859450    5644 command_runner.go:130] ! I0210 12:21:58.191638       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0210 12:23:10.859513    5644 command_runner.go:130] ! I0210 12:21:58.191858       1 policy_source.go:240] refreshing policies
	I0210 12:23:10.859513    5644 command_runner.go:130] ! I0210 12:21:58.208845       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0210 12:23:10.859513    5644 command_runner.go:130] ! I0210 12:21:58.215564       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0210 12:23:10.859584    5644 command_runner.go:130] ! I0210 12:21:58.251691       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0210 12:23:10.859584    5644 command_runner.go:130] ! I0210 12:21:58.252469       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0210 12:23:10.859584    5644 command_runner.go:130] ! I0210 12:21:58.253733       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0210 12:23:10.859584    5644 command_runner.go:130] ! I0210 12:21:58.254978       1 shared_informer.go:320] Caches are synced for configmaps
	I0210 12:23:10.859648    5644 command_runner.go:130] ! I0210 12:21:58.255116       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0210 12:23:10.859648    5644 command_runner.go:130] ! I0210 12:21:58.255193       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0210 12:23:10.859648    5644 command_runner.go:130] ! I0210 12:21:58.258943       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0210 12:23:10.859648    5644 command_runner.go:130] ! I0210 12:21:58.265934       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0210 12:23:10.859713    5644 command_runner.go:130] ! I0210 12:21:58.282090       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0210 12:23:10.859713    5644 command_runner.go:130] ! I0210 12:21:58.282373       1 aggregator.go:171] initial CRD sync complete...
	I0210 12:23:10.859713    5644 command_runner.go:130] ! I0210 12:21:58.282566       1 autoregister_controller.go:144] Starting autoregister controller
	I0210 12:23:10.859713    5644 command_runner.go:130] ! I0210 12:21:58.282687       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0210 12:23:10.859713    5644 command_runner.go:130] ! I0210 12:21:58.282810       1 cache.go:39] Caches are synced for autoregister controller
	I0210 12:23:10.859786    5644 command_runner.go:130] ! I0210 12:21:59.040333       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0210 12:23:10.859786    5644 command_runner.go:130] ! I0210 12:21:59.110768       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0210 12:23:10.859786    5644 command_runner.go:130] ! W0210 12:21:59.681925       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.29.129.181]
	I0210 12:23:10.859854    5644 command_runner.go:130] ! I0210 12:21:59.683607       1 controller.go:615] quota admission added evaluator for: endpoints
	I0210 12:23:10.859854    5644 command_runner.go:130] ! I0210 12:21:59.696506       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0210 12:23:10.859854    5644 command_runner.go:130] ! I0210 12:22:01.024466       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0210 12:23:10.859918    5644 command_runner.go:130] ! I0210 12:22:01.275329       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0210 12:23:10.859918    5644 command_runner.go:130] ! I0210 12:22:01.581954       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0210 12:23:10.859918    5644 command_runner.go:130] ! I0210 12:22:01.624349       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0210 12:23:10.859918    5644 command_runner.go:130] ! I0210 12:22:01.647929       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0210 12:23:13.369944    5644 api_server.go:253] Checking apiserver healthz at https://172.29.129.181:8443/healthz ...
	I0210 12:23:13.381445    5644 api_server.go:279] https://172.29.129.181:8443/healthz returned 200:
	ok
	I0210 12:23:13.381779    5644 discovery_client.go:658] "Request Body" body=""
	I0210 12:23:13.381869    5644 round_trippers.go:470] GET https://172.29.129.181:8443/version
	I0210 12:23:13.381869    5644 round_trippers.go:476] Request Headers:
	I0210 12:23:13.381869    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:23:13.381869    5644 round_trippers.go:480]     Accept: application/json, */*
	I0210 12:23:13.383308    5644 round_trippers.go:581] Response Status: 200 OK in 1 milliseconds
	I0210 12:23:13.383348    5644 round_trippers.go:584] Response Headers:
	I0210 12:23:13.383370    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:23:13 GMT
	I0210 12:23:13.383370    5644 round_trippers.go:587]     Audit-Id: cab52939-882c-4f1b-a25c-e9ab6bc73e40
	I0210 12:23:13.383370    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:23:13.383370    5644 round_trippers.go:587]     Content-Type: application/json
	I0210 12:23:13.383370    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:23:13.383370    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:23:13.383370    5644 round_trippers.go:587]     Content-Length: 263
	I0210 12:23:13.383370    5644 discovery_client.go:658] "Response Body" body=<
		{
		  "major": "1",
		  "minor": "32",
		  "gitVersion": "v1.32.1",
		  "gitCommit": "e9c9be4007d1664e68796af02b8978640d2c1b26",
		  "gitTreeState": "clean",
		  "buildDate": "2025-01-15T14:31:55Z",
		  "goVersion": "go1.23.4",
		  "compiler": "gc",
		  "platform": "linux/amd64"
		}
	 >
	I0210 12:23:13.383370    5644 api_server.go:141] control plane version: v1.32.1
	I0210 12:23:13.383370    5644 api_server.go:131] duration metric: took 3.7154241s to wait for apiserver health ...
	I0210 12:23:13.383370    5644 system_pods.go:43] waiting for kube-system pods to appear ...
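[Editor's note: the api_server.go lines above show the health check minikube performs before proceeding — an HTTPS GET to /healthz expecting "ok", followed by a GET to /version to read the control-plane version. Below is a minimal, self-contained Go sketch of an equivalent probe. It is illustrative only, not minikube's actual client code: the apiserver address is copied from the log, and it skips TLS verification instead of loading the cluster CA from /var/lib/minikube/certs as a real client would.]

package main

import (
	"crypto/tls"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
	"time"
)

// versionInfo mirrors the fields of the /version response body shown above.
type versionInfo struct {
	Major      string `json:"major"`
	Minor      string `json:"minor"`
	GitVersion string `json:"gitVersion"`
}

func main() {
	// Illustrative shortcut: skip certificate verification rather than
	// configuring the cluster CA bundle.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	base := "https://172.29.129.181:8443" // apiserver address from the log

	// Step 1: /healthz must return 200 with body "ok" before the control
	// plane is considered healthy.
	resp, err := client.Get(base + "/healthz")
	if err != nil {
		fmt.Println("healthz:", err)
		return
	}
	body, _ := io.ReadAll(resp.Body)
	resp.Body.Close()
	fmt.Printf("healthz %d: %s\n", resp.StatusCode, body)

	// Step 2: /version reports the control-plane version (v1.32.1 above).
	resp, err = client.Get(base + "/version")
	if err != nil {
		fmt.Println("version:", err)
		return
	}
	defer resp.Body.Close()
	var v versionInfo
	if err := json.NewDecoder(resp.Body).Decode(&v); err != nil {
		fmt.Println("decode:", err)
		return
	}
	fmt.Println("control plane version:", v.GitVersion)
}
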
	I0210 12:23:13.390216    5644 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0210 12:23:13.421102    5644 command_runner.go:130] > f368bd876774
	I0210 12:23:13.421102    5644 logs.go:282] 1 containers: [f368bd876774]
	I0210 12:23:13.428840    5644 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0210 12:23:13.451060    5644 command_runner.go:130] > 2c0b97381825
	I0210 12:23:13.452855    5644 logs.go:282] 1 containers: [2c0b97381825]
	I0210 12:23:13.460558    5644 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0210 12:23:13.491429    5644 command_runner.go:130] > 9240ce80f94c
	I0210 12:23:13.491512    5644 command_runner.go:130] > c5b854dbb912
	I0210 12:23:13.491551    5644 logs.go:282] 2 containers: [9240ce80f94c c5b854dbb912]
	I0210 12:23:13.498691    5644 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0210 12:23:13.525667    5644 command_runner.go:130] > 440b6adf4512
	I0210 12:23:13.525667    5644 command_runner.go:130] > adf520f9b9d7
	I0210 12:23:13.525667    5644 logs.go:282] 2 containers: [440b6adf4512 adf520f9b9d7]
	I0210 12:23:13.532985    5644 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0210 12:23:13.559926    5644 command_runner.go:130] > 6640b4e3d696
	I0210 12:23:13.559926    5644 command_runner.go:130] > 148309413de8
	I0210 12:23:13.560485    5644 logs.go:282] 2 containers: [6640b4e3d696 148309413de8]
	I0210 12:23:13.567497    5644 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0210 12:23:13.593289    5644 command_runner.go:130] > bd1666238ae6
	I0210 12:23:13.594118    5644 command_runner.go:130] > 9408ce83d7d3
	I0210 12:23:13.594186    5644 logs.go:282] 2 containers: [bd1666238ae6 9408ce83d7d3]
	I0210 12:23:13.601618    5644 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0210 12:23:13.629494    5644 command_runner.go:130] > efc2d4164d81
	I0210 12:23:13.629585    5644 command_runner.go:130] > 4439940fa5f4
	I0210 12:23:13.629585    5644 logs.go:282] 2 containers: [efc2d4164d81 4439940fa5f4]
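[Editor's note: the logs.go lines above discover the container ID(s) for each control-plane component by running `docker ps -a --filter=name=k8s_<component> --format={{.ID}}` over SSH, then tail each container's logs. A minimal Go sketch of that discovery loop follows; the function name and component list are illustrative assumptions, and it runs docker locally rather than through minikube's ssh_runner.]

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs runs the same docker ps filter seen in the log above and
// returns one container ID per output line.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	// docker prints one short ID per line; Fields drops blank lines too.
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
	}
}
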
	I0210 12:23:13.629585    5644 logs.go:123] Gathering logs for kindnet [efc2d4164d81] ...
	I0210 12:23:13.629585    5644 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 efc2d4164d81"
	I0210 12:23:13.658280    5644 command_runner.go:130] ! I0210 12:22:00.982083       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0210 12:23:13.658671    5644 command_runner.go:130] ! I0210 12:22:00.988632       1 main.go:139] hostIP = 172.29.129.181
	I0210 12:23:13.658671    5644 command_runner.go:130] ! podIP = 172.29.129.181
	I0210 12:23:13.658711    5644 command_runner.go:130] ! I0210 12:22:00.988765       1 main.go:148] setting mtu 1500 for CNI 
	I0210 12:23:13.658738    5644 command_runner.go:130] ! I0210 12:22:00.988782       1 main.go:178] kindnetd IP family: "ipv4"
	I0210 12:23:13.658738    5644 command_runner.go:130] ! I0210 12:22:00.988794       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I0210 12:23:13.658738    5644 command_runner.go:130] ! I0210 12:22:01.772362       1 main.go:239] Error creating network policy controller: could not run nftables command: /dev/stdin:1:1-40: Error: Could not process rule: Operation not supported
	I0210 12:23:13.658738    5644 command_runner.go:130] ! add table inet kindnet-network-policies
	I0210 12:23:13.658738    5644 command_runner.go:130] ! ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
	I0210 12:23:13.658738    5644 command_runner.go:130] ! , skipping network policies
	I0210 12:23:13.658738    5644 command_runner.go:130] ! W0210 12:22:31.784106       1 reflector.go:561] pkg/mod/k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0210 12:23:13.658738    5644 command_runner.go:130] ! E0210 12:22:31.784373       1 reflector.go:158] "Unhandled Error" err="pkg/mod/k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError"
	I0210 12:23:13.658738    5644 command_runner.go:130] ! I0210 12:22:41.780982       1 main.go:297] Handling node with IPs: map[172.29.129.181:{}]
	I0210 12:23:13.658738    5644 command_runner.go:130] ! I0210 12:22:41.781097       1 main.go:301] handling current node
	I0210 12:23:13.658738    5644 command_runner.go:130] ! I0210 12:22:41.782315       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.658738    5644 command_runner.go:130] ! I0210 12:22:41.782348       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.658738    5644 command_runner.go:130] ! I0210 12:22:41.782670       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 172.29.143.51 Flags: [] Table: 0 Realm: 0} 
	I0210 12:23:13.658738    5644 command_runner.go:130] ! I0210 12:22:41.783201       1 main.go:297] Handling node with IPs: map[172.29.129.10:{}]
	I0210 12:23:13.658738    5644 command_runner.go:130] ! I0210 12:22:41.783373       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.4.0/24] 
	I0210 12:23:13.658738    5644 command_runner.go:130] ! I0210 12:22:41.784331       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.4.0/24 Src: <nil> Gw: 172.29.129.10 Flags: [] Table: 0 Realm: 0} 
	I0210 12:23:13.658738    5644 command_runner.go:130] ! I0210 12:22:51.774234       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.658738    5644 command_runner.go:130] ! I0210 12:22:51.774354       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.658738    5644 command_runner.go:130] ! I0210 12:22:51.774813       1 main.go:297] Handling node with IPs: map[172.29.129.10:{}]
	I0210 12:23:13.658738    5644 command_runner.go:130] ! I0210 12:22:51.774839       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.4.0/24] 
	I0210 12:23:13.658738    5644 command_runner.go:130] ! I0210 12:22:51.775059       1 main.go:297] Handling node with IPs: map[172.29.129.181:{}]
	I0210 12:23:13.658738    5644 command_runner.go:130] ! I0210 12:22:51.775140       1 main.go:301] handling current node
	I0210 12:23:13.658738    5644 command_runner.go:130] ! I0210 12:23:01.774212       1 main.go:297] Handling node with IPs: map[172.29.129.181:{}]
	I0210 12:23:13.658738    5644 command_runner.go:130] ! I0210 12:23:01.774322       1 main.go:301] handling current node
	I0210 12:23:13.658738    5644 command_runner.go:130] ! I0210 12:23:01.774342       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.658738    5644 command_runner.go:130] ! I0210 12:23:01.774349       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.658738    5644 command_runner.go:130] ! I0210 12:23:01.774804       1 main.go:297] Handling node with IPs: map[172.29.129.10:{}]
	I0210 12:23:13.658738    5644 command_runner.go:130] ! I0210 12:23:01.774919       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.4.0/24] 
	I0210 12:23:13.658738    5644 command_runner.go:130] ! I0210 12:23:11.781644       1 main.go:297] Handling node with IPs: map[172.29.129.10:{}]
	I0210 12:23:13.658738    5644 command_runner.go:130] ! I0210 12:23:11.781815       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.4.0/24] 
	I0210 12:23:13.658738    5644 command_runner.go:130] ! I0210 12:23:11.782562       1 main.go:297] Handling node with IPs: map[172.29.129.181:{}]
	I0210 12:23:13.659268    5644 command_runner.go:130] ! I0210 12:23:11.782912       1 main.go:301] handling current node
	I0210 12:23:13.659268    5644 command_runner.go:130] ! I0210 12:23:11.783348       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.659313    5644 command_runner.go:130] ! I0210 12:23:11.783495       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.662159    5644 logs.go:123] Gathering logs for kubelet ...
	I0210 12:23:13.662734    5644 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0210 12:23:13.694653    5644 command_runner.go:130] > Feb 10 12:21:49 multinode-032400 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0210 12:23:13.694653    5644 command_runner.go:130] > Feb 10 12:21:49 multinode-032400 kubelet[1505]: I0210 12:21:49.803865    1505 server.go:520] "Kubelet version" kubeletVersion="v1.32.1"
	I0210 12:23:13.694757    5644 command_runner.go:130] > Feb 10 12:21:49 multinode-032400 kubelet[1505]: I0210 12:21:49.804150    1505 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0210 12:23:13.694757    5644 command_runner.go:130] > Feb 10 12:21:49 multinode-032400 kubelet[1505]: I0210 12:21:49.806616    1505 server.go:954] "Client rotation is on, will bootstrap in background"
	I0210 12:23:13.694757    5644 command_runner.go:130] > Feb 10 12:21:49 multinode-032400 kubelet[1505]: E0210 12:21:49.806785    1505 run.go:72] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0210 12:23:13.694757    5644 command_runner.go:130] > Feb 10 12:21:49 multinode-032400 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0210 12:23:13.694757    5644 command_runner.go:130] > Feb 10 12:21:49 multinode-032400 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0210 12:23:13.694757    5644 command_runner.go:130] > Feb 10 12:21:50 multinode-032400 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	I0210 12:23:13.694757    5644 command_runner.go:130] > Feb 10 12:21:50 multinode-032400 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0210 12:23:13.694879    5644 command_runner.go:130] > Feb 10 12:21:50 multinode-032400 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0210 12:23:13.694879    5644 command_runner.go:130] > Feb 10 12:21:50 multinode-032400 kubelet[1561]: I0210 12:21:50.532407    1561 server.go:520] "Kubelet version" kubeletVersion="v1.32.1"
	I0210 12:23:13.694946    5644 command_runner.go:130] > Feb 10 12:21:50 multinode-032400 kubelet[1561]: I0210 12:21:50.532561    1561 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0210 12:23:13.694946    5644 command_runner.go:130] > Feb 10 12:21:50 multinode-032400 kubelet[1561]: I0210 12:21:50.532946    1561 server.go:954] "Client rotation is on, will bootstrap in background"
	I0210 12:23:13.694986    5644 command_runner.go:130] > Feb 10 12:21:50 multinode-032400 kubelet[1561]: E0210 12:21:50.533006    1561 run.go:72] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0210 12:23:13.694986    5644 command_runner.go:130] > Feb 10 12:21:50 multinode-032400 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0210 12:23:13.694986    5644 command_runner.go:130] > Feb 10 12:21:50 multinode-032400 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0210 12:23:13.695054    5644 command_runner.go:130] > Feb 10 12:21:50 multinode-032400 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0210 12:23:13.695070    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0210 12:23:13.695070    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.804000    1648 server.go:520] "Kubelet version" kubeletVersion="v1.32.1"
	I0210 12:23:13.695070    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.804091    1648 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0210 12:23:13.695070    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.807532    1648 server.go:954] "Client rotation is on, will bootstrap in background"
	I0210 12:23:13.695162    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.810518    1648 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	I0210 12:23:13.695162    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.831401    1648 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0210 12:23:13.695162    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: E0210 12:21:52.849603    1648 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
	I0210 12:23:13.695162    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.849766    1648 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
	I0210 12:23:13.695268    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.855712    1648 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
	I0210 12:23:13.695268    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.855847    1648 server.go:841] "NoSwap is set due to memorySwapBehavior not specified" memorySwapBehavior="" FailSwapOn=false
	I0210 12:23:13.695343    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.857145    1648 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	I0210 12:23:13.695420    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.857321    1648 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"multinode-032400","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
	I0210 12:23:13.695420    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.857850    1648 topology_manager.go:138] "Creating topology manager with none policy"
	I0210 12:23:13.695420    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.857944    1648 container_manager_linux.go:304] "Creating device plugin manager"
	I0210 12:23:13.695420    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.858196    1648 state_mem.go:36] "Initialized new in-memory state store"
	I0210 12:23:13.695494    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.860593    1648 kubelet.go:446] "Attempting to sync node with API server"
	I0210 12:23:13.695494    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.860751    1648 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
	I0210 12:23:13.695494    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.860860    1648 kubelet.go:352] "Adding apiserver pod source"
	I0210 12:23:13.695494    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.860954    1648 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	I0210 12:23:13.695567    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.866997    1648 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="docker" version="27.4.0" apiVersion="v1"
	I0210 12:23:13.695567    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: W0210 12:21:52.869638    1648 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-032400&limit=500&resourceVersion=0": dial tcp 172.29.129.181:8443: connect: connection refused
	I0210 12:23:13.695642    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: E0210 12:21:52.869825    1648 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-032400&limit=500&resourceVersion=0\": dial tcp 172.29.129.181:8443: connect: connection refused" logger="UnhandledError"
	I0210 12:23:13.695642    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.872904    1648 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
	I0210 12:23:13.695642    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: W0210 12:21:52.873510    1648 probe.go:272] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
	I0210 12:23:13.695715    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: W0210 12:21:52.885546    1648 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.29.129.181:8443: connect: connection refused
	I0210 12:23:13.695715    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: E0210 12:21:52.885641    1648 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.29.129.181:8443: connect: connection refused" logger="UnhandledError"
	I0210 12:23:13.695789    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.886839    1648 watchdog_linux.go:99] "Systemd watchdog is not enabled"
	I0210 12:23:13.695789    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.886957    1648 server.go:1287] "Started kubelet"
	I0210 12:23:13.695830    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.895251    1648 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
	I0210 12:23:13.695875    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.897245    1648 server.go:490] "Adding debug handlers to kubelet server"
	I0210 12:23:13.695875    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.899864    1648 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
	I0210 12:23:13.695875    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.900113    1648 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
	I0210 12:23:13.695875    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.900986    1648 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	I0210 12:23:13.695939    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.901519    1648 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
	I0210 12:23:13.696013    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: E0210 12:21:52.904529    1648 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 172.29.129.181:8443: connect: connection refused" event="&Event{ObjectMeta:{multinode-032400.1822d8316b7ef394  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:multinode-032400,UID:multinode-032400,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:multinode-032400,},FirstTimestamp:2025-02-10 12:21:52.886911892 +0000 UTC m=+0.168917533,LastTimestamp:2025-02-10 12:21:52.886911892 +0000 UTC m=+0.168917533,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:multinode-032400,}"
	I0210 12:23:13.696047    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.918528    1648 volume_manager.go:297] "Starting Kubelet Volume Manager"
	I0210 12:23:13.696075    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: E0210 12:21:52.918989    1648 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"multinode-032400\" not found"
	I0210 12:23:13.696075    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.920907    1648 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
	I0210 12:23:13.696075    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.932441    1648 reconciler.go:26] "Reconciler: start to sync state"
	I0210 12:23:13.696075    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: E0210 12:21:52.940004    1648 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-032400?timeout=10s\": dial tcp 172.29.129.181:8443: connect: connection refused" interval="200ms"
	I0210 12:23:13.696075    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.943065    1648 factory.go:221] Registration of the systemd container factory successfully
	I0210 12:23:13.696075    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.943251    1648 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
	I0210 12:23:13.696075    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.943289    1648 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
	I0210 12:23:13.696075    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: W0210 12:21:52.954939    1648 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.29.129.181:8443: connect: connection refused
	I0210 12:23:13.696075    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: E0210 12:21:52.956281    1648 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.29.129.181:8443: connect: connection refused" logger="UnhandledError"
	I0210 12:23:13.696075    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.962018    1648 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
	I0210 12:23:13.696075    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.981120    1648 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
	I0210 12:23:13.696075    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.981191    1648 status_manager.go:227] "Starting to sync pod status with apiserver"
	I0210 12:23:13.696075    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.981212    1648 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
	I0210 12:23:13.696075    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.981234    1648 kubelet.go:2388] "Starting kubelet main sync loop"
	I0210 12:23:13.696075    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: E0210 12:21:52.981274    1648 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
	I0210 12:23:13.696075    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: W0210 12:21:52.985240    1648 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.29.129.181:8443: connect: connection refused
	I0210 12:23:13.696075    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: E0210 12:21:52.985423    1648 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.29.129.181:8443: connect: connection refused" logger="UnhandledError"
	I0210 12:23:13.696075    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.986221    1648 cpu_manager.go:221] "Starting CPU manager" policy="none"
	I0210 12:23:13.696075    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.986328    1648 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
	I0210 12:23:13.696075    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.986418    1648 state_mem.go:36] "Initialized new in-memory state store"
	I0210 12:23:13.696075    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.988035    1648 state_mem.go:88] "Updated default CPUSet" cpuSet=""
	I0210 12:23:13.696075    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.988140    1648 state_mem.go:96] "Updated CPUSet assignments" assignments={}
	I0210 12:23:13.696075    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.988290    1648 policy_none.go:49] "None policy: Start"
	I0210 12:23:13.696075    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.988339    1648 memory_manager.go:186] "Starting memorymanager" policy="None"
	I0210 12:23:13.696075    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.988429    1648 state_mem.go:35] "Initializing new in-memory state store"
	I0210 12:23:13.696075    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.989333    1648 state_mem.go:75] "Updated machine memory state"
	I0210 12:23:13.696603    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.996399    1648 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
	I0210 12:23:13.696603    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.996729    1648 eviction_manager.go:189] "Eviction manager: starting control loop"
	I0210 12:23:13.696683    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.996761    1648 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
	I0210 12:23:13.696683    5644 command_runner.go:130] > Feb 10 12:21:52 multinode-032400 kubelet[1648]: I0210 12:21:52.999441    1648 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
	I0210 12:23:13.696683    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: E0210 12:21:53.001480    1648 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
	I0210 12:23:13.696757    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: E0210 12:21:53.001594    1648 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"multinode-032400\" not found"
	I0210 12:23:13.696757    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: E0210 12:21:53.010100    1648 iptables.go:577] "Could not set up iptables canary" err=<
	I0210 12:23:13.696757    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0210 12:23:13.696757    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0210 12:23:13.696757    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0210 12:23:13.696831    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0210 12:23:13.696831    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.082130    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b2de8e426f22f9496390d2d8a09910a842da6580933349d6688cd4b1320ea550"
	I0210 12:23:13.696831    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.082209    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed034f1578c961d966543fd6901ac487893b2d9c55235293f852b6eba2ffef59"
	I0210 12:23:13.696907    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.082229    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26d9e119a02c5d37077ce2b8aaf0eaf39a16e310dfa75b55d4072355af0799f3"
	I0210 12:23:13.696907    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.085961    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a70f430921ec259ed18ded033aa4e0f2018d948e5ebeaaecbd04d96a1cf7a198"
	I0210 12:23:13.696907    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.092339    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d33433fbce4800c4588851f91b9c8bbf2f6cb1549a9a6e7003bd3ad9ab95e6c9"
	I0210 12:23:13.696981    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: E0210 12:21:53.095136    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-032400\" not found" node="multinode-032400"
	I0210 12:23:13.696981    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.097863    1648 kubelet_node_status.go:76] "Attempting to register node" node="multinode-032400"
	I0210 12:23:13.696981    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: E0210 12:21:53.099090    1648 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.29.129.181:8443: connect: connection refused" node="multinode-032400"
	I0210 12:23:13.697055    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.108335    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ccc0a4e7b5c734e34ed5ec3983417c1afdf297c58cd82400d7f9d24b8f82cac"
	I0210 12:23:13.697055    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.127358    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="794995bca6b5b46f3dd1b76ae2e6fa45046bd172772508f61b870824d72e297b"
	I0210 12:23:13.697055    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: E0210 12:21:53.141735    1648 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-032400?timeout=10s\": dial tcp 172.29.129.181:8443: connect: connection refused" interval="400ms"
	I0210 12:23:13.697129    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.142956    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c55184f16ccb79ec11ca696b1c88e9db9a9568bbeeccb401543d2aabab9daa4"
	I0210 12:23:13.697129    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.145714    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a8fa4178fdde9f0146fd2a294125bbe5-usr-share-ca-certificates\") pod \"kube-apiserver-multinode-032400\" (UID: \"a8fa4178fdde9f0146fd2a294125bbe5\") " pod="kube-system/kube-apiserver-multinode-032400"
	I0210 12:23:13.697203    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.145888    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0b073beca2a0e25ad8459b3107e863e7-flexvolume-dir\") pod \"kube-controller-manager-multinode-032400\" (UID: \"0b073beca2a0e25ad8459b3107e863e7\") " pod="kube-system/kube-controller-manager-multinode-032400"
	I0210 12:23:13.697276    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.145935    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b073beca2a0e25ad8459b3107e863e7-kubeconfig\") pod \"kube-controller-manager-multinode-032400\" (UID: \"0b073beca2a0e25ad8459b3107e863e7\") " pod="kube-system/kube-controller-manager-multinode-032400"
	I0210 12:23:13.697276    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.146017    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0b073beca2a0e25ad8459b3107e863e7-usr-share-ca-certificates\") pod \"kube-controller-manager-multinode-032400\" (UID: \"0b073beca2a0e25ad8459b3107e863e7\") " pod="kube-system/kube-controller-manager-multinode-032400"
	I0210 12:23:13.697349    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.146081    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/a56aca3861e4dd5038c600d32d99becd-etcd-certs\") pod \"etcd-multinode-032400\" (UID: \"a56aca3861e4dd5038c600d32d99becd\") " pod="kube-system/etcd-multinode-032400"
	I0210 12:23:13.697349    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.146213    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/a56aca3861e4dd5038c600d32d99becd-etcd-data\") pod \"etcd-multinode-032400\" (UID: \"a56aca3861e4dd5038c600d32d99becd\") " pod="kube-system/etcd-multinode-032400"
	I0210 12:23:13.697422    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.146299    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a8fa4178fdde9f0146fd2a294125bbe5-ca-certs\") pod \"kube-apiserver-multinode-032400\" (UID: \"a8fa4178fdde9f0146fd2a294125bbe5\") " pod="kube-system/kube-apiserver-multinode-032400"
	I0210 12:23:13.697422    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.146332    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a8fa4178fdde9f0146fd2a294125bbe5-k8s-certs\") pod \"kube-apiserver-multinode-032400\" (UID: \"a8fa4178fdde9f0146fd2a294125bbe5\") " pod="kube-system/kube-apiserver-multinode-032400"
	I0210 12:23:13.697495    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.146395    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e23fa9a4a53da4e595583d7b35b39311-kubeconfig\") pod \"kube-scheduler-multinode-032400\" (UID: \"e23fa9a4a53da4e595583d7b35b39311\") " pod="kube-system/kube-scheduler-multinode-032400"
	I0210 12:23:13.697495    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.146480    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0b073beca2a0e25ad8459b3107e863e7-ca-certs\") pod \"kube-controller-manager-multinode-032400\" (UID: \"0b073beca2a0e25ad8459b3107e863e7\") " pod="kube-system/kube-controller-manager-multinode-032400"
	I0210 12:23:13.697568    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.146687    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0b073beca2a0e25ad8459b3107e863e7-k8s-certs\") pod \"kube-controller-manager-multinode-032400\" (UID: \"0b073beca2a0e25ad8459b3107e863e7\") " pod="kube-system/kube-controller-manager-multinode-032400"
	I0210 12:23:13.697568    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.162937    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee16b295f58db486a506e81b42b011f8d6d50d2a52f1bea55481552cfb51c94e"
	I0210 12:23:13.697568    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: E0210 12:21:53.165529    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-032400\" not found" node="multinode-032400"
	I0210 12:23:13.697642    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: E0210 12:21:53.167432    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-032400\" not found" node="multinode-032400"
	I0210 12:23:13.697642    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: E0210 12:21:53.168502    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-032400\" not found" node="multinode-032400"
	I0210 12:23:13.697715    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.301329    1648 kubelet_node_status.go:76] "Attempting to register node" node="multinode-032400"
	I0210 12:23:13.697715    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: E0210 12:21:53.303037    1648 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.29.129.181:8443: connect: connection refused" node="multinode-032400"
	I0210 12:23:13.697789    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: E0210 12:21:53.544572    1648 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-032400?timeout=10s\": dial tcp 172.29.129.181:8443: connect: connection refused" interval="800ms"
	I0210 12:23:13.697789    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: I0210 12:21:53.704678    1648 kubelet_node_status.go:76] "Attempting to register node" node="multinode-032400"
	I0210 12:23:13.697789    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: E0210 12:21:53.705877    1648 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.29.129.181:8443: connect: connection refused" node="multinode-032400"
	I0210 12:23:13.697862    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: W0210 12:21:53.746812    1648 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.29.129.181:8443: connect: connection refused
	I0210 12:23:13.697862    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: E0210 12:21:53.747029    1648 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.29.129.181:8443: connect: connection refused" logger="UnhandledError"
	I0210 12:23:13.697935    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: W0210 12:21:53.867058    1648 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-032400&limit=500&resourceVersion=0": dial tcp 172.29.129.181:8443: connect: connection refused
	I0210 12:23:13.697935    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 kubelet[1648]: E0210 12:21:53.867234    1648 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-032400&limit=500&resourceVersion=0\": dial tcp 172.29.129.181:8443: connect: connection refused" logger="UnhandledError"
	I0210 12:23:13.698008    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 kubelet[1648]: W0210 12:21:54.165583    1648 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.29.129.181:8443: connect: connection refused
	I0210 12:23:13.698008    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 kubelet[1648]: E0210 12:21:54.165709    1648 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.29.129.181:8443: connect: connection refused" logger="UnhandledError"
	I0210 12:23:13.698082    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 kubelet[1648]: E0210 12:21:54.346089    1648 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-032400?timeout=10s\": dial tcp 172.29.129.181:8443: connect: connection refused" interval="1.6s"
	I0210 12:23:13.698082    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 kubelet[1648]: I0210 12:21:54.507569    1648 kubelet_node_status.go:76] "Attempting to register node" node="multinode-032400"
	I0210 12:23:13.698155    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 kubelet[1648]: E0210 12:21:54.509216    1648 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.29.129.181:8443: connect: connection refused" node="multinode-032400"
	I0210 12:23:13.698155    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 kubelet[1648]: W0210 12:21:54.509373    1648 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.29.129.181:8443: connect: connection refused
	I0210 12:23:13.698155    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 kubelet[1648]: E0210 12:21:54.509471    1648 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.29.129.181:8443: connect: connection refused" logger="UnhandledError"
	I0210 12:23:13.698240    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 kubelet[1648]: E0210 12:21:54.618443    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-032400\" not found" node="multinode-032400"
	I0210 12:23:13.698314    5644 command_runner.go:130] > Feb 10 12:21:55 multinode-032400 kubelet[1648]: E0210 12:21:55.643834    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-032400\" not found" node="multinode-032400"
	I0210 12:23:13.698346    5644 command_runner.go:130] > Feb 10 12:21:55 multinode-032400 kubelet[1648]: E0210 12:21:55.653673    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-032400\" not found" node="multinode-032400"
	I0210 12:23:13.698375    5644 command_runner.go:130] > Feb 10 12:21:55 multinode-032400 kubelet[1648]: E0210 12:21:55.663228    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-032400\" not found" node="multinode-032400"
	I0210 12:23:13.698375    5644 command_runner.go:130] > Feb 10 12:21:55 multinode-032400 kubelet[1648]: E0210 12:21:55.676257    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-032400\" not found" node="multinode-032400"
	I0210 12:23:13.698375    5644 command_runner.go:130] > Feb 10 12:21:56 multinode-032400 kubelet[1648]: I0210 12:21:56.111234    1648 kubelet_node_status.go:76] "Attempting to register node" node="multinode-032400"
	I0210 12:23:13.698375    5644 command_runner.go:130] > Feb 10 12:21:56 multinode-032400 kubelet[1648]: E0210 12:21:56.686207    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-032400\" not found" node="multinode-032400"
	I0210 12:23:13.698375    5644 command_runner.go:130] > Feb 10 12:21:56 multinode-032400 kubelet[1648]: E0210 12:21:56.686620    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-032400\" not found" node="multinode-032400"
	I0210 12:23:13.698375    5644 command_runner.go:130] > Feb 10 12:21:56 multinode-032400 kubelet[1648]: E0210 12:21:56.689831    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-032400\" not found" node="multinode-032400"
	I0210 12:23:13.698375    5644 command_runner.go:130] > Feb 10 12:21:56 multinode-032400 kubelet[1648]: E0210 12:21:56.690227    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-032400\" not found" node="multinode-032400"
	I0210 12:23:13.698375    5644 command_runner.go:130] > Feb 10 12:21:57 multinode-032400 kubelet[1648]: E0210 12:21:57.703954    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-032400\" not found" node="multinode-032400"
	I0210 12:23:13.698375    5644 command_runner.go:130] > Feb 10 12:21:57 multinode-032400 kubelet[1648]: E0210 12:21:57.704934    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-032400\" not found" node="multinode-032400"
	I0210 12:23:13.698375    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: I0210 12:21:58.221288    1648 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-multinode-032400"
	I0210 12:23:13.698375    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: E0210 12:21:58.248691    1648 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-multinode-032400\" already exists" pod="kube-system/kube-scheduler-multinode-032400"
	I0210 12:23:13.698375    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: I0210 12:21:58.248734    1648 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-multinode-032400"
	I0210 12:23:13.698375    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: E0210 12:21:58.268853    1648 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"etcd-multinode-032400\" already exists" pod="kube-system/etcd-multinode-032400"
	I0210 12:23:13.698375    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: I0210 12:21:58.268905    1648 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-multinode-032400"
	I0210 12:23:13.698375    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: E0210 12:21:58.294680    1648 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-multinode-032400\" already exists" pod="kube-system/kube-apiserver-multinode-032400"
	I0210 12:23:13.698375    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: I0210 12:21:58.294713    1648 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-multinode-032400"
	I0210 12:23:13.698375    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: E0210 12:21:58.310526    1648 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-multinode-032400\" already exists" pod="kube-system/kube-controller-manager-multinode-032400"
	I0210 12:23:13.698375    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: I0210 12:21:58.310792    1648 kubelet_node_status.go:125] "Node was previously registered" node="multinode-032400"
	I0210 12:23:13.698375    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: I0210 12:21:58.310970    1648 kubelet_node_status.go:79] "Successfully registered node" node="multinode-032400"
	I0210 12:23:13.698375    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: I0210 12:21:58.311192    1648 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	I0210 12:23:13.698375    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: I0210 12:21:58.312560    1648 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	I0210 12:23:13.698375    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: I0210 12:21:58.314869    1648 setters.go:602] "Node became not ready" node="multinode-032400" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-02-10T12:21:58Z","lastTransitionTime":"2025-02-10T12:21:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"}
	I0210 12:23:13.698375    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: I0210 12:21:58.886082    1648 apiserver.go:52] "Watching apiserver"
	I0210 12:23:13.698903    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: I0210 12:21:58.891928    1648 kubelet.go:3183] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-032400" podUID="7a35472d-d7c0-4c7d-a5b1-e094370af1c2"
	I0210 12:23:13.698903    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: I0210 12:21:58.892432    1648 kubelet.go:3183] "Trying to delete pod" pod="kube-system/etcd-multinode-032400" podUID="34db146c-e09d-4959-8325-d4453dfcfd62"
	I0210 12:23:13.698948    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: E0210 12:21:58.894995    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:13.698984    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: E0210 12:21:58.896093    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:13.699023    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: I0210 12:21:58.922102    1648 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	I0210 12:23:13.699057    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: I0210 12:21:58.923504    1648 kubelet.go:3189] "Deleted mirror pod as it didn't match the static Pod" pod="kube-system/kube-apiserver-multinode-032400"
	I0210 12:23:13.699094    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: I0210 12:21:58.923547    1648 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-multinode-032400"
	I0210 12:23:13.699094    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: I0210 12:21:58.964092    1648 kubelet.go:3189] "Deleted mirror pod as it didn't match the static Pod" pod="kube-system/etcd-multinode-032400"
	I0210 12:23:13.699129    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: I0210 12:21:58.964319    1648 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-multinode-032400"
	I0210 12:23:13.699167    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: I0210 12:21:58.992108    1648 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d9460e1ac793566f90a359ec3476894" path="/var/lib/kubelet/pods/3d9460e1ac793566f90a359ec3476894/volumes"
	I0210 12:23:13.699200    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 kubelet[1648]: I0210 12:21:58.994546    1648 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77dd7f51968a92a0d804d49c0a3127ad" path="/var/lib/kubelet/pods/77dd7f51968a92a0d804d49c0a3127ad/volumes"
	I0210 12:23:13.699238    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: I0210 12:21:59.015977    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/c5a7f602-d41a-4d9c-8fb3-5cf1bb41aca0-tmp\") pod \"storage-provisioner\" (UID: \"c5a7f602-d41a-4d9c-8fb3-5cf1bb41aca0\") " pod="kube-system/storage-provisioner"
	I0210 12:23:13.699271    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: I0210 12:21:59.016010    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9ad7f281-f022-4f3b-b206-39ce42713cf9-lib-modules\") pod \"kube-proxy-rrh82\" (UID: \"9ad7f281-f022-4f3b-b206-39ce42713cf9\") " pod="kube-system/kube-proxy-rrh82"
	I0210 12:23:13.699309    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: I0210 12:21:59.016032    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/09de881b-fbc4-4a8f-b8d7-c46dd3f010ad-cni-cfg\") pod \"kindnet-c2mb8\" (UID: \"09de881b-fbc4-4a8f-b8d7-c46dd3f010ad\") " pod="kube-system/kindnet-c2mb8"
	I0210 12:23:13.699350    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: I0210 12:21:59.016093    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9ad7f281-f022-4f3b-b206-39ce42713cf9-xtables-lock\") pod \"kube-proxy-rrh82\" (UID: \"9ad7f281-f022-4f3b-b206-39ce42713cf9\") " pod="kube-system/kube-proxy-rrh82"
	I0210 12:23:13.699388    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: I0210 12:21:59.016112    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/09de881b-fbc4-4a8f-b8d7-c46dd3f010ad-xtables-lock\") pod \"kindnet-c2mb8\" (UID: \"09de881b-fbc4-4a8f-b8d7-c46dd3f010ad\") " pod="kube-system/kindnet-c2mb8"
	I0210 12:23:13.699422    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: I0210 12:21:59.016275    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/09de881b-fbc4-4a8f-b8d7-c46dd3f010ad-lib-modules\") pod \"kindnet-c2mb8\" (UID: \"09de881b-fbc4-4a8f-b8d7-c46dd3f010ad\") " pod="kube-system/kindnet-c2mb8"
	I0210 12:23:13.699460    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: E0210 12:21:59.016537    1648 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0210 12:23:13.699537    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: E0210 12:21:59.016667    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e45a37bf-e7da-4129-bb7e-8be7dbe93e09-config-volume podName:e45a37bf-e7da-4129-bb7e-8be7dbe93e09 nodeName:}" failed. No retries permitted until 2025-02-10 12:21:59.516646386 +0000 UTC m=+6.798651927 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e45a37bf-e7da-4129-bb7e-8be7dbe93e09-config-volume") pod "coredns-668d6bf9bc-w8rr9" (UID: "e45a37bf-e7da-4129-bb7e-8be7dbe93e09") : object "kube-system"/"coredns" not registered
	I0210 12:23:13.699610    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: I0210 12:21:59.031609    1648 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-multinode-032400" podStartSLOduration=1.031591606 podStartE2EDuration="1.031591606s" podCreationTimestamp="2025-02-10 12:21:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-10 12:21:59.030067233 +0000 UTC m=+6.312072774" watchObservedRunningTime="2025-02-10 12:21:59.031591606 +0000 UTC m=+6.313597247"
	I0210 12:23:13.699610    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: I0210 12:21:59.032295    1648 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-multinode-032400" podStartSLOduration=1.032275839 podStartE2EDuration="1.032275839s" podCreationTimestamp="2025-02-10 12:21:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-10 12:21:59.012105568 +0000 UTC m=+6.294111109" watchObservedRunningTime="2025-02-10 12:21:59.032275839 +0000 UTC m=+6.314281380"
	I0210 12:23:13.699693    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: I0210 12:21:59.063318    1648 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	I0210 12:23:13.699693    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: E0210 12:21:59.095091    1648 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:13.699748    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: E0210 12:21:59.095402    1648 projected.go:194] Error preparing data for projected volume kube-api-access-76fn6 for pod default/busybox-58667487b6-8shfg: object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:13.699789    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: E0210 12:21:59.095525    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a3e86dc5-0523-4852-af77-3145d44eaa15-kube-api-access-76fn6 podName:a3e86dc5-0523-4852-af77-3145d44eaa15 nodeName:}" failed. No retries permitted until 2025-02-10 12:21:59.595504083 +0000 UTC m=+6.877509724 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-76fn6" (UniqueName: "kubernetes.io/projected/a3e86dc5-0523-4852-af77-3145d44eaa15-kube-api-access-76fn6") pod "busybox-58667487b6-8shfg" (UID: "a3e86dc5-0523-4852-af77-3145d44eaa15") : object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:13.699822    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: E0210 12:21:59.520926    1648 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0210 12:23:13.699822    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: E0210 12:21:59.521021    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e45a37bf-e7da-4129-bb7e-8be7dbe93e09-config-volume podName:e45a37bf-e7da-4129-bb7e-8be7dbe93e09 nodeName:}" failed. No retries permitted until 2025-02-10 12:22:00.521001667 +0000 UTC m=+7.803007208 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e45a37bf-e7da-4129-bb7e-8be7dbe93e09-config-volume") pod "coredns-668d6bf9bc-w8rr9" (UID: "e45a37bf-e7da-4129-bb7e-8be7dbe93e09") : object "kube-system"/"coredns" not registered
	I0210 12:23:13.699822    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: E0210 12:21:59.622412    1648 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:13.699822    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: E0210 12:21:59.622461    1648 projected.go:194] Error preparing data for projected volume kube-api-access-76fn6 for pod default/busybox-58667487b6-8shfg: object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:13.699822    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: E0210 12:21:59.622532    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a3e86dc5-0523-4852-af77-3145d44eaa15-kube-api-access-76fn6 podName:a3e86dc5-0523-4852-af77-3145d44eaa15 nodeName:}" failed. No retries permitted until 2025-02-10 12:22:00.622511154 +0000 UTC m=+7.904516695 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-76fn6" (UniqueName: "kubernetes.io/projected/a3e86dc5-0523-4852-af77-3145d44eaa15-kube-api-access-76fn6") pod "busybox-58667487b6-8shfg" (UID: "a3e86dc5-0523-4852-af77-3145d44eaa15") : object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:13.699822    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 kubelet[1648]: I0210 12:21:59.790385    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd998e6ebeb39ea743489a0e4d48c282d6e8da289cd8341c4c5099d9836e6f73"
	I0210 12:23:13.699822    5644 command_runner.go:130] > Feb 10 12:22:00 multinode-032400 kubelet[1648]: I0210 12:22:00.168710    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e5b54589cf0f1effd7254987c6ce12359f402596b5494fa5c8f0bf296c219b89"
	I0210 12:23:13.699822    5644 command_runner.go:130] > Feb 10 12:22:00 multinode-032400 kubelet[1648]: I0210 12:22:00.246436    1648 kubelet.go:3183] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-032400" podUID="7a35472d-d7c0-4c7d-a5b1-e094370af1c2"
	I0210 12:23:13.699822    5644 command_runner.go:130] > Feb 10 12:22:00 multinode-032400 kubelet[1648]: I0210 12:22:00.246743    1648 kubelet.go:3183] "Trying to delete pod" pod="kube-system/etcd-multinode-032400" podUID="34db146c-e09d-4959-8325-d4453dfcfd62"
	I0210 12:23:13.699822    5644 command_runner.go:130] > Feb 10 12:22:00 multinode-032400 kubelet[1648]: E0210 12:22:00.528505    1648 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0210 12:23:13.699822    5644 command_runner.go:130] > Feb 10 12:22:00 multinode-032400 kubelet[1648]: E0210 12:22:00.528588    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e45a37bf-e7da-4129-bb7e-8be7dbe93e09-config-volume podName:e45a37bf-e7da-4129-bb7e-8be7dbe93e09 nodeName:}" failed. No retries permitted until 2025-02-10 12:22:02.528571773 +0000 UTC m=+9.810577314 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e45a37bf-e7da-4129-bb7e-8be7dbe93e09-config-volume") pod "coredns-668d6bf9bc-w8rr9" (UID: "e45a37bf-e7da-4129-bb7e-8be7dbe93e09") : object "kube-system"/"coredns" not registered
	I0210 12:23:13.699822    5644 command_runner.go:130] > Feb 10 12:22:00 multinode-032400 kubelet[1648]: E0210 12:22:00.629777    1648 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:13.699822    5644 command_runner.go:130] > Feb 10 12:22:00 multinode-032400 kubelet[1648]: E0210 12:22:00.629830    1648 projected.go:194] Error preparing data for projected volume kube-api-access-76fn6 for pod default/busybox-58667487b6-8shfg: object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:13.699822    5644 command_runner.go:130] > Feb 10 12:22:00 multinode-032400 kubelet[1648]: E0210 12:22:00.629883    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a3e86dc5-0523-4852-af77-3145d44eaa15-kube-api-access-76fn6 podName:a3e86dc5-0523-4852-af77-3145d44eaa15 nodeName:}" failed. No retries permitted until 2025-02-10 12:22:02.629867049 +0000 UTC m=+9.911872690 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-76fn6" (UniqueName: "kubernetes.io/projected/a3e86dc5-0523-4852-af77-3145d44eaa15-kube-api-access-76fn6") pod "busybox-58667487b6-8shfg" (UID: "a3e86dc5-0523-4852-af77-3145d44eaa15") : object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:13.699822    5644 command_runner.go:130] > Feb 10 12:22:00 multinode-032400 kubelet[1648]: E0210 12:22:00.983374    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:13.699822    5644 command_runner.go:130] > Feb 10 12:22:00 multinode-032400 kubelet[1648]: E0210 12:22:00.983940    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:13.699822    5644 command_runner.go:130] > Feb 10 12:22:02 multinode-032400 kubelet[1648]: E0210 12:22:02.548061    1648 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0210 12:23:13.700352    5644 command_runner.go:130] > Feb 10 12:22:02 multinode-032400 kubelet[1648]: E0210 12:22:02.548594    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e45a37bf-e7da-4129-bb7e-8be7dbe93e09-config-volume podName:e45a37bf-e7da-4129-bb7e-8be7dbe93e09 nodeName:}" failed. No retries permitted until 2025-02-10 12:22:06.548573918 +0000 UTC m=+13.830579559 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e45a37bf-e7da-4129-bb7e-8be7dbe93e09-config-volume") pod "coredns-668d6bf9bc-w8rr9" (UID: "e45a37bf-e7da-4129-bb7e-8be7dbe93e09") : object "kube-system"/"coredns" not registered
	I0210 12:23:13.700396    5644 command_runner.go:130] > Feb 10 12:22:02 multinode-032400 kubelet[1648]: E0210 12:22:02.648988    1648 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:13.700439    5644 command_runner.go:130] > Feb 10 12:22:02 multinode-032400 kubelet[1648]: E0210 12:22:02.649225    1648 projected.go:194] Error preparing data for projected volume kube-api-access-76fn6 for pod default/busybox-58667487b6-8shfg: object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:13.700471    5644 command_runner.go:130] > Feb 10 12:22:02 multinode-032400 kubelet[1648]: E0210 12:22:02.649292    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a3e86dc5-0523-4852-af77-3145d44eaa15-kube-api-access-76fn6 podName:a3e86dc5-0523-4852-af77-3145d44eaa15 nodeName:}" failed. No retries permitted until 2025-02-10 12:22:06.649274266 +0000 UTC m=+13.931279907 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-76fn6" (UniqueName: "kubernetes.io/projected/a3e86dc5-0523-4852-af77-3145d44eaa15-kube-api-access-76fn6") pod "busybox-58667487b6-8shfg" (UID: "a3e86dc5-0523-4852-af77-3145d44eaa15") : object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:13.700471    5644 command_runner.go:130] > Feb 10 12:22:02 multinode-032400 kubelet[1648]: E0210 12:22:02.982600    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:13.700471    5644 command_runner.go:130] > Feb 10 12:22:02 multinode-032400 kubelet[1648]: E0210 12:22:02.985279    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:13.700471    5644 command_runner.go:130] > Feb 10 12:22:03 multinode-032400 kubelet[1648]: E0210 12:22:03.006185    1648 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0210 12:23:13.700471    5644 command_runner.go:130] > Feb 10 12:22:04 multinode-032400 kubelet[1648]: E0210 12:22:04.982807    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:13.700471    5644 command_runner.go:130] > Feb 10 12:22:04 multinode-032400 kubelet[1648]: E0210 12:22:04.983881    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:13.700471    5644 command_runner.go:130] > Feb 10 12:22:06 multinode-032400 kubelet[1648]: E0210 12:22:06.583411    1648 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0210 12:23:13.700471    5644 command_runner.go:130] > Feb 10 12:22:06 multinode-032400 kubelet[1648]: E0210 12:22:06.583571    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e45a37bf-e7da-4129-bb7e-8be7dbe93e09-config-volume podName:e45a37bf-e7da-4129-bb7e-8be7dbe93e09 nodeName:}" failed. No retries permitted until 2025-02-10 12:22:14.583553968 +0000 UTC m=+21.865559509 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e45a37bf-e7da-4129-bb7e-8be7dbe93e09-config-volume") pod "coredns-668d6bf9bc-w8rr9" (UID: "e45a37bf-e7da-4129-bb7e-8be7dbe93e09") : object "kube-system"/"coredns" not registered
	I0210 12:23:13.700471    5644 command_runner.go:130] > Feb 10 12:22:06 multinode-032400 kubelet[1648]: E0210 12:22:06.684079    1648 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:13.700471    5644 command_runner.go:130] > Feb 10 12:22:06 multinode-032400 kubelet[1648]: E0210 12:22:06.684426    1648 projected.go:194] Error preparing data for projected volume kube-api-access-76fn6 for pod default/busybox-58667487b6-8shfg: object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:13.700471    5644 command_runner.go:130] > Feb 10 12:22:06 multinode-032400 kubelet[1648]: E0210 12:22:06.684521    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a3e86dc5-0523-4852-af77-3145d44eaa15-kube-api-access-76fn6 podName:a3e86dc5-0523-4852-af77-3145d44eaa15 nodeName:}" failed. No retries permitted until 2025-02-10 12:22:14.684501328 +0000 UTC m=+21.966506969 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-76fn6" (UniqueName: "kubernetes.io/projected/a3e86dc5-0523-4852-af77-3145d44eaa15-kube-api-access-76fn6") pod "busybox-58667487b6-8shfg" (UID: "a3e86dc5-0523-4852-af77-3145d44eaa15") : object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:13.700471    5644 command_runner.go:130] > Feb 10 12:22:06 multinode-032400 kubelet[1648]: E0210 12:22:06.982543    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:13.700471    5644 command_runner.go:130] > Feb 10 12:22:06 multinode-032400 kubelet[1648]: E0210 12:22:06.982901    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:13.700471    5644 command_runner.go:130] > Feb 10 12:22:08 multinode-032400 kubelet[1648]: E0210 12:22:08.007915    1648 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0210 12:23:13.700471    5644 command_runner.go:130] > Feb 10 12:22:08 multinode-032400 kubelet[1648]: E0210 12:22:08.983481    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:13.700996    5644 command_runner.go:130] > Feb 10 12:22:08 multinode-032400 kubelet[1648]: E0210 12:22:08.987585    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:13.701035    5644 command_runner.go:130] > Feb 10 12:22:10 multinode-032400 kubelet[1648]: E0210 12:22:10.981696    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:13.701071    5644 command_runner.go:130] > Feb 10 12:22:10 multinode-032400 kubelet[1648]: E0210 12:22:10.982314    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:13.701071    5644 command_runner.go:130] > Feb 10 12:22:12 multinode-032400 kubelet[1648]: E0210 12:22:12.982627    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:13.701071    5644 command_runner.go:130] > Feb 10 12:22:12 multinode-032400 kubelet[1648]: E0210 12:22:12.983351    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:13.701071    5644 command_runner.go:130] > Feb 10 12:22:13 multinode-032400 kubelet[1648]: E0210 12:22:13.008828    1648 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0210 12:23:13.701071    5644 command_runner.go:130] > Feb 10 12:22:14 multinode-032400 kubelet[1648]: E0210 12:22:14.650628    1648 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0210 12:23:13.701071    5644 command_runner.go:130] > Feb 10 12:22:14 multinode-032400 kubelet[1648]: E0210 12:22:14.650742    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e45a37bf-e7da-4129-bb7e-8be7dbe93e09-config-volume podName:e45a37bf-e7da-4129-bb7e-8be7dbe93e09 nodeName:}" failed. No retries permitted until 2025-02-10 12:22:30.650723092 +0000 UTC m=+37.932728733 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e45a37bf-e7da-4129-bb7e-8be7dbe93e09-config-volume") pod "coredns-668d6bf9bc-w8rr9" (UID: "e45a37bf-e7da-4129-bb7e-8be7dbe93e09") : object "kube-system"/"coredns" not registered
	I0210 12:23:13.701071    5644 command_runner.go:130] > Feb 10 12:22:14 multinode-032400 kubelet[1648]: E0210 12:22:14.751367    1648 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:13.701071    5644 command_runner.go:130] > Feb 10 12:22:14 multinode-032400 kubelet[1648]: E0210 12:22:14.751417    1648 projected.go:194] Error preparing data for projected volume kube-api-access-76fn6 for pod default/busybox-58667487b6-8shfg: object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:13.701071    5644 command_runner.go:130] > Feb 10 12:22:14 multinode-032400 kubelet[1648]: E0210 12:22:14.751468    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a3e86dc5-0523-4852-af77-3145d44eaa15-kube-api-access-76fn6 podName:a3e86dc5-0523-4852-af77-3145d44eaa15 nodeName:}" failed. No retries permitted until 2025-02-10 12:22:30.751452188 +0000 UTC m=+38.033457729 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-76fn6" (UniqueName: "kubernetes.io/projected/a3e86dc5-0523-4852-af77-3145d44eaa15-kube-api-access-76fn6") pod "busybox-58667487b6-8shfg" (UID: "a3e86dc5-0523-4852-af77-3145d44eaa15") : object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:13.701071    5644 command_runner.go:130] > Feb 10 12:22:14 multinode-032400 kubelet[1648]: E0210 12:22:14.983588    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:13.701071    5644 command_runner.go:130] > Feb 10 12:22:14 multinode-032400 kubelet[1648]: E0210 12:22:14.984681    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:13.701071    5644 command_runner.go:130] > Feb 10 12:22:16 multinode-032400 kubelet[1648]: E0210 12:22:16.982654    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:13.701071    5644 command_runner.go:130] > Feb 10 12:22:16 multinode-032400 kubelet[1648]: E0210 12:22:16.983601    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:13.701071    5644 command_runner.go:130] > Feb 10 12:22:18 multinode-032400 kubelet[1648]: E0210 12:22:18.010464    1648 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0210 12:23:13.701071    5644 command_runner.go:130] > Feb 10 12:22:18 multinode-032400 kubelet[1648]: E0210 12:22:18.983251    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:13.701670    5644 command_runner.go:130] > Feb 10 12:22:18 multinode-032400 kubelet[1648]: E0210 12:22:18.983452    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:13.701670    5644 command_runner.go:130] > Feb 10 12:22:20 multinode-032400 kubelet[1648]: E0210 12:22:20.982442    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:13.701670    5644 command_runner.go:130] > Feb 10 12:22:20 multinode-032400 kubelet[1648]: E0210 12:22:20.982861    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:13.701670    5644 command_runner.go:130] > Feb 10 12:22:22 multinode-032400 kubelet[1648]: E0210 12:22:22.981966    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:13.701670    5644 command_runner.go:130] > Feb 10 12:22:22 multinode-032400 kubelet[1648]: E0210 12:22:22.982555    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:13.701670    5644 command_runner.go:130] > Feb 10 12:22:23 multinode-032400 kubelet[1648]: E0210 12:22:23.011880    1648 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0210 12:23:13.701670    5644 command_runner.go:130] > Feb 10 12:22:24 multinode-032400 kubelet[1648]: E0210 12:22:24.982707    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:13.701670    5644 command_runner.go:130] > Feb 10 12:22:24 multinode-032400 kubelet[1648]: E0210 12:22:24.983675    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:13.701670    5644 command_runner.go:130] > Feb 10 12:22:26 multinode-032400 kubelet[1648]: E0210 12:22:26.983236    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:13.701670    5644 command_runner.go:130] > Feb 10 12:22:26 multinode-032400 kubelet[1648]: E0210 12:22:26.984691    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:13.701670    5644 command_runner.go:130] > Feb 10 12:22:28 multinode-032400 kubelet[1648]: E0210 12:22:28.013741    1648 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0210 12:23:13.701670    5644 command_runner.go:130] > Feb 10 12:22:28 multinode-032400 kubelet[1648]: E0210 12:22:28.989948    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:13.701670    5644 command_runner.go:130] > Feb 10 12:22:28 multinode-032400 kubelet[1648]: E0210 12:22:28.990610    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:13.701670    5644 command_runner.go:130] > Feb 10 12:22:30 multinode-032400 kubelet[1648]: E0210 12:22:30.698791    1648 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0210 12:23:13.701670    5644 command_runner.go:130] > Feb 10 12:22:30 multinode-032400 kubelet[1648]: E0210 12:22:30.698861    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e45a37bf-e7da-4129-bb7e-8be7dbe93e09-config-volume podName:e45a37bf-e7da-4129-bb7e-8be7dbe93e09 nodeName:}" failed. No retries permitted until 2025-02-10 12:23:02.698844474 +0000 UTC m=+69.980850115 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e45a37bf-e7da-4129-bb7e-8be7dbe93e09-config-volume") pod "coredns-668d6bf9bc-w8rr9" (UID: "e45a37bf-e7da-4129-bb7e-8be7dbe93e09") : object "kube-system"/"coredns" not registered
	I0210 12:23:13.702194    5644 command_runner.go:130] > Feb 10 12:22:30 multinode-032400 kubelet[1648]: E0210 12:22:30.799260    1648 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:13.702194    5644 command_runner.go:130] > Feb 10 12:22:30 multinode-032400 kubelet[1648]: E0210 12:22:30.799302    1648 projected.go:194] Error preparing data for projected volume kube-api-access-76fn6 for pod default/busybox-58667487b6-8shfg: object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:13.702225    5644 command_runner.go:130] > Feb 10 12:22:30 multinode-032400 kubelet[1648]: E0210 12:22:30.799372    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a3e86dc5-0523-4852-af77-3145d44eaa15-kube-api-access-76fn6 podName:a3e86dc5-0523-4852-af77-3145d44eaa15 nodeName:}" failed. No retries permitted until 2025-02-10 12:23:02.799354561 +0000 UTC m=+70.081360102 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-76fn6" (UniqueName: "kubernetes.io/projected/a3e86dc5-0523-4852-af77-3145d44eaa15-kube-api-access-76fn6") pod "busybox-58667487b6-8shfg" (UID: "a3e86dc5-0523-4852-af77-3145d44eaa15") : object "default"/"kube-root-ca.crt" not registered
	I0210 12:23:13.702225    5644 command_runner.go:130] > Feb 10 12:22:30 multinode-032400 kubelet[1648]: E0210 12:22:30.983005    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:13.702225    5644 command_runner.go:130] > Feb 10 12:22:30 multinode-032400 kubelet[1648]: E0210 12:22:30.983695    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:13.702225    5644 command_runner.go:130] > Feb 10 12:22:31 multinode-032400 kubelet[1648]: I0210 12:22:31.703771    1648 scope.go:117] "RemoveContainer" containerID="182c8395f5e1754689bcf73e94e561717c684af55894a2bd4cbd9d5e8d3dff12"
	I0210 12:23:13.702225    5644 command_runner.go:130] > Feb 10 12:22:31 multinode-032400 kubelet[1648]: I0210 12:22:31.704207    1648 scope.go:117] "RemoveContainer" containerID="e57ea4c7f300b864e801bda292b196e3a270d6bd3eb9cfac6bf5f66a858f9f7c"
	I0210 12:23:13.702225    5644 command_runner.go:130] > Feb 10 12:22:31 multinode-032400 kubelet[1648]: E0210 12:22:31.704351    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(c5a7f602-d41a-4d9c-8fb3-5cf1bb41aca0)\"" pod="kube-system/storage-provisioner" podUID="c5a7f602-d41a-4d9c-8fb3-5cf1bb41aca0"
	I0210 12:23:13.702225    5644 command_runner.go:130] > Feb 10 12:22:32 multinode-032400 kubelet[1648]: E0210 12:22:32.981673    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:13.702225    5644 command_runner.go:130] > Feb 10 12:22:32 multinode-032400 kubelet[1648]: E0210 12:22:32.982991    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:13.702225    5644 command_runner.go:130] > Feb 10 12:22:33 multinode-032400 kubelet[1648]: E0210 12:22:33.015385    1648 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0210 12:23:13.702225    5644 command_runner.go:130] > Feb 10 12:22:34 multinode-032400 kubelet[1648]: E0210 12:22:34.989854    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:13.702225    5644 command_runner.go:130] > Feb 10 12:22:34 multinode-032400 kubelet[1648]: E0210 12:22:34.989994    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:13.702225    5644 command_runner.go:130] > Feb 10 12:22:36 multinode-032400 kubelet[1648]: E0210 12:22:36.982057    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:13.702225    5644 command_runner.go:130] > Feb 10 12:22:36 multinode-032400 kubelet[1648]: E0210 12:22:36.982423    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:13.702225    5644 command_runner.go:130] > Feb 10 12:22:38 multinode-032400 kubelet[1648]: E0210 12:22:38.016614    1648 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0210 12:23:13.702225    5644 command_runner.go:130] > Feb 10 12:22:38 multinode-032400 kubelet[1648]: E0210 12:22:38.982466    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:13.702225    5644 command_runner.go:130] > Feb 10 12:22:38 multinode-032400 kubelet[1648]: E0210 12:22:38.982828    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:13.702789    5644 command_runner.go:130] > Feb 10 12:22:40 multinode-032400 kubelet[1648]: E0210 12:22:40.981790    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:13.702806    5644 command_runner.go:130] > Feb 10 12:22:40 multinode-032400 kubelet[1648]: E0210 12:22:40.986032    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:13.702806    5644 command_runner.go:130] > Feb 10 12:22:42 multinode-032400 kubelet[1648]: E0210 12:22:42.983120    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:13.702806    5644 command_runner.go:130] > Feb 10 12:22:42 multinode-032400 kubelet[1648]: E0210 12:22:42.983608    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:13.702806    5644 command_runner.go:130] > Feb 10 12:22:43 multinode-032400 kubelet[1648]: E0210 12:22:43.017646    1648 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0210 12:23:13.702806    5644 command_runner.go:130] > Feb 10 12:22:43 multinode-032400 kubelet[1648]: I0210 12:22:43.982665    1648 scope.go:117] "RemoveContainer" containerID="e57ea4c7f300b864e801bda292b196e3a270d6bd3eb9cfac6bf5f66a858f9f7c"
	I0210 12:23:13.702806    5644 command_runner.go:130] > Feb 10 12:22:44 multinode-032400 kubelet[1648]: E0210 12:22:44.981714    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:13.702806    5644 command_runner.go:130] > Feb 10 12:22:44 multinode-032400 kubelet[1648]: E0210 12:22:44.982071    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:13.702806    5644 command_runner.go:130] > Feb 10 12:22:46 multinode-032400 kubelet[1648]: E0210 12:22:46.982545    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	I0210 12:23:13.702806    5644 command_runner.go:130] > Feb 10 12:22:46 multinode-032400 kubelet[1648]: E0210 12:22:46.982810    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	I0210 12:23:13.702806    5644 command_runner.go:130] > Feb 10 12:22:52 multinode-032400 kubelet[1648]: I0210 12:22:52.997714    1648 scope.go:117] "RemoveContainer" containerID="3ae31c3c37c9f6044b62cd6f5c446e15340e314275faee0b1de4d66baa716012"
	I0210 12:23:13.702806    5644 command_runner.go:130] > Feb 10 12:22:53 multinode-032400 kubelet[1648]: E0210 12:22:53.021472    1648 iptables.go:577] "Could not set up iptables canary" err=<
	I0210 12:23:13.702806    5644 command_runner.go:130] > Feb 10 12:22:53 multinode-032400 kubelet[1648]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0210 12:23:13.702806    5644 command_runner.go:130] > Feb 10 12:22:53 multinode-032400 kubelet[1648]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0210 12:23:13.702806    5644 command_runner.go:130] > Feb 10 12:22:53 multinode-032400 kubelet[1648]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0210 12:23:13.702806    5644 command_runner.go:130] > Feb 10 12:22:53 multinode-032400 kubelet[1648]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0210 12:23:13.702806    5644 command_runner.go:130] > Feb 10 12:22:53 multinode-032400 kubelet[1648]: I0210 12:22:53.042255    1648 scope.go:117] "RemoveContainer" containerID="9f1c4e9b3353b37f1f4c40563f1e312a49acd43d398e81cab61930d654fbb0d9"
	I0210 12:23:13.702806    5644 command_runner.go:130] > Feb 10 12:23:03 multinode-032400 kubelet[1648]: I0210 12:23:03.312056    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e58006549b603113870cd02660dd94559d10ecb0913a1bdd2c81910c3d60a706"
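
The kubelet excerpt above reduces to two repeating conditions: "Error syncing pod" for kube-system/coredns-668d6bf9bc-w8rr9 and default/busybox-58667487b6-8shfg while the CNI config is uninitialized, and MountVolume.SetUp failures whose retry backoff doubles (16s at 12:22:14, 32s at 12:22:30). A minimal, hypothetical triage helper (not part of minikube) that collapses such an excerpt into per-pod counts, assuming only the pod="<ns>/<name>" field format visible in the lines above:

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"strings"
)

func main() {
	// kubelet appends pod="<namespace>/<name>" to these errors; tally per pod.
	podField := regexp.MustCompile(`pod="([^"]+)"`)
	counts := map[string]int{}

	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be long
	for sc.Scan() {
		line := sc.Text()
		if !strings.Contains(line, "Error syncing pod") {
			continue
		}
		if m := podField.FindStringSubmatch(line); m != nil {
			counts[m[1]]++
		}
	}
	for pod, n := range counts {
		fmt.Printf("%6d  %s\n", n, pod)
	}
}

Fed this journal excerpt on stdin, it should show those two pods accounting for essentially all of the sync-error noise.
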
	I0210 12:23:13.751690    5644 logs.go:123] Gathering logs for dmesg ...
	I0210 12:23:13.752317    5644 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0210 12:23:13.776812    5644 command_runner.go:130] > [Feb10 12:20] You have booted with nomodeset. This means your GPU drivers are DISABLED
	I0210 12:23:13.777045    5644 command_runner.go:130] > [  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	I0210 12:23:13.777045    5644 command_runner.go:130] > [  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	I0210 12:23:13.777045    5644 command_runner.go:130] > [  +0.108726] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	I0210 12:23:13.777045    5644 command_runner.go:130] > [  +0.024202] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	I0210 12:23:13.777168    5644 command_runner.go:130] > [  +0.000005] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	I0210 12:23:13.777209    5644 command_runner.go:130] > [  +0.000001] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	I0210 12:23:13.777209    5644 command_runner.go:130] > [  +0.062099] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	I0210 12:23:13.777245    5644 command_runner.go:130] > [  +0.027667] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
	I0210 12:23:13.777245    5644 command_runner.go:130] >               * this clock source is slow. Consider trying other clock sources
	I0210 12:23:13.777245    5644 command_runner.go:130] > [  +6.593821] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I0210 12:23:13.777245    5644 command_runner.go:130] > [  +1.312003] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	I0210 12:23:13.777245    5644 command_runner.go:130] > [  +1.333341] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	I0210 12:23:13.777245    5644 command_runner.go:130] > [  +7.447705] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	I0210 12:23:13.777245    5644 command_runner.go:130] > [  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	I0210 12:23:13.777245    5644 command_runner.go:130] > [  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	I0210 12:23:13.777245    5644 command_runner.go:130] > [Feb10 12:21] systemd-fstab-generator[632]: Ignoring "noauto" option for root device
	I0210 12:23:13.777245    5644 command_runner.go:130] > [  +0.179509] systemd-fstab-generator[644]: Ignoring "noauto" option for root device
	I0210 12:23:13.777245    5644 command_runner.go:130] > [ +24.780658] systemd-fstab-generator[1027]: Ignoring "noauto" option for root device
	I0210 12:23:13.777245    5644 command_runner.go:130] > [  +0.095136] kauditd_printk_skb: 75 callbacks suppressed
	I0210 12:23:13.777245    5644 command_runner.go:130] > [  +0.491986] systemd-fstab-generator[1067]: Ignoring "noauto" option for root device
	I0210 12:23:13.777245    5644 command_runner.go:130] > [  +0.201276] systemd-fstab-generator[1079]: Ignoring "noauto" option for root device
	I0210 12:23:13.777245    5644 command_runner.go:130] > [  +0.245181] systemd-fstab-generator[1093]: Ignoring "noauto" option for root device
	I0210 12:23:13.777245    5644 command_runner.go:130] > [  +2.938213] systemd-fstab-generator[1334]: Ignoring "noauto" option for root device
	I0210 12:23:13.777245    5644 command_runner.go:130] > [  +0.187151] systemd-fstab-generator[1346]: Ignoring "noauto" option for root device
	I0210 12:23:13.777245    5644 command_runner.go:130] > [  +0.177538] systemd-fstab-generator[1358]: Ignoring "noauto" option for root device
	I0210 12:23:13.777245    5644 command_runner.go:130] > [  +0.257474] systemd-fstab-generator[1373]: Ignoring "noauto" option for root device
	I0210 12:23:13.777245    5644 command_runner.go:130] > [  +0.873551] systemd-fstab-generator[1498]: Ignoring "noauto" option for root device
	I0210 12:23:13.777245    5644 command_runner.go:130] > [  +0.096158] kauditd_printk_skb: 206 callbacks suppressed
	I0210 12:23:13.777245    5644 command_runner.go:130] > [  +3.180590] systemd-fstab-generator[1641]: Ignoring "noauto" option for root device
	I0210 12:23:13.777245    5644 command_runner.go:130] > [  +1.822530] kauditd_printk_skb: 69 callbacks suppressed
	I0210 12:23:13.777245    5644 command_runner.go:130] > [  +5.251804] kauditd_printk_skb: 5 callbacks suppressed
	I0210 12:23:13.777245    5644 command_runner.go:130] > [Feb10 12:22] systemd-fstab-generator[2523]: Ignoring "noauto" option for root device
	I0210 12:23:13.777245    5644 command_runner.go:130] > [ +27.335254] kauditd_printk_skb: 72 callbacks suppressed
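
The dmesg pass above comes from the exact pipeline in the preceding Run line (sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400), and what it captures (nomodeset, RETBleed/MDS/TAA advisories, systemd-fstab-generator notices) looks like routine guest-boot noise rather than anything CNI-related. A sketch of reproducing the same severity filter locally with os/exec, assuming a Linux host with util-linux dmesg and passwordless sudo:

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// Flag set copied verbatim from the ssh_runner Run line above.
	cmd := exec.Command("/bin/bash", "-c",
		`sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("dmesg filter: %v", err)
	}
}
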
	I0210 12:23:13.779709    5644 logs.go:123] Gathering logs for kube-apiserver [f368bd876774] ...
	I0210 12:23:13.779709    5644 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f368bd876774"
	I0210 12:23:13.813408    5644 command_runner.go:130] ! W0210 12:21:55.142359       1 registry.go:256] calling componentGlobalsRegistry.AddFlags more than once, the registry will be set by the latest flags
	I0210 12:23:13.813481    5644 command_runner.go:130] ! I0210 12:21:55.145301       1 options.go:238] external host was not specified, using 172.29.129.181
	I0210 12:23:13.813481    5644 command_runner.go:130] ! I0210 12:21:55.152669       1 server.go:143] Version: v1.32.1
	I0210 12:23:13.813517    5644 command_runner.go:130] ! I0210 12:21:55.155205       1 server.go:145] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0210 12:23:13.813517    5644 command_runner.go:130] ! I0210 12:21:56.105409       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0210 12:23:13.813517    5644 command_runner.go:130] ! I0210 12:21:56.132590       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0210 12:23:13.813564    5644 command_runner.go:130] ! I0210 12:21:56.143671       1 plugins.go:157] Loaded 13 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I0210 12:23:13.813606    5644 command_runner.go:130] ! I0210 12:21:56.143842       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0210 12:23:13.813606    5644 command_runner.go:130] ! I0210 12:21:56.149478       1 instance.go:233] Using reconciler: lease
	I0210 12:23:13.813606    5644 command_runner.go:130] ! I0210 12:21:56.242968       1 handler.go:286] Adding GroupVersion apiextensions.k8s.io v1 to ResourceManager
	I0210 12:23:13.813679    5644 command_runner.go:130] ! W0210 12:21:56.243233       1 genericapiserver.go:767] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
	I0210 12:23:13.813679    5644 command_runner.go:130] ! I0210 12:21:56.576352       1 handler.go:286] Adding GroupVersion  v1 to ResourceManager
	I0210 12:23:13.813679    5644 command_runner.go:130] ! I0210 12:21:56.576865       1 apis.go:106] API group "internal.apiserver.k8s.io" is not enabled, skipping.
	I0210 12:23:13.813723    5644 command_runner.go:130] ! I0210 12:21:56.980973       1 apis.go:106] API group "storagemigration.k8s.io" is not enabled, skipping.
	I0210 12:23:13.813723    5644 command_runner.go:130] ! I0210 12:21:57.288861       1 apis.go:106] API group "resource.k8s.io" is not enabled, skipping.
	I0210 12:23:13.813723    5644 command_runner.go:130] ! I0210 12:21:57.344145       1 handler.go:286] Adding GroupVersion authentication.k8s.io v1 to ResourceManager
	I0210 12:23:13.813723    5644 command_runner.go:130] ! W0210 12:21:57.344213       1 genericapiserver.go:767] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
	I0210 12:23:13.813787    5644 command_runner.go:130] ! W0210 12:21:57.344222       1 genericapiserver.go:767] Skipping API authentication.k8s.io/v1alpha1 because it has no resources.
	I0210 12:23:13.813787    5644 command_runner.go:130] ! I0210 12:21:57.345004       1 handler.go:286] Adding GroupVersion authorization.k8s.io v1 to ResourceManager
	I0210 12:23:13.813787    5644 command_runner.go:130] ! W0210 12:21:57.345107       1 genericapiserver.go:767] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
	I0210 12:23:13.813833    5644 command_runner.go:130] ! I0210 12:21:57.346842       1 handler.go:286] Adding GroupVersion autoscaling v2 to ResourceManager
	I0210 12:23:13.813833    5644 command_runner.go:130] ! I0210 12:21:57.348477       1 handler.go:286] Adding GroupVersion autoscaling v1 to ResourceManager
	I0210 12:23:13.813833    5644 command_runner.go:130] ! W0210 12:21:57.349989       1 genericapiserver.go:767] Skipping API autoscaling/v2beta1 because it has no resources.
	I0210 12:23:13.813833    5644 command_runner.go:130] ! W0210 12:21:57.349999       1 genericapiserver.go:767] Skipping API autoscaling/v2beta2 because it has no resources.
	I0210 12:23:13.813833    5644 command_runner.go:130] ! I0210 12:21:57.351719       1 handler.go:286] Adding GroupVersion batch v1 to ResourceManager
	I0210 12:23:13.813901    5644 command_runner.go:130] ! W0210 12:21:57.351750       1 genericapiserver.go:767] Skipping API batch/v1beta1 because it has no resources.
	I0210 12:23:13.813901    5644 command_runner.go:130] ! I0210 12:21:57.352799       1 handler.go:286] Adding GroupVersion certificates.k8s.io v1 to ResourceManager
	I0210 12:23:13.813946    5644 command_runner.go:130] ! W0210 12:21:57.352837       1 genericapiserver.go:767] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
	I0210 12:23:13.813946    5644 command_runner.go:130] ! W0210 12:21:57.352843       1 genericapiserver.go:767] Skipping API certificates.k8s.io/v1alpha1 because it has no resources.
	I0210 12:23:13.813946    5644 command_runner.go:130] ! I0210 12:21:57.353578       1 handler.go:286] Adding GroupVersion coordination.k8s.io v1 to ResourceManager
	I0210 12:23:13.814009    5644 command_runner.go:130] ! W0210 12:21:57.353613       1 genericapiserver.go:767] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
	I0210 12:23:13.814009    5644 command_runner.go:130] ! W0210 12:21:57.353620       1 genericapiserver.go:767] Skipping API coordination.k8s.io/v1alpha2 because it has no resources.
	I0210 12:23:13.814054    5644 command_runner.go:130] ! I0210 12:21:57.354314       1 handler.go:286] Adding GroupVersion discovery.k8s.io v1 to ResourceManager
	I0210 12:23:13.814054    5644 command_runner.go:130] ! W0210 12:21:57.354346       1 genericapiserver.go:767] Skipping API discovery.k8s.io/v1beta1 because it has no resources.
	I0210 12:23:13.814054    5644 command_runner.go:130] ! I0210 12:21:57.356000       1 handler.go:286] Adding GroupVersion networking.k8s.io v1 to ResourceManager
	I0210 12:23:13.814109    5644 command_runner.go:130] ! W0210 12:21:57.356105       1 genericapiserver.go:767] Skipping API networking.k8s.io/v1beta1 because it has no resources.
	I0210 12:23:13.814109    5644 command_runner.go:130] ! W0210 12:21:57.356115       1 genericapiserver.go:767] Skipping API networking.k8s.io/v1alpha1 because it has no resources.
	I0210 12:23:13.814144    5644 command_runner.go:130] ! I0210 12:21:57.356604       1 handler.go:286] Adding GroupVersion node.k8s.io v1 to ResourceManager
	I0210 12:23:13.814144    5644 command_runner.go:130] ! W0210 12:21:57.356637       1 genericapiserver.go:767] Skipping API node.k8s.io/v1beta1 because it has no resources.
	I0210 12:23:13.814179    5644 command_runner.go:130] ! W0210 12:21:57.356644       1 genericapiserver.go:767] Skipping API node.k8s.io/v1alpha1 because it has no resources.
	I0210 12:23:13.814179    5644 command_runner.go:130] ! I0210 12:21:57.357607       1 handler.go:286] Adding GroupVersion policy v1 to ResourceManager
	I0210 12:23:13.814232    5644 command_runner.go:130] ! W0210 12:21:57.357643       1 genericapiserver.go:767] Skipping API policy/v1beta1 because it has no resources.
	I0210 12:23:13.814232    5644 command_runner.go:130] ! I0210 12:21:57.359912       1 handler.go:286] Adding GroupVersion rbac.authorization.k8s.io v1 to ResourceManager
	I0210 12:23:13.814263    5644 command_runner.go:130] ! W0210 12:21:57.359944       1 genericapiserver.go:767] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
	I0210 12:23:13.814263    5644 command_runner.go:130] ! W0210 12:21:57.359952       1 genericapiserver.go:767] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
	I0210 12:23:13.814263    5644 command_runner.go:130] ! I0210 12:21:57.360554       1 handler.go:286] Adding GroupVersion scheduling.k8s.io v1 to ResourceManager
	I0210 12:23:13.814263    5644 command_runner.go:130] ! W0210 12:21:57.360628       1 genericapiserver.go:767] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
	I0210 12:23:13.814324    5644 command_runner.go:130] ! W0210 12:21:57.360635       1 genericapiserver.go:767] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
	I0210 12:23:13.814324    5644 command_runner.go:130] ! I0210 12:21:57.363612       1 handler.go:286] Adding GroupVersion storage.k8s.io v1 to ResourceManager
	I0210 12:23:13.814324    5644 command_runner.go:130] ! W0210 12:21:57.363646       1 genericapiserver.go:767] Skipping API storage.k8s.io/v1beta1 because it has no resources.
	I0210 12:23:13.814369    5644 command_runner.go:130] ! W0210 12:21:57.363653       1 genericapiserver.go:767] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
	I0210 12:23:13.814369    5644 command_runner.go:130] ! I0210 12:21:57.365567       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1 to ResourceManager
	I0210 12:23:13.814369    5644 command_runner.go:130] ! W0210 12:21:57.365626       1 genericapiserver.go:767] Skipping API flowcontrol.apiserver.k8s.io/v1beta3 because it has no resources.
	I0210 12:23:13.814431    5644 command_runner.go:130] ! W0210 12:21:57.365637       1 genericapiserver.go:767] Skipping API flowcontrol.apiserver.k8s.io/v1beta2 because it has no resources.
	I0210 12:23:13.814431    5644 command_runner.go:130] ! W0210 12:21:57.365642       1 genericapiserver.go:767] Skipping API flowcontrol.apiserver.k8s.io/v1beta1 because it has no resources.
	I0210 12:23:13.814475    5644 command_runner.go:130] ! I0210 12:21:57.371693       1 handler.go:286] Adding GroupVersion apps v1 to ResourceManager
	I0210 12:23:13.814475    5644 command_runner.go:130] ! W0210 12:21:57.371726       1 genericapiserver.go:767] Skipping API apps/v1beta2 because it has no resources.
	I0210 12:23:13.814475    5644 command_runner.go:130] ! W0210 12:21:57.371732       1 genericapiserver.go:767] Skipping API apps/v1beta1 because it has no resources.
	I0210 12:23:13.814475    5644 command_runner.go:130] ! I0210 12:21:57.374238       1 handler.go:286] Adding GroupVersion admissionregistration.k8s.io v1 to ResourceManager
	I0210 12:23:13.814475    5644 command_runner.go:130] ! W0210 12:21:57.374275       1 genericapiserver.go:767] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
	I0210 12:23:13.814542    5644 command_runner.go:130] ! W0210 12:21:57.374303       1 genericapiserver.go:767] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
	I0210 12:23:13.814542    5644 command_runner.go:130] ! I0210 12:21:57.375143       1 handler.go:286] Adding GroupVersion events.k8s.io v1 to ResourceManager
	I0210 12:23:13.814542    5644 command_runner.go:130] ! W0210 12:21:57.375210       1 genericapiserver.go:767] Skipping API events.k8s.io/v1beta1 because it has no resources.
	I0210 12:23:13.814591    5644 command_runner.go:130] ! I0210 12:21:57.389235       1 handler.go:286] Adding GroupVersion apiregistration.k8s.io v1 to ResourceManager
	I0210 12:23:13.814591    5644 command_runner.go:130] ! W0210 12:21:57.389296       1 genericapiserver.go:767] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
	I0210 12:23:13.814634    5644 command_runner.go:130] ! I0210 12:21:58.039635       1 secure_serving.go:213] Serving securely on [::]:8443
	I0210 12:23:13.814634    5644 command_runner.go:130] ! I0210 12:21:58.039773       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0210 12:23:13.814670    5644 command_runner.go:130] ! I0210 12:21:58.040121       1 dynamic_serving_content.go:135] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0210 12:23:13.814670    5644 command_runner.go:130] ! I0210 12:21:58.040710       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0210 12:23:13.814670    5644 command_runner.go:130] ! I0210 12:21:58.048362       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0210 12:23:13.814739    5644 command_runner.go:130] ! I0210 12:21:58.048918       1 customresource_discovery_controller.go:292] Starting DiscoveryController
	I0210 12:23:13.814739    5644 command_runner.go:130] ! I0210 12:21:58.049825       1 cluster_authentication_trust_controller.go:462] Starting cluster_authentication_trust_controller controller
	I0210 12:23:13.814782    5644 command_runner.go:130] ! I0210 12:21:58.049971       1 shared_informer.go:313] Waiting for caches to sync for cluster_authentication_trust_controller
	I0210 12:23:13.814782    5644 command_runner.go:130] ! I0210 12:21:58.052014       1 local_available_controller.go:156] Starting LocalAvailability controller
	I0210 12:23:13.814782    5644 command_runner.go:130] ! I0210 12:21:58.052237       1 cache.go:32] Waiting for caches to sync for LocalAvailability controller
	I0210 12:23:13.814782    5644 command_runner.go:130] ! I0210 12:21:58.052355       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0210 12:23:13.814782    5644 command_runner.go:130] ! I0210 12:21:58.052595       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0210 12:23:13.814849    5644 command_runner.go:130] ! I0210 12:21:58.052911       1 controller.go:78] Starting OpenAPI AggregationController
	I0210 12:23:13.814849    5644 command_runner.go:130] ! I0210 12:21:58.053131       1 controller.go:119] Starting legacy_token_tracking_controller
	I0210 12:23:13.814849    5644 command_runner.go:130] ! I0210 12:21:58.053221       1 shared_informer.go:313] Waiting for caches to sync for configmaps
	I0210 12:23:13.814895    5644 command_runner.go:130] ! I0210 12:21:58.053335       1 apf_controller.go:377] Starting API Priority and Fairness config controller
	I0210 12:23:13.814895    5644 command_runner.go:130] ! I0210 12:21:58.053483       1 remote_available_controller.go:411] Starting RemoteAvailability controller
	I0210 12:23:13.814895    5644 command_runner.go:130] ! I0210 12:21:58.053515       1 cache.go:32] Waiting for caches to sync for RemoteAvailability controller
	I0210 12:23:13.814895    5644 command_runner.go:130] ! I0210 12:21:58.053696       1 dynamic_serving_content.go:135] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0210 12:23:13.814959    5644 command_runner.go:130] ! I0210 12:21:58.054087       1 system_namespaces_controller.go:66] Starting system namespaces controller
	I0210 12:23:13.815003    5644 command_runner.go:130] ! I0210 12:21:58.054528       1 apiservice_controller.go:100] Starting APIServiceRegistrationController
	I0210 12:23:13.815003    5644 command_runner.go:130] ! I0210 12:21:58.054570       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0210 12:23:13.815003    5644 command_runner.go:130] ! I0210 12:21:58.054742       1 aggregator.go:169] waiting for initial CRD sync...
	I0210 12:23:13.815003    5644 command_runner.go:130] ! I0210 12:21:58.055217       1 controller.go:142] Starting OpenAPI controller
	I0210 12:23:13.815003    5644 command_runner.go:130] ! I0210 12:21:58.055546       1 controller.go:90] Starting OpenAPI V3 controller
	I0210 12:23:13.815072    5644 command_runner.go:130] ! I0210 12:21:58.055757       1 naming_controller.go:294] Starting NamingConditionController
	I0210 12:23:13.815072    5644 command_runner.go:130] ! I0210 12:21:58.056074       1 establishing_controller.go:81] Starting EstablishingController
	I0210 12:23:13.815117    5644 command_runner.go:130] ! I0210 12:21:58.056264       1 nonstructuralschema_controller.go:195] Starting NonStructuralSchemaConditionController
	I0210 12:23:13.815117    5644 command_runner.go:130] ! I0210 12:21:58.056315       1 apiapproval_controller.go:189] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0210 12:23:13.815117    5644 command_runner.go:130] ! I0210 12:21:58.056330       1 crd_finalizer.go:269] Starting CRDFinalizer
	I0210 12:23:13.815117    5644 command_runner.go:130] ! I0210 12:21:58.056364       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0210 12:23:13.815169    5644 command_runner.go:130] ! I0210 12:21:58.056531       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0210 12:23:13.815169    5644 command_runner.go:130] ! I0210 12:21:58.082011       1 crdregistration_controller.go:114] Starting crd-autoregister controller
	I0210 12:23:13.815218    5644 command_runner.go:130] ! I0210 12:21:58.082050       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0210 12:23:13.815218    5644 command_runner.go:130] ! I0210 12:21:58.191638       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0210 12:23:13.815218    5644 command_runner.go:130] ! I0210 12:21:58.191858       1 policy_source.go:240] refreshing policies
	I0210 12:23:13.815218    5644 command_runner.go:130] ! I0210 12:21:58.208845       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0210 12:23:13.815280    5644 command_runner.go:130] ! I0210 12:21:58.215564       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0210 12:23:13.815280    5644 command_runner.go:130] ! I0210 12:21:58.251691       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0210 12:23:13.815280    5644 command_runner.go:130] ! I0210 12:21:58.252469       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0210 12:23:13.815325    5644 command_runner.go:130] ! I0210 12:21:58.253733       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0210 12:23:13.815325    5644 command_runner.go:130] ! I0210 12:21:58.254978       1 shared_informer.go:320] Caches are synced for configmaps
	I0210 12:23:13.815325    5644 command_runner.go:130] ! I0210 12:21:58.255116       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0210 12:23:13.815325    5644 command_runner.go:130] ! I0210 12:21:58.255193       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0210 12:23:13.815381    5644 command_runner.go:130] ! I0210 12:21:58.258943       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0210 12:23:13.815424    5644 command_runner.go:130] ! I0210 12:21:58.265934       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0210 12:23:13.815424    5644 command_runner.go:130] ! I0210 12:21:58.282090       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0210 12:23:13.815424    5644 command_runner.go:130] ! I0210 12:21:58.282373       1 aggregator.go:171] initial CRD sync complete...
	I0210 12:23:13.815424    5644 command_runner.go:130] ! I0210 12:21:58.282566       1 autoregister_controller.go:144] Starting autoregister controller
	I0210 12:23:13.815483    5644 command_runner.go:130] ! I0210 12:21:58.282687       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0210 12:23:13.815483    5644 command_runner.go:130] ! I0210 12:21:58.282810       1 cache.go:39] Caches are synced for autoregister controller
	I0210 12:23:13.815483    5644 command_runner.go:130] ! I0210 12:21:59.040333       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0210 12:23:13.815527    5644 command_runner.go:130] ! I0210 12:21:59.110768       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0210 12:23:13.815527    5644 command_runner.go:130] ! W0210 12:21:59.681925       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.29.129.181]
	I0210 12:23:13.815527    5644 command_runner.go:130] ! I0210 12:21:59.683607       1 controller.go:615] quota admission added evaluator for: endpoints
	I0210 12:23:13.815527    5644 command_runner.go:130] ! I0210 12:21:59.696506       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0210 12:23:13.815586    5644 command_runner.go:130] ! I0210 12:22:01.024466       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0210 12:23:13.815586    5644 command_runner.go:130] ! I0210 12:22:01.275329       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0210 12:23:13.815586    5644 command_runner.go:130] ! I0210 12:22:01.581954       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0210 12:23:13.815631    5644 command_runner.go:130] ! I0210 12:22:01.624349       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0210 12:23:13.815631    5644 command_runner.go:130] ! I0210 12:22:01.647929       1 controller.go:615] quota admission added evaluator for: replicasets.apps
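
In contrast with the kubelet noise, this kube-apiserver excerpt shows a clean restart: secure serving on [::]:8443 at 12:21:58, all caches synced, and quota evaluators registered through 12:22:01. Note also the marker change in the wrapped lines: the docker-logs excerpts carry "!" where the journal and dmesg passes carried ">", which in this capture appears to distinguish the child command's stderr from its stdout. A hypothetical unwrapper (not a minikube tool) to recover the raw component lines, assuming only those two marker forms:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	// Both marker variants observed in this report; "!" lines appear to be stderr.
	markers := []string{"command_runner.go:130] > ", "command_runner.go:130] ! "}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024)
	for sc.Scan() {
		line := sc.Text()
		for _, m := range markers {
			if i := strings.Index(line, m); i >= 0 {
				fmt.Println(line[i+len(m):]) // raw component log line
				break
			}
		}
	}
}
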
	I0210 12:23:13.827876    5644 logs.go:123] Gathering logs for kube-scheduler [adf520f9b9d7] ...
	I0210 12:23:13.827876    5644 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 adf520f9b9d7"
	I0210 12:23:13.856409    5644 command_runner.go:130] ! I0210 11:59:00.019140       1 serving.go:386] Generated self-signed cert in-memory
	I0210 12:23:13.856460    5644 command_runner.go:130] ! W0210 11:59:02.451878       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0210 12:23:13.856499    5644 command_runner.go:130] ! W0210 11:59:02.452178       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0210 12:23:13.856499    5644 command_runner.go:130] ! W0210 11:59:02.452350       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0210 12:23:13.856565    5644 command_runner.go:130] ! W0210 11:59:02.452478       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0210 12:23:13.856603    5644 command_runner.go:130] ! I0210 11:59:02.632458       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.1"
	I0210 12:23:13.856603    5644 command_runner.go:130] ! I0210 11:59:02.632517       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0210 12:23:13.856654    5644 command_runner.go:130] ! I0210 11:59:02.686485       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0210 12:23:13.856654    5644 command_runner.go:130] ! I0210 11:59:02.686744       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0210 12:23:13.856700    5644 command_runner.go:130] ! I0210 11:59:02.689142       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0210 12:23:13.856700    5644 command_runner.go:130] ! I0210 11:59:02.708240       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0210 12:23:13.856742    5644 command_runner.go:130] ! W0210 11:59:02.715958       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0210 12:23:13.856787    5644 command_runner.go:130] ! W0210 11:59:02.751571       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0210 12:23:13.856835    5644 command_runner.go:130] ! E0210 11:59:02.751658       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0210 12:23:13.856881    5644 command_runner.go:130] ! E0210 11:59:02.717894       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:13.856928    5644 command_runner.go:130] ! W0210 11:59:02.766153       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0210 12:23:13.856973    5644 command_runner.go:130] ! E0210 11:59:02.768039       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:13.857021    5644 command_runner.go:130] ! W0210 11:59:02.768257       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0210 12:23:13.857066    5644 command_runner.go:130] ! E0210 11:59:02.768346       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:13.857113    5644 command_runner.go:130] ! W0210 11:59:02.766789       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0210 12:23:13.857167    5644 command_runner.go:130] ! E0210 11:59:02.768584       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:13.857216    5644 command_runner.go:130] ! W0210 11:59:02.766885       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0210 12:23:13.857216    5644 command_runner.go:130] ! E0210 11:59:02.768838       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:13.857310    5644 command_runner.go:130] ! W0210 11:59:02.769507       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0210 12:23:13.857355    5644 command_runner.go:130] ! E0210 11:59:02.778960       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:13.857404    5644 command_runner.go:130] ! W0210 11:59:02.769773       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0210 12:23:13.857404    5644 command_runner.go:130] ! E0210 11:59:02.779013       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:13.857500    5644 command_runner.go:130] ! W0210 11:59:02.767082       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0210 12:23:13.857532    5644 command_runner.go:130] ! E0210 11:59:02.779037       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:13.857581    5644 command_runner.go:130] ! W0210 11:59:02.767143       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0210 12:23:13.857626    5644 command_runner.go:130] ! E0210 11:59:02.779057       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:13.857668    5644 command_runner.go:130] ! W0210 11:59:02.767174       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0210 12:23:13.857714    5644 command_runner.go:130] ! E0210 11:59:02.779079       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:13.857762    5644 command_runner.go:130] ! W0210 11:59:02.767205       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0210 12:23:13.857807    5644 command_runner.go:130] ! E0210 11:59:02.779095       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:13.857855    5644 command_runner.go:130] ! W0210 11:59:02.767318       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	I0210 12:23:13.857900    5644 command_runner.go:130] ! E0210 11:59:02.779525       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:13.857948    5644 command_runner.go:130] ! W0210 11:59:02.769947       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0210 12:23:13.857948    5644 command_runner.go:130] ! E0210 11:59:02.779843       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:13.858035    5644 command_runner.go:130] ! W0210 11:59:02.769992       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0210 12:23:13.858080    5644 command_runner.go:130] ! E0210 11:59:02.779885       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:13.858129    5644 command_runner.go:130] ! W0210 11:59:02.767047       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0210 12:23:13.858129    5644 command_runner.go:130] ! E0210 11:59:02.779962       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:13.858175    5644 command_runner.go:130] ! W0210 11:59:03.612263       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0210 12:23:13.858267    5644 command_runner.go:130] ! E0210 11:59:03.612405       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:13.858315    5644 command_runner.go:130] ! W0210 11:59:03.698062       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0210 12:23:13.858315    5644 command_runner.go:130] ! E0210 11:59:03.698491       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:13.858361    5644 command_runner.go:130] ! W0210 11:59:03.766764       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0210 12:23:13.858453    5644 command_runner.go:130] ! E0210 11:59:03.767296       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:13.858453    5644 command_runner.go:130] ! W0210 11:59:03.769299       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0210 12:23:13.858500    5644 command_runner.go:130] ! E0210 11:59:03.769340       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:13.858547    5644 command_runner.go:130] ! W0210 11:59:03.811212       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0210 12:23:13.858593    5644 command_runner.go:130] ! E0210 11:59:03.811686       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:13.858638    5644 command_runner.go:130] ! W0210 11:59:03.864096       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0210 12:23:13.858686    5644 command_runner.go:130] ! E0210 11:59:03.864216       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:13.858731    5644 command_runner.go:130] ! W0210 11:59:03.954246       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0210 12:23:13.858773    5644 command_runner.go:130] ! E0210 11:59:03.955266       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0210 12:23:13.858817    5644 command_runner.go:130] ! W0210 11:59:03.968978       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0210 12:23:13.858911    5644 command_runner.go:130] ! E0210 11:59:03.969083       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:13.858952    5644 command_runner.go:130] ! W0210 11:59:04.075142       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0210 12:23:13.858997    5644 command_runner.go:130] ! E0210 11:59:04.075319       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:13.859089    5644 command_runner.go:130] ! W0210 11:59:04.157608       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	I0210 12:23:13.859131    5644 command_runner.go:130] ! E0210 11:59:04.157748       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:13.859189    5644 command_runner.go:130] ! W0210 11:59:04.210028       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0210 12:23:13.859233    5644 command_runner.go:130] ! E0210 11:59:04.210075       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:13.859274    5644 command_runner.go:130] ! W0210 11:59:04.217612       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0210 12:23:13.859274    5644 command_runner.go:130] ! E0210 11:59:04.217750       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:13.859317    5644 command_runner.go:130] ! W0210 11:59:04.256700       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0210 12:23:13.859358    5644 command_runner.go:130] ! E0210 11:59:04.256904       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:13.859400    5644 command_runner.go:130] ! W0210 11:59:04.322264       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0210 12:23:13.859440    5644 command_runner.go:130] ! E0210 11:59:04.322526       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:13.859483    5644 command_runner.go:130] ! W0210 11:59:04.330202       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0210 12:23:13.859523    5644 command_runner.go:130] ! E0210 11:59:04.330584       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:13.859565    5644 command_runner.go:130] ! W0210 11:59:04.340076       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0210 12:23:13.859565    5644 command_runner.go:130] ! E0210 11:59:04.340159       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:13.859606    5644 command_runner.go:130] ! W0210 11:59:05.486363       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0210 12:23:13.859650    5644 command_runner.go:130] ! E0210 11:59:05.486509       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:13.859685    5644 command_runner.go:130] ! W0210 11:59:05.818248       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0210 12:23:13.859714    5644 command_runner.go:130] ! E0210 11:59:05.818617       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:13.859714    5644 command_runner.go:130] ! W0210 11:59:06.039223       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0210 12:23:13.859714    5644 command_runner.go:130] ! E0210 11:59:06.039458       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:13.859714    5644 command_runner.go:130] ! W0210 11:59:06.087664       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0210 12:23:13.859714    5644 command_runner.go:130] ! E0210 11:59:06.087837       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0210 12:23:13.859714    5644 command_runner.go:130] ! I0210 11:59:07.187201       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0210 12:23:13.859714    5644 command_runner.go:130] ! I0210 12:19:35.481531       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0210 12:23:13.859714    5644 command_runner.go:130] ! I0210 12:19:35.493085       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0210 12:23:13.859714    5644 command_runner.go:130] ! I0210 12:19:35.493424       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0210 12:23:13.859714    5644 command_runner.go:130] ! E0210 12:19:35.644426       1 run.go:72] "command failed" err="finished without leader elect"
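Reading the scheduler log above: the burst of "forbidden" list/watch errors between 11:59:02 and 11:59:06 is the usual startup race, where kube-scheduler's reflectors begin listing resources before its RBAC bindings are visible to the authorizer; they retry until "Caches are synced" at 11:59:07, and the closing "finished without leader elect" at 12:19:35 matches the node being stopped rather than a crash. If such errors persisted past cache sync, a minimal diagnostic sketch using this report's own tooling would be the following (the multinode-032400 profile name is taken from the kindnet log below; impersonation via --as assumes admin credentials, and this command is illustrative, not part of the test run):

	# Minimal sketch: ask the API server whether the scheduler identity may
	# list storageclasses, the first resource rejected in the log above.
	out/minikube-windows-amd64.exe -p multinode-032400 kubectl -- auth can-i list storageclasses.storage.k8s.io --as=system:kube-scheduler

A "yes" here after startup would confirm the earlier denials were transient RBAC propagation, not a misconfigured ClusterRoleBinding.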
	I0210 12:23:13.874894    5644 logs.go:123] Gathering logs for coredns [c5b854dbb912] ...
	I0210 12:23:13.874894    5644 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5b854dbb912"
	I0210 12:23:13.904521    5644 command_runner.go:130] > .:53
	I0210 12:23:13.904521    5644 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = fef4eccd4948b7d76fb3b866fead119cfbf3f792b566bc1c23dd6fceadf676c5da93bdd173179090b2c74bc875a74fc76ef5e51dcb6a910d6a8b189470a4fe6b
	I0210 12:23:13.904521    5644 command_runner.go:130] > CoreDNS-1.11.3
	I0210 12:23:13.904521    5644 command_runner.go:130] > linux/amd64, go1.21.11, a6338e9
	I0210 12:23:13.904521    5644 command_runner.go:130] > [INFO] 127.0.0.1:57159 - 43532 "HINFO IN 6094843902663837130.722983224060727812. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.056926603s
	I0210 12:23:13.904521    5644 command_runner.go:130] > [INFO] 10.244.1.2:54851 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000385004s
	I0210 12:23:13.904521    5644 command_runner.go:130] > [INFO] 10.244.1.2:36917 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.071166415s
	I0210 12:23:13.904521    5644 command_runner.go:130] > [INFO] 10.244.1.2:35134 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.03235507s
	I0210 12:23:13.904521    5644 command_runner.go:130] > [INFO] 10.244.1.2:37507 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.161129695s
	I0210 12:23:13.904521    5644 command_runner.go:130] > [INFO] 10.244.0.3:55555 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000265804s
	I0210 12:23:13.904521    5644 command_runner.go:130] > [INFO] 10.244.0.3:44984 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000263303s
	I0210 12:23:13.904521    5644 command_runner.go:130] > [INFO] 10.244.0.3:33618 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.000192703s
	I0210 12:23:13.904521    5644 command_runner.go:130] > [INFO] 10.244.0.3:33701 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.000137201s
	I0210 12:23:13.904521    5644 command_runner.go:130] > [INFO] 10.244.1.2:48882 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000140601s
	I0210 12:23:13.904521    5644 command_runner.go:130] > [INFO] 10.244.1.2:59416 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.037067822s
	I0210 12:23:13.904521    5644 command_runner.go:130] > [INFO] 10.244.1.2:37164 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000261703s
	I0210 12:23:13.904521    5644 command_runner.go:130] > [INFO] 10.244.1.2:47541 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000172402s
	I0210 12:23:13.904521    5644 command_runner.go:130] > [INFO] 10.244.1.2:46192 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.033005976s
	I0210 12:23:13.904521    5644 command_runner.go:130] > [INFO] 10.244.1.2:33821 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000127301s
	I0210 12:23:13.904521    5644 command_runner.go:130] > [INFO] 10.244.1.2:35703 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000116001s
	I0210 12:23:13.904521    5644 command_runner.go:130] > [INFO] 10.244.1.2:37192 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000173702s
	I0210 12:23:13.904521    5644 command_runner.go:130] > [INFO] 10.244.0.3:49175 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000188802s
	I0210 12:23:13.904521    5644 command_runner.go:130] > [INFO] 10.244.0.3:55342 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000074701s
	I0210 12:23:13.904521    5644 command_runner.go:130] > [INFO] 10.244.0.3:52814 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000156002s
	I0210 12:23:13.904521    5644 command_runner.go:130] > [INFO] 10.244.0.3:36559 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000108801s
	I0210 12:23:13.904521    5644 command_runner.go:130] > [INFO] 10.244.0.3:35829 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0000551s
	I0210 12:23:13.904521    5644 command_runner.go:130] > [INFO] 10.244.0.3:38348 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000180702s
	I0210 12:23:13.904521    5644 command_runner.go:130] > [INFO] 10.244.0.3:39722 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000109501s
	I0210 12:23:13.904521    5644 command_runner.go:130] > [INFO] 10.244.0.3:40924 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000201202s
	I0210 12:23:13.904521    5644 command_runner.go:130] > [INFO] 10.244.1.2:34735 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000125801s
	I0210 12:23:13.904521    5644 command_runner.go:130] > [INFO] 10.244.1.2:36088 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000233303s
	I0210 12:23:13.904521    5644 command_runner.go:130] > [INFO] 10.244.1.2:55464 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000190403s
	I0210 12:23:13.904521    5644 command_runner.go:130] > [INFO] 10.244.1.2:57911 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000176202s
	I0210 12:23:13.904521    5644 command_runner.go:130] > [INFO] 10.244.0.3:34977 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000276903s
	I0210 12:23:13.904521    5644 command_runner.go:130] > [INFO] 10.244.0.3:60203 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000181802s
	I0210 12:23:13.904521    5644 command_runner.go:130] > [INFO] 10.244.0.3:40189 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000167902s
	I0210 12:23:13.904521    5644 command_runner.go:130] > [INFO] 10.244.0.3:59008 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000133001s
	I0210 12:23:13.904521    5644 command_runner.go:130] > [INFO] 10.244.1.2:56936 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000191602s
	I0210 12:23:13.904521    5644 command_runner.go:130] > [INFO] 10.244.1.2:36536 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000153201s
	I0210 12:23:13.904521    5644 command_runner.go:130] > [INFO] 10.244.1.2:38856 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000169502s
	I0210 12:23:13.904521    5644 command_runner.go:130] > [INFO] 10.244.1.2:53005 - 5 "PTR IN 1.128.29.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000186102s
	I0210 12:23:13.904521    5644 command_runner.go:130] > [INFO] 10.244.0.3:43643 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00092191s
	I0210 12:23:13.904521    5644 command_runner.go:130] > [INFO] 10.244.0.3:44109 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000341503s
	I0210 12:23:13.904521    5644 command_runner.go:130] > [INFO] 10.244.0.3:37196 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000095701s
	I0210 12:23:13.904521    5644 command_runner.go:130] > [INFO] 10.244.0.3:33917 - 5 "PTR IN 1.128.29.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000152102s
	I0210 12:23:13.904521    5644 command_runner.go:130] > [INFO] SIGTERM: Shutting down servers then terminating
	I0210 12:23:13.905577    5644 command_runner.go:130] > [INFO] plugin/health: Going into lameduck mode for 5s
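The CoreDNS query lines above follow the log plugin's format: client ip:port, query id, then in quotes the query type, class, and name, the transport, request size, DO bit, and EDNS buffer size, followed by the response code, flags, response size in bytes, and duration. The NXDOMAIN answers for "kubernetes.default" and "kubernetes.default.default.svc.cluster.local" are expected search-path misses before the fully qualified "kubernetes.default.svc.cluster.local" resolves with NOERROR. A hedged way to reproduce the logged lookup from inside the cluster (the pod name and busybox image are illustrative, not taken from this report):

	# Minimal sketch: run a throwaway pod and repeat the logged service lookup.
	out/minikube-windows-amd64.exe -p multinode-032400 kubectl -- run dns-probe --rm -it --restart=Never --image=busybox:1.36 -- nslookup kubernetes.default.svc.cluster.local
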
	I0210 12:23:13.908320    5644 logs.go:123] Gathering logs for kindnet [4439940fa5f4] ...
	I0210 12:23:13.908320    5644 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4439940fa5f4"
	I0210 12:23:13.945361    5644 command_runner.go:130] ! I0210 12:08:30.445716       1 main.go:301] handling current node
	I0210 12:23:13.945361    5644 command_runner.go:130] ! I0210 12:08:30.445736       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.945361    5644 command_runner.go:130] ! I0210 12:08:30.445743       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.945361    5644 command_runner.go:130] ! I0210 12:08:30.446276       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:13.945361    5644 command_runner.go:130] ! I0210 12:08:30.446402       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:13.945361    5644 command_runner.go:130] ! I0210 12:08:40.446484       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:13.945492    5644 command_runner.go:130] ! I0210 12:08:40.446649       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:13.945492    5644 command_runner.go:130] ! I0210 12:08:40.447051       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.945492    5644 command_runner.go:130] ! I0210 12:08:40.447089       1 main.go:301] handling current node
	I0210 12:23:13.945492    5644 command_runner.go:130] ! I0210 12:08:40.447173       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.945542    5644 command_runner.go:130] ! I0210 12:08:40.447202       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.945574    5644 command_runner.go:130] ! I0210 12:08:50.445921       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.945574    5644 command_runner.go:130] ! I0210 12:08:50.445988       1 main.go:301] handling current node
	I0210 12:23:13.945574    5644 command_runner.go:130] ! I0210 12:08:50.446008       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.945574    5644 command_runner.go:130] ! I0210 12:08:50.446015       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.945574    5644 command_runner.go:130] ! I0210 12:08:50.446206       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:13.945574    5644 command_runner.go:130] ! I0210 12:08:50.446217       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:13.945664    5644 command_runner.go:130] ! I0210 12:09:00.446480       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.945664    5644 command_runner.go:130] ! I0210 12:09:00.446617       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.945664    5644 command_runner.go:130] ! I0210 12:09:00.446931       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:13.945706    5644 command_runner.go:130] ! I0210 12:09:00.446947       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:13.945706    5644 command_runner.go:130] ! I0210 12:09:00.447078       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.945762    5644 command_runner.go:130] ! I0210 12:09:00.447087       1 main.go:301] handling current node
	I0210 12:23:13.945762    5644 command_runner.go:130] ! I0210 12:09:10.445597       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.945835    5644 command_runner.go:130] ! I0210 12:09:10.445645       1 main.go:301] handling current node
	I0210 12:23:13.945835    5644 command_runner.go:130] ! I0210 12:09:10.445665       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.945835    5644 command_runner.go:130] ! I0210 12:09:10.445671       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.945835    5644 command_runner.go:130] ! I0210 12:09:10.446612       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:13.945835    5644 command_runner.go:130] ! I0210 12:09:10.447083       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:13.945835    5644 command_runner.go:130] ! I0210 12:09:20.451891       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.945952    5644 command_runner.go:130] ! I0210 12:09:20.451928       1 main.go:301] handling current node
	I0210 12:23:13.945952    5644 command_runner.go:130] ! I0210 12:09:20.452043       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.945952    5644 command_runner.go:130] ! I0210 12:09:20.452054       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.945952    5644 command_runner.go:130] ! I0210 12:09:20.452219       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:13.945952    5644 command_runner.go:130] ! I0210 12:09:20.452226       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:13.945952    5644 command_runner.go:130] ! I0210 12:09:30.445685       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.945952    5644 command_runner.go:130] ! I0210 12:09:30.445780       1 main.go:301] handling current node
	I0210 12:23:13.945952    5644 command_runner.go:130] ! I0210 12:09:30.445924       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.946081    5644 command_runner.go:130] ! I0210 12:09:30.445945       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.946081    5644 command_runner.go:130] ! I0210 12:09:30.446110       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:13.946081    5644 command_runner.go:130] ! I0210 12:09:30.446136       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:13.946081    5644 command_runner.go:130] ! I0210 12:09:40.446044       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.946081    5644 command_runner.go:130] ! I0210 12:09:40.446146       1 main.go:301] handling current node
	I0210 12:23:13.946081    5644 command_runner.go:130] ! I0210 12:09:40.446259       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.946081    5644 command_runner.go:130] ! I0210 12:09:40.446288       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.946081    5644 command_runner.go:130] ! I0210 12:09:40.446677       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:13.946210    5644 command_runner.go:130] ! I0210 12:09:40.446692       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:13.946210    5644 command_runner.go:130] ! I0210 12:09:50.449867       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.946210    5644 command_runner.go:130] ! I0210 12:09:50.449979       1 main.go:301] handling current node
	I0210 12:23:13.946210    5644 command_runner.go:130] ! I0210 12:09:50.450078       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.946210    5644 command_runner.go:130] ! I0210 12:09:50.450121       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.946210    5644 command_runner.go:130] ! I0210 12:09:50.450322       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:13.946210    5644 command_runner.go:130] ! I0210 12:09:50.450372       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:13.946352    5644 command_runner.go:130] ! I0210 12:10:00.446642       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:13.946352    5644 command_runner.go:130] ! I0210 12:10:00.446769       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:13.946352    5644 command_runner.go:130] ! I0210 12:10:00.447234       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.946352    5644 command_runner.go:130] ! I0210 12:10:00.447254       1 main.go:301] handling current node
	I0210 12:23:13.946352    5644 command_runner.go:130] ! I0210 12:10:00.447269       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.946442    5644 command_runner.go:130] ! I0210 12:10:00.447275       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.946442    5644 command_runner.go:130] ! I0210 12:10:10.445515       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.946442    5644 command_runner.go:130] ! I0210 12:10:10.445682       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.946442    5644 command_runner.go:130] ! I0210 12:10:10.446184       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:13.946442    5644 command_runner.go:130] ! I0210 12:10:10.446223       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:13.946528    5644 command_runner.go:130] ! I0210 12:10:10.446709       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.946528    5644 command_runner.go:130] ! I0210 12:10:10.447034       1 main.go:301] handling current node
	I0210 12:23:13.946528    5644 command_runner.go:130] ! I0210 12:10:20.446409       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.946528    5644 command_runner.go:130] ! I0210 12:10:20.446529       1 main.go:301] handling current node
	I0210 12:23:13.946528    5644 command_runner.go:130] ! I0210 12:10:20.446553       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.946528    5644 command_runner.go:130] ! I0210 12:10:20.446563       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.946614    5644 command_runner.go:130] ! I0210 12:10:20.446763       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:13.946614    5644 command_runner.go:130] ! I0210 12:10:20.446790       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:13.946614    5644 command_runner.go:130] ! I0210 12:10:30.446373       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.946614    5644 command_runner.go:130] ! I0210 12:10:30.446482       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.946614    5644 command_runner.go:130] ! I0210 12:10:30.446672       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:13.946614    5644 command_runner.go:130] ! I0210 12:10:30.446700       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:13.946614    5644 command_runner.go:130] ! I0210 12:10:30.446792       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.946739    5644 command_runner.go:130] ! I0210 12:10:30.447014       1 main.go:301] handling current node
	I0210 12:23:13.946739    5644 command_runner.go:130] ! I0210 12:10:40.454509       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.946739    5644 command_runner.go:130] ! I0210 12:10:40.454636       1 main.go:301] handling current node
	I0210 12:23:13.946739    5644 command_runner.go:130] ! I0210 12:10:40.454674       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.946739    5644 command_runner.go:130] ! I0210 12:10:40.454863       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.946739    5644 command_runner.go:130] ! I0210 12:10:40.455160       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:13.946739    5644 command_runner.go:130] ! I0210 12:10:40.455261       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:13.946739    5644 command_runner.go:130] ! I0210 12:10:50.449160       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.946870    5644 command_runner.go:130] ! I0210 12:10:50.449355       1 main.go:301] handling current node
	I0210 12:23:13.946870    5644 command_runner.go:130] ! I0210 12:10:50.449395       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.946870    5644 command_runner.go:130] ! I0210 12:10:50.449538       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.946870    5644 command_runner.go:130] ! I0210 12:10:50.450354       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:13.946870    5644 command_runner.go:130] ! I0210 12:10:50.450448       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:13.946870    5644 command_runner.go:130] ! I0210 12:11:00.445904       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:13.946870    5644 command_runner.go:130] ! I0210 12:11:00.446062       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:13.946870    5644 command_runner.go:130] ! I0210 12:11:00.446602       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.946995    5644 command_runner.go:130] ! I0210 12:11:00.446700       1 main.go:301] handling current node
	I0210 12:23:13.946995    5644 command_runner.go:130] ! I0210 12:11:00.446821       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.946995    5644 command_runner.go:130] ! I0210 12:11:00.446837       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.946995    5644 command_runner.go:130] ! I0210 12:11:10.453595       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.947083    5644 command_runner.go:130] ! I0210 12:11:10.453634       1 main.go:301] handling current node
	I0210 12:23:13.947083    5644 command_runner.go:130] ! I0210 12:11:10.453652       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.947083    5644 command_runner.go:130] ! I0210 12:11:10.453660       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.947083    5644 command_runner.go:130] ! I0210 12:11:10.454135       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:13.947083    5644 command_runner.go:130] ! I0210 12:11:10.454241       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:13.947167    5644 command_runner.go:130] ! I0210 12:11:20.446533       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:13.947167    5644 command_runner.go:130] ! I0210 12:11:20.446903       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:13.947167    5644 command_runner.go:130] ! I0210 12:11:20.447462       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.947167    5644 command_runner.go:130] ! I0210 12:11:20.447548       1 main.go:301] handling current node
	I0210 12:23:13.947167    5644 command_runner.go:130] ! I0210 12:11:20.447565       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.947251    5644 command_runner.go:130] ! I0210 12:11:20.447572       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.947251    5644 command_runner.go:130] ! I0210 12:11:30.445620       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.947251    5644 command_runner.go:130] ! I0210 12:11:30.445748       1 main.go:301] handling current node
	I0210 12:23:13.947251    5644 command_runner.go:130] ! I0210 12:11:30.445870       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.947251    5644 command_runner.go:130] ! I0210 12:11:30.445907       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.947336    5644 command_runner.go:130] ! I0210 12:11:30.446320       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:13.947336    5644 command_runner.go:130] ! I0210 12:11:30.446414       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:13.947336    5644 command_runner.go:130] ! I0210 12:11:40.446346       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.947336    5644 command_runner.go:130] ! I0210 12:11:40.446417       1 main.go:301] handling current node
	I0210 12:23:13.947336    5644 command_runner.go:130] ! I0210 12:11:40.446436       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.947420    5644 command_runner.go:130] ! I0210 12:11:40.446443       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.947420    5644 command_runner.go:130] ! I0210 12:11:40.446780       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:13.947420    5644 command_runner.go:130] ! I0210 12:11:40.446846       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:13.947420    5644 command_runner.go:130] ! I0210 12:11:50.447155       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:13.947420    5644 command_runner.go:130] ! I0210 12:11:50.447207       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:13.947505    5644 command_runner.go:130] ! I0210 12:11:50.447630       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.947505    5644 command_runner.go:130] ! I0210 12:11:50.447699       1 main.go:301] handling current node
	I0210 12:23:13.947505    5644 command_runner.go:130] ! I0210 12:11:50.447842       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.947505    5644 command_runner.go:130] ! I0210 12:11:50.447929       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.947592    5644 command_runner.go:130] ! I0210 12:12:00.449885       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.947592    5644 command_runner.go:130] ! I0210 12:12:00.450002       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.947592    5644 command_runner.go:130] ! I0210 12:12:00.450294       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:13.947592    5644 command_runner.go:130] ! I0210 12:12:00.450490       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:13.947592    5644 command_runner.go:130] ! I0210 12:12:00.450618       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.947592    5644 command_runner.go:130] ! I0210 12:12:00.450627       1 main.go:301] handling current node
	I0210 12:23:13.947705    5644 command_runner.go:130] ! I0210 12:12:10.449160       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.947705    5644 command_runner.go:130] ! I0210 12:12:10.449228       1 main.go:301] handling current node
	I0210 12:23:13.947705    5644 command_runner.go:130] ! I0210 12:12:10.449260       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.947705    5644 command_runner.go:130] ! I0210 12:12:10.449282       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.947705    5644 command_runner.go:130] ! I0210 12:12:10.449463       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:13.947705    5644 command_runner.go:130] ! I0210 12:12:10.449474       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:13.947705    5644 command_runner.go:130] ! I0210 12:12:20.447518       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.947705    5644 command_runner.go:130] ! I0210 12:12:20.447655       1 main.go:301] handling current node
	I0210 12:23:13.947705    5644 command_runner.go:130] ! I0210 12:12:20.447676       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.947840    5644 command_runner.go:130] ! I0210 12:12:20.447684       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.947840    5644 command_runner.go:130] ! I0210 12:12:20.448046       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:13.947840    5644 command_runner.go:130] ! I0210 12:12:20.448157       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:13.947840    5644 command_runner.go:130] ! I0210 12:12:30.446585       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.947840    5644 command_runner.go:130] ! I0210 12:12:30.446758       1 main.go:301] handling current node
	I0210 12:23:13.947932    5644 command_runner.go:130] ! I0210 12:12:30.446779       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.947932    5644 command_runner.go:130] ! I0210 12:12:30.446786       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.947932    5644 command_runner.go:130] ! I0210 12:12:30.447218       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:13.947932    5644 command_runner.go:130] ! I0210 12:12:30.447298       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:13.947932    5644 command_runner.go:130] ! I0210 12:12:40.445769       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.948018    5644 command_runner.go:130] ! I0210 12:12:40.445848       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.948018    5644 command_runner.go:130] ! I0210 12:12:40.446043       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:13.948018    5644 command_runner.go:130] ! I0210 12:12:40.446125       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:13.948018    5644 command_runner.go:130] ! I0210 12:12:40.446266       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.948102    5644 command_runner.go:130] ! I0210 12:12:40.446279       1 main.go:301] handling current node
	I0210 12:23:13.948102    5644 command_runner.go:130] ! I0210 12:12:50.446416       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.948102    5644 command_runner.go:130] ! I0210 12:12:50.446515       1 main.go:301] handling current node
	I0210 12:23:13.948102    5644 command_runner.go:130] ! I0210 12:12:50.446540       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.948102    5644 command_runner.go:130] ! I0210 12:12:50.446549       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.948188    5644 command_runner.go:130] ! I0210 12:12:50.447110       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:13.948188    5644 command_runner.go:130] ! I0210 12:12:50.447222       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:13.948188    5644 command_runner.go:130] ! I0210 12:13:00.445595       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.948188    5644 command_runner.go:130] ! I0210 12:13:00.445741       1 main.go:301] handling current node
	I0210 12:23:13.948188    5644 command_runner.go:130] ! I0210 12:13:00.445762       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.948272    5644 command_runner.go:130] ! I0210 12:13:00.445770       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.948272    5644 command_runner.go:130] ! I0210 12:13:00.446069       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:13.948272    5644 command_runner.go:130] ! I0210 12:13:00.446101       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:13.948272    5644 command_runner.go:130] ! I0210 12:13:10.454457       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.948358    5644 command_runner.go:130] ! I0210 12:13:10.454577       1 main.go:301] handling current node
	I0210 12:23:13.948358    5644 command_runner.go:130] ! I0210 12:13:10.454598       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.948358    5644 command_runner.go:130] ! I0210 12:13:10.454605       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.948358    5644 command_runner.go:130] ! I0210 12:13:10.455246       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:13.948358    5644 command_runner.go:130] ! I0210 12:13:10.455360       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:13.948442    5644 command_runner.go:130] ! I0210 12:13:20.446944       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.948442    5644 command_runner.go:130] ! I0210 12:13:20.447287       1 main.go:301] handling current node
	I0210 12:23:13.948442    5644 command_runner.go:130] ! I0210 12:13:20.447395       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.948442    5644 command_runner.go:130] ! I0210 12:13:20.447410       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.948442    5644 command_runner.go:130] ! I0210 12:13:20.447940       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:13.948530    5644 command_runner.go:130] ! I0210 12:13:20.448031       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:13.948530    5644 command_runner.go:130] ! I0210 12:13:30.446279       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.948530    5644 command_runner.go:130] ! I0210 12:13:30.446594       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.948530    5644 command_runner.go:130] ! I0210 12:13:30.446926       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:13.948530    5644 command_runner.go:130] ! I0210 12:13:30.447035       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:13.948615    5644 command_runner.go:130] ! I0210 12:13:30.447189       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.948615    5644 command_runner.go:130] ! I0210 12:13:30.447310       1 main.go:301] handling current node
	I0210 12:23:13.948615    5644 command_runner.go:130] ! I0210 12:13:40.446967       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.948615    5644 command_runner.go:130] ! I0210 12:13:40.447352       1 main.go:301] handling current node
	I0210 12:23:13.948615    5644 command_runner.go:130] ! I0210 12:13:40.447404       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.948739    5644 command_runner.go:130] ! I0210 12:13:40.447743       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.948739    5644 command_runner.go:130] ! I0210 12:13:40.448142       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:13.948739    5644 command_runner.go:130] ! I0210 12:13:40.448255       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:13.948739    5644 command_runner.go:130] ! I0210 12:13:50.446777       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.948739    5644 command_runner.go:130] ! I0210 12:13:50.446915       1 main.go:301] handling current node
	I0210 12:23:13.948739    5644 command_runner.go:130] ! I0210 12:13:50.446936       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.948739    5644 command_runner.go:130] ! I0210 12:13:50.447424       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.948868    5644 command_runner.go:130] ! I0210 12:13:50.447787       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:13.948868    5644 command_runner.go:130] ! I0210 12:13:50.447846       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:13.948868    5644 command_runner.go:130] ! I0210 12:14:00.446345       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.948868    5644 command_runner.go:130] ! I0210 12:14:00.446447       1 main.go:301] handling current node
	I0210 12:23:13.948868    5644 command_runner.go:130] ! I0210 12:14:00.446468       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.948868    5644 command_runner.go:130] ! I0210 12:14:00.446475       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.948868    5644 command_runner.go:130] ! I0210 12:14:00.447158       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:13.948868    5644 command_runner.go:130] ! I0210 12:14:00.447251       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:13.948988    5644 command_runner.go:130] ! I0210 12:14:10.454046       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.948988    5644 command_runner.go:130] ! I0210 12:14:10.454150       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.948988    5644 command_runner.go:130] ! I0210 12:14:10.454908       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:13.948988    5644 command_runner.go:130] ! I0210 12:14:10.454981       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:13.948988    5644 command_runner.go:130] ! I0210 12:14:10.455630       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.949076    5644 command_runner.go:130] ! I0210 12:14:10.455665       1 main.go:301] handling current node
	I0210 12:23:13.949076    5644 command_runner.go:130] ! I0210 12:14:20.447582       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.949076    5644 command_runner.go:130] ! I0210 12:14:20.447632       1 main.go:301] handling current node
	I0210 12:23:13.949076    5644 command_runner.go:130] ! I0210 12:14:20.447652       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.949076    5644 command_runner.go:130] ! I0210 12:14:20.447660       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.949164    5644 command_runner.go:130] ! I0210 12:14:20.447892       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:13.949164    5644 command_runner.go:130] ! I0210 12:14:20.447961       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:13.949164    5644 command_runner.go:130] ! I0210 12:14:30.445562       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.949164    5644 command_runner.go:130] ! I0210 12:14:30.445636       1 main.go:301] handling current node
	I0210 12:23:13.949164    5644 command_runner.go:130] ! I0210 12:14:30.445655       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.949249    5644 command_runner.go:130] ! I0210 12:14:30.445665       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.949249    5644 command_runner.go:130] ! I0210 12:14:30.446340       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:13.949249    5644 command_runner.go:130] ! I0210 12:14:30.446436       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:13.949249    5644 command_runner.go:130] ! I0210 12:14:40.445894       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.949249    5644 command_runner.go:130] ! I0210 12:14:40.445963       1 main.go:301] handling current node
	I0210 12:23:13.949334    5644 command_runner.go:130] ! I0210 12:14:40.446050       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.949334    5644 command_runner.go:130] ! I0210 12:14:40.446062       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.949334    5644 command_runner.go:130] ! I0210 12:14:40.446274       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:13.949334    5644 command_runner.go:130] ! I0210 12:14:40.446298       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:13.949334    5644 command_runner.go:130] ! I0210 12:14:50.446519       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.949418    5644 command_runner.go:130] ! I0210 12:14:50.446627       1 main.go:301] handling current node
	I0210 12:23:13.949418    5644 command_runner.go:130] ! I0210 12:14:50.446648       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.949418    5644 command_runner.go:130] ! I0210 12:14:50.446655       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.949418    5644 command_runner.go:130] ! I0210 12:14:50.447165       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:13.949418    5644 command_runner.go:130] ! I0210 12:14:50.447285       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:13.949503    5644 command_runner.go:130] ! I0210 12:15:00.452587       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.949503    5644 command_runner.go:130] ! I0210 12:15:00.452709       1 main.go:301] handling current node
	I0210 12:23:13.949503    5644 command_runner.go:130] ! I0210 12:15:00.452728       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.949503    5644 command_runner.go:130] ! I0210 12:15:00.452735       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.949503    5644 command_runner.go:130] ! I0210 12:15:00.452961       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:13.949503    5644 command_runner.go:130] ! I0210 12:15:00.452989       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:13.949588    5644 command_runner.go:130] ! I0210 12:15:10.453753       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.949588    5644 command_runner.go:130] ! I0210 12:15:10.453980       1 main.go:301] handling current node
	I0210 12:23:13.949588    5644 command_runner.go:130] ! I0210 12:15:10.455477       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.949588    5644 command_runner.go:130] ! I0210 12:15:10.455590       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.949588    5644 command_runner.go:130] ! I0210 12:15:10.456459       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:13.949588    5644 command_runner.go:130] ! I0210 12:15:10.456484       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:13.949588    5644 command_runner.go:130] ! I0210 12:15:20.445894       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.949702    5644 command_runner.go:130] ! I0210 12:15:20.446019       1 main.go:301] handling current node
	I0210 12:23:13.949702    5644 command_runner.go:130] ! I0210 12:15:20.446055       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.949702    5644 command_runner.go:130] ! I0210 12:15:20.446076       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.949702    5644 command_runner.go:130] ! I0210 12:15:20.446274       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:13.949702    5644 command_runner.go:130] ! I0210 12:15:20.446363       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:13.949702    5644 command_runner.go:130] ! I0210 12:15:30.446394       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.949702    5644 command_runner.go:130] ! I0210 12:15:30.446444       1 main.go:301] handling current node
	I0210 12:23:13.949702    5644 command_runner.go:130] ! I0210 12:15:30.446463       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.949837    5644 command_runner.go:130] ! I0210 12:15:30.446470       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.949837    5644 command_runner.go:130] ! I0210 12:15:30.446861       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:13.949837    5644 command_runner.go:130] ! I0210 12:15:30.446930       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:13.949837    5644 command_runner.go:130] ! I0210 12:15:40.453869       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.949837    5644 command_runner.go:130] ! I0210 12:15:40.454189       1 main.go:301] handling current node
	I0210 12:23:13.949837    5644 command_runner.go:130] ! I0210 12:15:40.454382       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.949936    5644 command_runner.go:130] ! I0210 12:15:40.454457       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.949936    5644 command_runner.go:130] ! I0210 12:15:40.454869       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:13.949936    5644 command_runner.go:130] ! I0210 12:15:40.454895       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:13.949936    5644 command_runner.go:130] ! I0210 12:15:50.446531       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.949936    5644 command_runner.go:130] ! I0210 12:15:50.446662       1 main.go:301] handling current node
	I0210 12:23:13.950010    5644 command_runner.go:130] ! I0210 12:15:50.446685       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.950010    5644 command_runner.go:130] ! I0210 12:15:50.446693       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.950010    5644 command_runner.go:130] ! I0210 12:15:50.447023       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:13.950010    5644 command_runner.go:130] ! I0210 12:15:50.447095       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:13.950010    5644 command_runner.go:130] ! I0210 12:16:00.446838       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.950083    5644 command_runner.go:130] ! I0210 12:16:00.447006       1 main.go:301] handling current node
	I0210 12:23:13.950083    5644 command_runner.go:130] ! I0210 12:16:00.447108       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.950083    5644 command_runner.go:130] ! I0210 12:16:00.447566       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.950083    5644 command_runner.go:130] ! I0210 12:16:00.448114       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:13.950083    5644 command_runner.go:130] ! I0210 12:16:00.448216       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:13.950083    5644 command_runner.go:130] ! I0210 12:16:10.445857       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.950155    5644 command_runner.go:130] ! I0210 12:16:10.445967       1 main.go:301] handling current node
	I0210 12:23:13.950155    5644 command_runner.go:130] ! I0210 12:16:10.445988       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.950155    5644 command_runner.go:130] ! I0210 12:16:10.445996       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.950155    5644 command_runner.go:130] ! I0210 12:16:10.446184       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:13.950228    5644 command_runner.go:130] ! I0210 12:16:10.446207       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:13.950228    5644 command_runner.go:130] ! I0210 12:16:20.453730       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.950228    5644 command_runner.go:130] ! I0210 12:16:20.453928       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.950228    5644 command_runner.go:130] ! I0210 12:16:20.454430       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:13.950228    5644 command_runner.go:130] ! I0210 12:16:20.454520       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:13.950300    5644 command_runner.go:130] ! I0210 12:16:20.454929       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.950300    5644 command_runner.go:130] ! I0210 12:16:20.454975       1 main.go:301] handling current node
	I0210 12:23:13.950300    5644 command_runner.go:130] ! I0210 12:16:30.445927       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.950300    5644 command_runner.go:130] ! I0210 12:16:30.446036       1 main.go:301] handling current node
	I0210 12:23:13.950373    5644 command_runner.go:130] ! I0210 12:16:30.446057       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.950405    5644 command_runner.go:130] ! I0210 12:16:30.446065       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.950405    5644 command_runner.go:130] ! I0210 12:16:30.446315       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:13.950430    5644 command_runner.go:130] ! I0210 12:16:30.446373       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:13.950430    5644 command_runner.go:130] ! I0210 12:16:40.446863       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:13.950430    5644 command_runner.go:130] ! I0210 12:16:40.446966       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:13.950430    5644 command_runner.go:130] ! I0210 12:16:40.447288       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.950430    5644 command_runner.go:130] ! I0210 12:16:40.447365       1 main.go:301] handling current node
	I0210 12:23:13.950430    5644 command_runner.go:130] ! I0210 12:16:40.447383       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.950430    5644 command_runner.go:130] ! I0210 12:16:40.447389       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.950430    5644 command_runner.go:130] ! I0210 12:16:50.447339       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.950430    5644 command_runner.go:130] ! I0210 12:16:50.447453       1 main.go:301] handling current node
	I0210 12:23:13.950430    5644 command_runner.go:130] ! I0210 12:16:50.447476       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.950430    5644 command_runner.go:130] ! I0210 12:16:50.447484       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.950430    5644 command_runner.go:130] ! I0210 12:16:50.448045       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:13.950430    5644 command_runner.go:130] ! I0210 12:16:50.448138       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:13.950430    5644 command_runner.go:130] ! I0210 12:17:00.447665       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.950430    5644 command_runner.go:130] ! I0210 12:17:00.447898       1 main.go:301] handling current node
	I0210 12:23:13.950430    5644 command_runner.go:130] ! I0210 12:17:00.447937       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.950430    5644 command_runner.go:130] ! I0210 12:17:00.448013       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.950430    5644 command_runner.go:130] ! I0210 12:17:00.448741       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:13.950430    5644 command_runner.go:130] ! I0210 12:17:00.448921       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:13.950430    5644 command_runner.go:130] ! I0210 12:17:10.453664       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.950430    5644 command_runner.go:130] ! I0210 12:17:10.453771       1 main.go:301] handling current node
	I0210 12:23:13.950430    5644 command_runner.go:130] ! I0210 12:17:10.453792       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.950430    5644 command_runner.go:130] ! I0210 12:17:10.453831       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.950430    5644 command_runner.go:130] ! I0210 12:17:10.454596       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:13.950430    5644 command_runner.go:130] ! I0210 12:17:10.454619       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:13.950430    5644 command_runner.go:130] ! I0210 12:17:20.453960       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.950430    5644 command_runner.go:130] ! I0210 12:17:20.454001       1 main.go:301] handling current node
	I0210 12:23:13.950430    5644 command_runner.go:130] ! I0210 12:17:20.454018       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.950430    5644 command_runner.go:130] ! I0210 12:17:20.454024       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.950430    5644 command_runner.go:130] ! I0210 12:17:20.454198       1 main.go:297] Handling node with IPs: map[172.29.138.52:{}]
	I0210 12:23:13.950430    5644 command_runner.go:130] ! I0210 12:17:20.454208       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.2.0/24] 
	I0210 12:23:13.950430    5644 command_runner.go:130] ! I0210 12:17:30.445717       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.950430    5644 command_runner.go:130] ! I0210 12:17:30.445917       1 main.go:301] handling current node
	I0210 12:23:13.950430    5644 command_runner.go:130] ! I0210 12:17:30.445940       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.950430    5644 command_runner.go:130] ! I0210 12:17:30.445949       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.950430    5644 command_runner.go:130] ! I0210 12:17:40.452548       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.950430    5644 command_runner.go:130] ! I0210 12:17:40.452740       1 main.go:301] handling current node
	I0210 12:23:13.950430    5644 command_runner.go:130] ! I0210 12:17:40.452774       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.950430    5644 command_runner.go:130] ! I0210 12:17:40.452843       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.950430    5644 command_runner.go:130] ! I0210 12:17:40.453042       1 main.go:297] Handling node with IPs: map[172.29.129.10:{}]
	I0210 12:23:13.950430    5644 command_runner.go:130] ! I0210 12:17:40.453135       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.4.0/24] 
	I0210 12:23:13.950430    5644 command_runner.go:130] ! I0210 12:17:40.453247       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.4.0/24 Src: <nil> Gw: 172.29.129.10 Flags: [] Table: 0 Realm: 0} 
	I0210 12:23:13.950430    5644 command_runner.go:130] ! I0210 12:17:50.446275       1 main.go:297] Handling node with IPs: map[172.29.129.10:{}]
	I0210 12:23:13.950962    5644 command_runner.go:130] ! I0210 12:17:50.446319       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.4.0/24] 
	I0210 12:23:13.950962    5644 command_runner.go:130] ! I0210 12:17:50.447189       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.950962    5644 command_runner.go:130] ! I0210 12:17:50.447219       1 main.go:301] handling current node
	I0210 12:23:13.950962    5644 command_runner.go:130] ! I0210 12:17:50.447234       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.950962    5644 command_runner.go:130] ! I0210 12:17:50.447365       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.950962    5644 command_runner.go:130] ! I0210 12:18:00.449743       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.951038    5644 command_runner.go:130] ! I0210 12:18:00.449961       1 main.go:301] handling current node
	I0210 12:23:13.951038    5644 command_runner.go:130] ! I0210 12:18:00.449983       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.951038    5644 command_runner.go:130] ! I0210 12:18:00.449993       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.951038    5644 command_runner.go:130] ! I0210 12:18:00.450437       1 main.go:297] Handling node with IPs: map[172.29.129.10:{}]
	I0210 12:23:13.951038    5644 command_runner.go:130] ! I0210 12:18:00.450512       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.4.0/24] 
	I0210 12:23:13.951038    5644 command_runner.go:130] ! I0210 12:18:10.454513       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.951038    5644 command_runner.go:130] ! I0210 12:18:10.455074       1 main.go:301] handling current node
	I0210 12:23:13.951038    5644 command_runner.go:130] ! I0210 12:18:10.455189       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.951038    5644 command_runner.go:130] ! I0210 12:18:10.455203       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.951038    5644 command_runner.go:130] ! I0210 12:18:10.455514       1 main.go:297] Handling node with IPs: map[172.29.129.10:{}]
	I0210 12:23:13.951038    5644 command_runner.go:130] ! I0210 12:18:10.455628       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.4.0/24] 
	I0210 12:23:13.951038    5644 command_runner.go:130] ! I0210 12:18:20.446904       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.951038    5644 command_runner.go:130] ! I0210 12:18:20.446944       1 main.go:301] handling current node
	I0210 12:23:13.951038    5644 command_runner.go:130] ! I0210 12:18:20.446964       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.951038    5644 command_runner.go:130] ! I0210 12:18:20.446971       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.951038    5644 command_runner.go:130] ! I0210 12:18:20.447447       1 main.go:297] Handling node with IPs: map[172.29.129.10:{}]
	I0210 12:23:13.951038    5644 command_runner.go:130] ! I0210 12:18:20.447539       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.4.0/24] 
	I0210 12:23:13.951038    5644 command_runner.go:130] ! I0210 12:18:30.445669       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.951038    5644 command_runner.go:130] ! I0210 12:18:30.445724       1 main.go:301] handling current node
	I0210 12:23:13.951038    5644 command_runner.go:130] ! I0210 12:18:30.445744       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.951038    5644 command_runner.go:130] ! I0210 12:18:30.445752       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.951038    5644 command_runner.go:130] ! I0210 12:18:30.446236       1 main.go:297] Handling node with IPs: map[172.29.129.10:{}]
	I0210 12:23:13.951038    5644 command_runner.go:130] ! I0210 12:18:30.446332       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.4.0/24] 
	I0210 12:23:13.951038    5644 command_runner.go:130] ! I0210 12:18:40.449074       1 main.go:297] Handling node with IPs: map[172.29.129.10:{}]
	I0210 12:23:13.951038    5644 command_runner.go:130] ! I0210 12:18:40.449128       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.4.0/24] 
	I0210 12:23:13.951038    5644 command_runner.go:130] ! I0210 12:18:40.449535       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.951038    5644 command_runner.go:130] ! I0210 12:18:40.449551       1 main.go:301] handling current node
	I0210 12:23:13.951038    5644 command_runner.go:130] ! I0210 12:18:40.449565       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.951038    5644 command_runner.go:130] ! I0210 12:18:40.449570       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.951038    5644 command_runner.go:130] ! I0210 12:18:50.446047       1 main.go:297] Handling node with IPs: map[172.29.129.10:{}]
	I0210 12:23:13.951038    5644 command_runner.go:130] ! I0210 12:18:50.446175       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.4.0/24] 
	I0210 12:23:13.951038    5644 command_runner.go:130] ! I0210 12:18:50.446614       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.951038    5644 command_runner.go:130] ! I0210 12:18:50.446823       1 main.go:301] handling current node
	I0210 12:23:13.951038    5644 command_runner.go:130] ! I0210 12:18:50.446915       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.951038    5644 command_runner.go:130] ! I0210 12:18:50.447008       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.951038    5644 command_runner.go:130] ! I0210 12:19:00.448621       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.951038    5644 command_runner.go:130] ! I0210 12:19:00.448734       1 main.go:301] handling current node
	I0210 12:23:13.951038    5644 command_runner.go:130] ! I0210 12:19:00.448755       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.951562    5644 command_runner.go:130] ! I0210 12:19:00.448763       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.951562    5644 command_runner.go:130] ! I0210 12:19:00.449245       1 main.go:297] Handling node with IPs: map[172.29.129.10:{}]
	I0210 12:23:13.951562    5644 command_runner.go:130] ! I0210 12:19:00.449348       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.4.0/24] 
	I0210 12:23:13.951562    5644 command_runner.go:130] ! I0210 12:19:10.454264       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.951562    5644 command_runner.go:130] ! I0210 12:19:10.454382       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.951635    5644 command_runner.go:130] ! I0210 12:19:10.454633       1 main.go:297] Handling node with IPs: map[172.29.129.10:{}]
	I0210 12:23:13.951635    5644 command_runner.go:130] ! I0210 12:19:10.454659       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.4.0/24] 
	I0210 12:23:13.951635    5644 command_runner.go:130] ! I0210 12:19:10.454747       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.951635    5644 command_runner.go:130] ! I0210 12:19:10.454768       1 main.go:301] handling current node
	I0210 12:23:13.951635    5644 command_runner.go:130] ! I0210 12:19:20.454254       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.951635    5644 command_runner.go:130] ! I0210 12:19:20.454380       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.951635    5644 command_runner.go:130] ! I0210 12:19:20.454728       1 main.go:297] Handling node with IPs: map[172.29.129.10:{}]
	I0210 12:23:13.951635    5644 command_runner.go:130] ! I0210 12:19:20.454888       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.4.0/24] 
	I0210 12:23:13.951635    5644 command_runner.go:130] ! I0210 12:19:20.455123       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.951635    5644 command_runner.go:130] ! I0210 12:19:20.455136       1 main.go:301] handling current node
	I0210 12:23:13.951635    5644 command_runner.go:130] ! I0210 12:19:30.445655       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:23:13.951635    5644 command_runner.go:130] ! I0210 12:19:30.445708       1 main.go:301] handling current node
	I0210 12:23:13.951635    5644 command_runner.go:130] ! I0210 12:19:30.445727       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:23:13.951635    5644 command_runner.go:130] ! I0210 12:19:30.445734       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:23:13.951635    5644 command_runner.go:130] ! I0210 12:19:30.446565       1 main.go:297] Handling node with IPs: map[172.29.129.10:{}]
	I0210 12:23:13.951635    5644 command_runner.go:130] ! I0210 12:19:30.446658       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.4.0/24] 
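	The kindnet log block above repeats one pattern: roughly every ten seconds the daemon walks all cluster nodes, logs "handling current node" for itself, and for each peer records the node IP and pod CIDR, installing a route when one is missing (visible at 12:17:40, when multinode-032400-m03 re-registers as 172.29.129.10 with CIDR 10.244.4.0/24 and a route add follows). The following is a minimal sketch of that reconcile step, not kindnet's actual code; the names, IPs, and CIDRs are taken from the log, while the current node's own CIDR (10.244.0.0/24) is an assumption, since the log never prints it, and the real daemon programs routes via netlink rather than printing them.

	package main

	import "fmt"

	// node mirrors the fields the kindnet log prints for each cluster member.
	type node struct {
		name    string
		ip      string // node InternalIP, e.g. 172.29.129.10
		podCIDR string // pod subnet, e.g. 10.244.4.0/24
	}

	// reconcile walks the node list the way the log above does; kindnet runs
	// this roughly every 10 seconds. The real daemon installs routes via
	// netlink (routes.go in the log); this sketch only prints the intent.
	func reconcile(self string, nodes []node) {
		for _, n := range nodes {
			fmt.Printf("Handling node with IPs: map[%s:{}]\n", n.ip)
			if n.name == self {
				fmt.Println("handling current node") // local pods need no extra route
				continue
			}
			fmt.Printf("Node %s has CIDR [%s]\n", n.name, n.podCIDR)
			fmt.Printf("Adding route {Dst: %s Gw: %s}\n", n.podCIDR, n.ip)
		}
	}

	func main() {
		nodes := []node{
			{"multinode-032400", "172.29.136.201", "10.244.0.0/24"}, // own CIDR assumed
			{"multinode-032400-m02", "172.29.143.51", "10.244.1.0/24"},
			{"multinode-032400-m03", "172.29.129.10", "10.244.4.0/24"},
		}
		reconcile("multinode-032400", nodes)
	}

	Because the walk is idempotent, a node that rejoins with a new IP and CIDR (as m03 does above) simply produces a fresh route add on the next tick, which is exactly the behavior the captured log shows.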
	I0210 12:23:13.968176    5644 logs.go:123] Gathering logs for kube-controller-manager [9408ce83d7d3] ...
	I0210 12:23:13.968176    5644 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9408ce83d7d3"
	I0210 12:23:13.997324    5644 command_runner.go:130] ! I0210 11:58:59.087911       1 serving.go:386] Generated self-signed cert in-memory
	I0210 12:23:13.998248    5644 command_runner.go:130] ! I0210 11:59:00.079684       1 controllermanager.go:185] "Starting" version="v1.32.1"
	I0210 12:23:13.998248    5644 command_runner.go:130] ! I0210 11:59:00.079828       1 controllermanager.go:187] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0210 12:23:13.998281    5644 command_runner.go:130] ! I0210 11:59:00.082257       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0210 12:23:13.998281    5644 command_runner.go:130] ! I0210 11:59:00.082445       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0210 12:23:13.998281    5644 command_runner.go:130] ! I0210 11:59:00.082714       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0210 12:23:13.998281    5644 command_runner.go:130] ! I0210 11:59:00.083168       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0210 12:23:13.998281    5644 command_runner.go:130] ! I0210 11:59:07.525093       1 controllermanager.go:765] "Started controller" controller="serviceaccount-token-controller"
	I0210 12:23:13.998281    5644 command_runner.go:130] ! I0210 11:59:07.525455       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0210 12:23:13.998281    5644 command_runner.go:130] ! I0210 11:59:07.550577       1 controllermanager.go:765] "Started controller" controller="ttl-controller"
	I0210 12:23:13.998281    5644 command_runner.go:130] ! I0210 11:59:07.550894       1 ttl_controller.go:127] "Starting TTL controller" logger="ttl-controller"
	I0210 12:23:13.998281    5644 command_runner.go:130] ! I0210 11:59:07.550923       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0210 12:23:13.998281    5644 command_runner.go:130] ! I0210 11:59:07.575286       1 controllermanager.go:765] "Started controller" controller="token-cleaner-controller"
	I0210 12:23:13.998281    5644 command_runner.go:130] ! I0210 11:59:07.575386       1 tokencleaner.go:117] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0210 12:23:13.998281    5644 command_runner.go:130] ! I0210 11:59:07.575519       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0210 12:23:13.998281    5644 command_runner.go:130] ! I0210 11:59:07.575529       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0210 12:23:13.998281    5644 command_runner.go:130] ! I0210 11:59:07.608411       1 controllermanager.go:765] "Started controller" controller="ephemeral-volume-controller"
	I0210 12:23:13.998281    5644 command_runner.go:130] ! I0210 11:59:07.608435       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0210 12:23:13.998281    5644 command_runner.go:130] ! I0210 11:59:07.608574       1 controller.go:173] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0210 12:23:13.998281    5644 command_runner.go:130] ! I0210 11:59:07.608594       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0210 12:23:13.998281    5644 command_runner.go:130] ! I0210 11:59:07.626624       1 shared_informer.go:320] Caches are synced for tokens
	I0210 12:23:13.998281    5644 command_runner.go:130] ! I0210 11:59:07.632106       1 controllermanager.go:765] "Started controller" controller="job-controller"
	I0210 12:23:13.998281    5644 command_runner.go:130] ! I0210 11:59:07.632319       1 job_controller.go:243] "Starting job controller" logger="job-controller"
	I0210 12:23:13.998281    5644 command_runner.go:130] ! I0210 11:59:07.632332       1 shared_informer.go:313] Waiting for caches to sync for job
	I0210 12:23:13.998281    5644 command_runner.go:130] ! I0210 11:59:07.694202       1 controllermanager.go:765] "Started controller" controller="cronjob-controller"
	I0210 12:23:13.998281    5644 command_runner.go:130] ! I0210 11:59:07.694994       1 cronjob_controllerv2.go:145] "Starting cronjob controller v2" logger="cronjob-controller"
	I0210 12:23:13.998281    5644 command_runner.go:130] ! I0210 11:59:07.697650       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0210 12:23:13.998281    5644 command_runner.go:130] ! I0210 11:59:07.765406       1 controllermanager.go:765] "Started controller" controller="deployment-controller"
	I0210 12:23:13.998281    5644 command_runner.go:130] ! I0210 11:59:07.765979       1 deployment_controller.go:173] "Starting controller" logger="deployment-controller" controller="deployment"
	I0210 12:23:13.998812    5644 command_runner.go:130] ! I0210 11:59:07.765997       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0210 12:23:13.998812    5644 command_runner.go:130] ! I0210 11:59:07.782342       1 controllermanager.go:765] "Started controller" controller="replicaset-controller"
	I0210 12:23:13.998862    5644 command_runner.go:130] ! I0210 11:59:07.782670       1 replica_set.go:217] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0210 12:23:13.998901    5644 command_runner.go:130] ! I0210 11:59:07.782685       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0210 12:23:13.998901    5644 command_runner.go:130] ! I0210 11:59:07.850466       1 controllermanager.go:765] "Started controller" controller="disruption-controller"
	I0210 12:23:13.998901    5644 command_runner.go:130] ! I0210 11:59:07.850651       1 controllermanager.go:723] "Skipping a cloud provider controller" controller="node-route-controller"
	I0210 12:23:13.998901    5644 command_runner.go:130] ! I0210 11:59:07.850629       1 disruption.go:452] "Sending events to api server." logger="disruption-controller"
	I0210 12:23:13.998901    5644 command_runner.go:130] ! I0210 11:59:07.850833       1 disruption.go:463] "Starting disruption controller" logger="disruption-controller"
	I0210 12:23:13.998901    5644 command_runner.go:130] ! I0210 11:59:07.850844       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0210 12:23:13.998901    5644 command_runner.go:130] ! I0210 11:59:07.880892       1 controllermanager.go:765] "Started controller" controller="persistentvolume-binder-controller"
	I0210 12:23:13.998901    5644 command_runner.go:130] ! I0210 11:59:07.881116       1 pv_controller_base.go:308] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0210 12:23:13.998901    5644 command_runner.go:130] ! I0210 11:59:07.881129       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0210 12:23:13.998901    5644 command_runner.go:130] ! I0210 11:59:07.930262       1 controllermanager.go:765] "Started controller" controller="clusterrole-aggregation-controller"
	I0210 12:23:13.998901    5644 command_runner.go:130] ! I0210 11:59:07.930372       1 clusterroleaggregation_controller.go:194] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0210 12:23:13.998901    5644 command_runner.go:130] ! I0210 11:59:07.930897       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0210 12:23:13.998901    5644 command_runner.go:130] ! I0210 11:59:07.945659       1 controllermanager.go:765] "Started controller" controller="pod-garbage-collector-controller"
	I0210 12:23:13.998901    5644 command_runner.go:130] ! I0210 11:59:07.946579       1 gc_controller.go:99] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0210 12:23:13.998901    5644 command_runner.go:130] ! I0210 11:59:07.946751       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0210 12:23:13.998901    5644 command_runner.go:130] ! I0210 11:59:07.997690       1 controllermanager.go:765] "Started controller" controller="namespace-controller"
	I0210 12:23:13.998901    5644 command_runner.go:130] ! I0210 11:59:07.998189       1 controllermanager.go:723] "Skipping a cloud provider controller" controller="cloud-node-lifecycle-controller"
	I0210 12:23:13.998901    5644 command_runner.go:130] ! I0210 11:59:07.997759       1 namespace_controller.go:202] "Starting namespace controller" logger="namespace-controller"
	I0210 12:23:13.998901    5644 command_runner.go:130] ! I0210 11:59:07.998323       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0210 12:23:13.998901    5644 command_runner.go:130] ! I0210 11:59:08.135040       1 controllermanager.go:765] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0210 12:23:13.998901    5644 command_runner.go:130] ! I0210 11:59:08.135118       1 pvc_protection_controller.go:168] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0210 12:23:13.998901    5644 command_runner.go:130] ! I0210 11:59:08.135130       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0210 12:23:13.998901    5644 command_runner.go:130] ! I0210 11:59:08.290937       1 controllermanager.go:765] "Started controller" controller="persistentvolume-protection-controller"
	I0210 12:23:13.998901    5644 command_runner.go:130] ! I0210 11:59:08.291080       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="kube-apiserver-serving-clustertrustbundle-publisher-controller" requiredFeatureGates=["ClusterTrustBundle"]
	I0210 12:23:13.998901    5644 command_runner.go:130] ! I0210 11:59:08.293569       1 pv_protection_controller.go:81] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0210 12:23:13.998901    5644 command_runner.go:130] ! I0210 11:59:08.293594       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0210 12:23:13.998901    5644 command_runner.go:130] ! I0210 11:59:08.435030       1 controllermanager.go:765] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0210 12:23:13.999432    5644 command_runner.go:130] ! I0210 11:59:08.435146       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0210 12:23:13.999432    5644 command_runner.go:130] ! I0210 11:59:08.435984       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0210 12:23:13.999493    5644 command_runner.go:130] ! I0210 11:59:08.742172       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0210 12:23:13.999493    5644 command_runner.go:130] ! I0210 11:59:08.742257       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0210 12:23:13.999554    5644 command_runner.go:130] ! I0210 11:59:08.742274       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0210 12:23:13.999600    5644 command_runner.go:130] ! I0210 11:59:08.742293       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0210 12:23:13.999600    5644 command_runner.go:130] ! I0210 11:59:08.742308       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0210 12:23:13.999600    5644 command_runner.go:130] ! I0210 11:59:08.742326       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0210 12:23:13.999659    5644 command_runner.go:130] ! I0210 11:59:08.742346       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0210 12:23:13.999659    5644 command_runner.go:130] ! I0210 11:59:08.742463       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0210 12:23:13.999726    5644 command_runner.go:130] ! I0210 11:59:08.742481       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0210 12:23:13.999752    5644 command_runner.go:130] ! I0210 11:59:08.742527       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0210 12:23:13.999752    5644 command_runner.go:130] ! I0210 11:59:08.742551       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0210 12:23:13.999752    5644 command_runner.go:130] ! I0210 11:59:08.742568       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0210 12:23:13.999752    5644 command_runner.go:130] ! I0210 11:59:08.742584       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0210 12:23:13.999752    5644 command_runner.go:130] ! W0210 11:59:08.742597       1 shared_informer.go:597] resyncPeriod 20h8m15.80202588s is smaller than resyncCheckPeriod 22h17m54.981661418s and the informer has already started. Changing it to 22h17m54.981661418s
	I0210 12:23:13.999752    5644 command_runner.go:130] ! I0210 11:59:08.742631       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0210 12:23:13.999752    5644 command_runner.go:130] ! I0210 11:59:08.742652       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0210 12:23:13.999752    5644 command_runner.go:130] ! I0210 11:59:08.742674       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0210 12:23:13.999752    5644 command_runner.go:130] ! W0210 11:59:08.742683       1 shared_informer.go:597] resyncPeriod 18h34m58.865598394s is smaller than resyncCheckPeriod 22h17m54.981661418s and the informer has already started. Changing it to 22h17m54.981661418s
	I0210 12:23:13.999752    5644 command_runner.go:130] ! I0210 11:59:08.742710       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0210 12:23:13.999752    5644 command_runner.go:130] ! I0210 11:59:08.742733       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0210 12:23:13.999752    5644 command_runner.go:130] ! I0210 11:59:08.742757       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0210 12:23:13.999752    5644 command_runner.go:130] ! I0210 11:59:08.742786       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0210 12:23:13.999752    5644 command_runner.go:130] ! I0210 11:59:08.742950       1 controllermanager.go:765] "Started controller" controller="resourcequota-controller"
	I0210 12:23:13.999752    5644 command_runner.go:130] ! I0210 11:59:08.743011       1 resource_quota_controller.go:300] "Starting resource quota controller" logger="resourcequota-controller"
	I0210 12:23:13.999752    5644 command_runner.go:130] ! I0210 11:59:08.743022       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0210 12:23:13.999752    5644 command_runner.go:130] ! I0210 11:59:08.743050       1 resource_quota_monitor.go:308] "QuotaMonitor running" logger="resourcequota-controller"
	I0210 12:23:13.999752    5644 command_runner.go:130] ! I0210 11:59:08.897782       1 controllermanager.go:765] "Started controller" controller="daemonset-controller"
	I0210 12:23:13.999752    5644 command_runner.go:130] ! I0210 11:59:08.898567       1 daemon_controller.go:294] "Starting daemon sets controller" logger="daemonset-controller"
	I0210 12:23:13.999752    5644 command_runner.go:130] ! I0210 11:59:08.898750       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0210 12:23:13.999752    5644 command_runner.go:130] ! W0210 11:59:09.538965       1 type.go:183] The watchlist request for nodes ended with an error, falling back to the standard LIST semantics, err = nodes is forbidden: User "system:serviceaccount:kube-system:node-controller" cannot watch resource "nodes" in API group "" at the cluster scope
	I0210 12:23:13.999752    5644 command_runner.go:130] ! I0210 11:59:09.557948       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0210 12:23:13.999752    5644 command_runner.go:130] ! I0210 11:59:09.558013       1 controllermanager.go:765] "Started controller" controller="node-ipam-controller"
	I0210 12:23:14.000283    5644 command_runner.go:130] ! I0210 11:59:09.558024       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0210 12:23:14.000283    5644 command_runner.go:130] ! I0210 11:59:09.558263       1 node_ipam_controller.go:141] "Starting ipam controller" logger="node-ipam-controller"
	I0210 12:23:14.000331    5644 command_runner.go:130] ! I0210 11:59:09.558274       1 shared_informer.go:313] Waiting for caches to sync for node
	I0210 12:23:14.000358    5644 command_runner.go:130] ! I0210 11:59:09.587543       1 controllermanager.go:765] "Started controller" controller="serviceaccount-controller"
	I0210 12:23:14.000358    5644 command_runner.go:130] ! I0210 11:59:09.587843       1 serviceaccounts_controller.go:114] "Starting service account controller" logger="serviceaccount-controller"
	I0210 12:23:14.000402    5644 command_runner.go:130] ! I0210 11:59:09.587861       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0210 12:23:14.000448    5644 command_runner.go:130] ! I0210 11:59:09.635254       1 garbagecollector.go:144] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0210 12:23:14.000448    5644 command_runner.go:130] ! I0210 11:59:09.635299       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0210 12:23:14.000487    5644 command_runner.go:130] ! I0210 11:59:09.635329       1 graph_builder.go:351] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0210 12:23:14.000517    5644 command_runner.go:130] ! I0210 11:59:09.636160       1 controllermanager.go:765] "Started controller" controller="garbage-collector-controller"
	I0210 12:23:14.000517    5644 command_runner.go:130] ! I0210 11:59:09.814593       1 controllermanager.go:765] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0210 12:23:14.000517    5644 command_runner.go:130] ! I0210 11:59:09.814752       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0210 12:23:14.000517    5644 command_runner.go:130] ! I0210 11:59:09.814770       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0210 12:23:14.000609    5644 command_runner.go:130] ! I0210 11:59:09.817088       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0210 12:23:14.000609    5644 command_runner.go:130] ! I0210 11:59:09.817114       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0210 12:23:14.000609    5644 command_runner.go:130] ! I0210 11:59:09.817159       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0210 12:23:14.000693    5644 command_runner.go:130] ! I0210 11:59:09.817166       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0210 12:23:14.000693    5644 command_runner.go:130] ! I0210 11:59:09.817276       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0210 12:23:14.000693    5644 command_runner.go:130] ! I0210 11:59:09.817288       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0210 12:23:14.000735    5644 command_runner.go:130] ! I0210 11:59:09.817325       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0210 12:23:14.000735    5644 command_runner.go:130] ! I0210 11:59:09.817457       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0210 12:23:14.000806    5644 command_runner.go:130] ! I0210 11:59:09.817598       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0210 12:23:14.000806    5644 command_runner.go:130] ! I0210 11:59:09.817777       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0210 12:23:14.000863    5644 command_runner.go:130] ! I0210 11:59:09.873976       1 controllermanager.go:765] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0210 12:23:14.000904    5644 command_runner.go:130] ! I0210 11:59:09.874097       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0210 12:23:14.000904    5644 command_runner.go:130] ! I0210 11:59:09.874114       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0210 12:23:14.000939    5644 command_runner.go:130] ! I0210 11:59:10.010350       1 controllermanager.go:765] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0210 12:23:14.000999    5644 command_runner.go:130] ! I0210 11:59:10.010713       1 controllermanager.go:743] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0210 12:23:14.000999    5644 command_runner.go:130] ! I0210 11:59:10.010555       1 attach_detach_controller.go:338] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0210 12:23:14.000999    5644 command_runner.go:130] ! I0210 11:59:10.010999       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0210 12:23:14.000999    5644 command_runner.go:130] ! I0210 11:59:10.148245       1 controllermanager.go:765] "Started controller" controller="endpoints-controller"
	I0210 12:23:14.000999    5644 command_runner.go:130] ! I0210 11:59:10.148336       1 endpoints_controller.go:182] "Starting endpoint controller" logger="endpoints-controller"
	I0210 12:23:14.000999    5644 command_runner.go:130] ! I0210 11:59:10.148619       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0210 12:23:14.000999    5644 command_runner.go:130] ! I0210 11:59:10.294135       1 controllermanager.go:765] "Started controller" controller="statefulset-controller"
	I0210 12:23:14.000999    5644 command_runner.go:130] ! I0210 11:59:10.294378       1 stateful_set.go:166] "Starting stateful set controller" logger="statefulset-controller"
	I0210 12:23:14.000999    5644 command_runner.go:130] ! I0210 11:59:10.294395       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0210 12:23:14.000999    5644 command_runner.go:130] ! I0210 11:59:10.455757       1 controllermanager.go:765] "Started controller" controller="persistentvolume-expander-controller"
	I0210 12:23:14.000999    5644 command_runner.go:130] ! I0210 11:59:10.456357       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0210 12:23:14.000999    5644 command_runner.go:130] ! I0210 11:59:10.456388       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0210 12:23:14.000999    5644 command_runner.go:130] ! I0210 11:59:10.617918       1 controllermanager.go:765] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0210 12:23:14.000999    5644 command_runner.go:130] ! I0210 11:59:10.618004       1 publisher.go:107] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0210 12:23:14.000999    5644 command_runner.go:130] ! I0210 11:59:10.618017       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0210 12:23:14.000999    5644 command_runner.go:130] ! I0210 11:59:10.630001       1 controllermanager.go:765] "Started controller" controller="taint-eviction-controller"
	I0210 12:23:14.000999    5644 command_runner.go:130] ! I0210 11:59:10.630344       1 taint_eviction.go:281] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0210 12:23:14.000999    5644 command_runner.go:130] ! I0210 11:59:10.630739       1 taint_eviction.go:287] "Sending events to api server" logger="taint-eviction-controller"
	I0210 12:23:14.000999    5644 command_runner.go:130] ! I0210 11:59:10.630915       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0210 12:23:14.000999    5644 command_runner.go:130] ! I0210 11:59:10.683156       1 node_lifecycle_controller.go:432] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0210 12:23:14.000999    5644 command_runner.go:130] ! I0210 11:59:10.683264       1 controllermanager.go:765] "Started controller" controller="node-lifecycle-controller"
	I0210 12:23:14.000999    5644 command_runner.go:130] ! I0210 11:59:10.683357       1 node_lifecycle_controller.go:466] "Sending events to api server" logger="node-lifecycle-controller"
	I0210 12:23:14.000999    5644 command_runner.go:130] ! I0210 11:59:10.683709       1 node_lifecycle_controller.go:477] "Starting node controller" logger="node-lifecycle-controller"
	I0210 12:23:14.000999    5644 command_runner.go:130] ! I0210 11:59:10.683833       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0210 12:23:14.000999    5644 command_runner.go:130] ! I0210 11:59:10.764503       1 controllermanager.go:765] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0210 12:23:14.000999    5644 command_runner.go:130] ! I0210 11:59:10.764626       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0210 12:23:14.000999    5644 command_runner.go:130] ! I0210 11:59:10.893425       1 controllermanager.go:765] "Started controller" controller="bootstrap-signer-controller"
	I0210 12:23:14.000999    5644 command_runner.go:130] ! I0210 11:59:10.893535       1 controllermanager.go:723] "Skipping a cloud provider controller" controller="service-lb-controller"
	I0210 12:23:14.000999    5644 command_runner.go:130] ! I0210 11:59:10.893547       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="volumeattributesclass-protection-controller" requiredFeatureGates=["VolumeAttributesClass"]
	I0210 12:23:14.000999    5644 command_runner.go:130] ! I0210 11:59:10.893637       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0210 12:23:14.000999    5644 command_runner.go:130] ! I0210 11:59:11.207689       1 controllermanager.go:765] "Started controller" controller="ttl-after-finished-controller"
	I0210 12:23:14.000999    5644 command_runner.go:130] ! I0210 11:59:11.207720       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0210 12:23:14.000999    5644 command_runner.go:130] ! I0210 11:59:11.208285       1 ttlafterfinished_controller.go:112] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0210 12:23:14.000999    5644 command_runner.go:130] ! I0210 11:59:11.208325       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0210 12:23:14.001528    5644 command_runner.go:130] ! I0210 11:59:11.268236       1 controllermanager.go:765] "Started controller" controller="endpointslice-mirroring-controller"
	I0210 12:23:14.001568    5644 command_runner.go:130] ! I0210 11:59:11.268441       1 endpointslicemirroring_controller.go:227] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0210 12:23:14.001568    5644 command_runner.go:130] ! I0210 11:59:11.268458       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0210 12:23:14.001609    5644 command_runner.go:130] ! I0210 11:59:11.834451       1 controllermanager.go:765] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0210 12:23:14.001609    5644 command_runner.go:130] ! I0210 11:59:11.839072       1 horizontal.go:201] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0210 12:23:14.001649    5644 command_runner.go:130] ! I0210 11:59:11.839109       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0210 12:23:14.001649    5644 command_runner.go:130] ! I0210 11:59:11.954065       1 controllermanager.go:765] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0210 12:23:14.001649    5644 command_runner.go:130] ! I0210 11:59:11.954564       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="selinux-warning-controller" requiredFeatureGates=["SELinuxChangePolicy"]
	I0210 12:23:14.001698    5644 command_runner.go:130] ! I0210 11:59:11.954191       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0210 12:23:14.001698    5644 command_runner.go:130] ! I0210 11:59:11.971728       1 controllermanager.go:765] "Started controller" controller="endpointslice-controller"
	I0210 12:23:14.001740    5644 command_runner.go:130] ! I0210 11:59:11.972266       1 endpointslice_controller.go:281] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0210 12:23:14.001740    5644 command_runner.go:130] ! I0210 11:59:11.972442       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0210 12:23:14.001740    5644 command_runner.go:130] ! I0210 11:59:11.988553       1 controllermanager.go:765] "Started controller" controller="replicationcontroller-controller"
	I0210 12:23:14.002032    5644 command_runner.go:130] ! I0210 11:59:11.989935       1 replica_set.go:217] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0210 12:23:14.002071    5644 command_runner.go:130] ! I0210 11:59:11.990037       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0210 12:23:14.002112    5644 command_runner.go:130] ! I0210 11:59:12.002658       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0210 12:23:14.002112    5644 command_runner.go:130] ! I0210 11:59:12.026212       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0210 12:23:14.002152    5644 command_runner.go:130] ! I0210 11:59:12.053411       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-032400\" does not exist"
	I0210 12:23:14.002152    5644 command_runner.go:130] ! I0210 11:59:12.059575       1 shared_informer.go:320] Caches are synced for node
	I0210 12:23:14.002152    5644 command_runner.go:130] ! I0210 11:59:12.059677       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0210 12:23:14.002152    5644 command_runner.go:130] ! I0210 11:59:12.060669       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0210 12:23:14.002152    5644 command_runner.go:130] ! I0210 11:59:12.060694       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0210 12:23:14.002152    5644 command_runner.go:130] ! I0210 11:59:12.060736       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0210 12:23:14.002152    5644 command_runner.go:130] ! I0210 11:59:12.075788       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0210 12:23:14.002152    5644 command_runner.go:130] ! I0210 11:59:12.090277       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0210 12:23:14.002152    5644 command_runner.go:130] ! I0210 11:59:12.093866       1 shared_informer.go:320] Caches are synced for PV protection
	I0210 12:23:14.002152    5644 command_runner.go:130] ! I0210 11:59:12.094251       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-032400" podCIDRs=["10.244.0.0/24"]
	I0210 12:23:14.002152    5644 command_runner.go:130] ! I0210 11:59:12.094298       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400"
	I0210 12:23:14.002152    5644 command_runner.go:130] ! I0210 11:59:12.094445       1 shared_informer.go:320] Caches are synced for stateful set
	I0210 12:23:14.002152    5644 command_runner.go:130] ! I0210 11:59:12.094647       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400"
	I0210 12:23:14.002152    5644 command_runner.go:130] ! I0210 11:59:12.094787       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0210 12:23:14.002152    5644 command_runner.go:130] ! I0210 11:59:12.098777       1 shared_informer.go:320] Caches are synced for cronjob
	I0210 12:23:14.002152    5644 command_runner.go:130] ! I0210 11:59:12.099001       1 shared_informer.go:320] Caches are synced for namespace
	I0210 12:23:14.002152    5644 command_runner.go:130] ! I0210 11:59:12.099016       1 shared_informer.go:320] Caches are synced for daemon sets
	I0210 12:23:14.002152    5644 command_runner.go:130] ! I0210 11:59:12.103407       1 shared_informer.go:320] Caches are synced for resource quota
	I0210 12:23:14.002152    5644 command_runner.go:130] ! I0210 11:59:12.108852       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0210 12:23:14.002152    5644 command_runner.go:130] ! I0210 11:59:12.108917       1 shared_informer.go:320] Caches are synced for ephemeral
	I0210 12:23:14.002152    5644 command_runner.go:130] ! I0210 11:59:12.111199       1 shared_informer.go:320] Caches are synced for attach detach
	I0210 12:23:14.002152    5644 command_runner.go:130] ! I0210 11:59:12.115876       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0210 12:23:14.002152    5644 command_runner.go:130] ! I0210 11:59:12.117732       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0210 12:23:14.002152    5644 command_runner.go:130] ! I0210 11:59:12.117858       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0210 12:23:14.002152    5644 command_runner.go:130] ! I0210 11:59:12.117925       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0210 12:23:14.002152    5644 command_runner.go:130] ! I0210 11:59:12.118059       1 shared_informer.go:320] Caches are synced for crt configmap
	I0210 12:23:14.002152    5644 command_runner.go:130] ! I0210 11:59:12.127026       1 shared_informer.go:320] Caches are synced for garbage collector
	I0210 12:23:14.002152    5644 command_runner.go:130] ! I0210 11:59:12.132202       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0210 12:23:14.002152    5644 command_runner.go:130] ! I0210 11:59:12.132293       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0210 12:23:14.002152    5644 command_runner.go:130] ! I0210 11:59:12.132357       1 shared_informer.go:320] Caches are synced for job
	I0210 12:23:14.002152    5644 command_runner.go:130] ! I0210 11:59:12.136457       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0210 12:23:14.002152    5644 command_runner.go:130] ! I0210 11:59:12.136477       1 shared_informer.go:320] Caches are synced for PVC protection
	I0210 12:23:14.002152    5644 command_runner.go:130] ! I0210 11:59:12.136864       1 shared_informer.go:320] Caches are synced for garbage collector
	I0210 12:23:14.002152    5644 command_runner.go:130] ! I0210 11:59:12.137022       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0210 12:23:14.002152    5644 command_runner.go:130] ! I0210 11:59:12.137034       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0210 12:23:14.002152    5644 command_runner.go:130] ! I0210 11:59:12.140123       1 shared_informer.go:320] Caches are synced for HPA
	I0210 12:23:14.002152    5644 command_runner.go:130] ! I0210 11:59:12.143611       1 shared_informer.go:320] Caches are synced for resource quota
	I0210 12:23:14.002679    5644 command_runner.go:130] ! I0210 11:59:12.146959       1 shared_informer.go:320] Caches are synced for GC
	I0210 12:23:14.002679    5644 command_runner.go:130] ! I0210 11:59:12.149917       1 shared_informer.go:320] Caches are synced for endpoint
	I0210 12:23:14.002679    5644 command_runner.go:130] ! I0210 11:59:12.151583       1 shared_informer.go:320] Caches are synced for TTL
	I0210 12:23:14.002721    5644 command_runner.go:130] ! I0210 11:59:12.151756       1 shared_informer.go:320] Caches are synced for disruption
	I0210 12:23:14.002721    5644 command_runner.go:130] ! I0210 11:59:12.155408       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0210 12:23:14.002756    5644 command_runner.go:130] ! I0210 11:59:12.156838       1 shared_informer.go:320] Caches are synced for expand
	I0210 12:23:14.002756    5644 command_runner.go:130] ! I0210 11:59:12.166263       1 shared_informer.go:320] Caches are synced for deployment
	I0210 12:23:14.002756    5644 command_runner.go:130] ! I0210 11:59:12.169607       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0210 12:23:14.002795    5644 command_runner.go:130] ! I0210 11:59:12.173266       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0210 12:23:14.002795    5644 command_runner.go:130] ! I0210 11:59:12.183228       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0210 12:23:14.002795    5644 command_runner.go:130] ! I0210 11:59:12.183461       1 shared_informer.go:320] Caches are synced for persistent volume
	I0210 12:23:14.002795    5644 command_runner.go:130] ! I0210 11:59:12.184165       1 shared_informer.go:320] Caches are synced for taint
	I0210 12:23:14.002795    5644 command_runner.go:130] ! I0210 11:59:12.184514       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0210 12:23:14.002795    5644 command_runner.go:130] ! I0210 11:59:12.185265       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-032400"
	I0210 12:23:14.002795    5644 command_runner.go:130] ! I0210 11:59:12.186883       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I0210 12:23:14.002795    5644 command_runner.go:130] ! I0210 11:59:12.189882       1 shared_informer.go:320] Caches are synced for service account
	I0210 12:23:14.002795    5644 command_runner.go:130] ! I0210 11:59:12.964659       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400"
	I0210 12:23:14.002795    5644 command_runner.go:130] ! I0210 11:59:14.306836       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="1.342470129s"
	I0210 12:23:14.002795    5644 command_runner.go:130] ! I0210 11:59:14.421918       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="114.771421ms"
	I0210 12:23:14.002795    5644 command_runner.go:130] ! I0210 11:59:14.422243       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="109.5µs"
	I0210 12:23:14.002795    5644 command_runner.go:130] ! I0210 11:59:14.423300       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="45.7µs"
	I0210 12:23:14.002795    5644 command_runner.go:130] ! I0210 11:59:15.150166       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="328.244339ms"
	I0210 12:23:14.002795    5644 command_runner.go:130] ! I0210 11:59:15.175057       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="24.827249ms"
	I0210 12:23:14.002795    5644 command_runner.go:130] ! I0210 11:59:15.175285       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="52.7µs"
	I0210 12:23:14.002795    5644 command_runner.go:130] ! I0210 11:59:38.469109       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400"
	I0210 12:23:14.002795    5644 command_runner.go:130] ! I0210 11:59:41.029106       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400"
	I0210 12:23:14.002795    5644 command_runner.go:130] ! I0210 11:59:41.056002       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400"
	I0210 12:23:14.002795    5644 command_runner.go:130] ! I0210 11:59:41.223446       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="83.5µs"
	I0210 12:23:14.002795    5644 command_runner.go:130] ! I0210 11:59:42.192695       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0210 12:23:14.002795    5644 command_runner.go:130] ! I0210 11:59:43.176439       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="220.4µs"
	I0210 12:23:14.002795    5644 command_runner.go:130] ! I0210 11:59:45.142362       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="156.401µs"
	I0210 12:23:14.002795    5644 command_runner.go:130] ! I0210 11:59:46.978311       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="109.784549ms"
	I0210 12:23:14.002795    5644 command_runner.go:130] ! I0210 11:59:46.978923       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="160.001µs"
	I0210 12:23:14.002795    5644 command_runner.go:130] ! I0210 12:02:24.351602       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-032400-m02\" does not exist"
	I0210 12:23:14.002795    5644 command_runner.go:130] ! I0210 12:02:24.372872       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-032400-m02" podCIDRs=["10.244.1.0/24"]
	I0210 12:23:14.002795    5644 command_runner.go:130] ! I0210 12:02:24.372982       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:23:14.002795    5644 command_runner.go:130] ! I0210 12:02:24.373016       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:23:14.002795    5644 command_runner.go:130] ! I0210 12:02:24.386686       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:23:14.002795    5644 command_runner.go:130] ! I0210 12:02:24.500791       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:23:14.003321    5644 command_runner.go:130] ! I0210 12:02:25.042334       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:23:14.003321    5644 command_runner.go:130] ! I0210 12:02:27.223123       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-032400-m02"
	I0210 12:23:14.003321    5644 command_runner.go:130] ! I0210 12:02:27.269202       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:23:14.003399    5644 command_runner.go:130] ! I0210 12:02:34.686149       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:23:14.003480    5644 command_runner.go:130] ! I0210 12:02:55.584256       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:23:14.003480    5644 command_runner.go:130] ! I0210 12:02:58.900478       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-032400-m02"
	I0210 12:23:14.003480    5644 command_runner.go:130] ! I0210 12:02:58.901463       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:23:14.003480    5644 command_runner.go:130] ! I0210 12:02:58.923096       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:23:14.003725    5644 command_runner.go:130] ! I0210 12:03:02.254436       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:23:14.003725    5644 command_runner.go:130] ! I0210 12:03:23.965178       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="118.072754ms"
	I0210 12:23:14.003725    5644 command_runner.go:130] ! I0210 12:03:23.989777       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="24.546974ms"
	I0210 12:23:14.003817    5644 command_runner.go:130] ! I0210 12:03:23.990705       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="86.6µs"
	I0210 12:23:14.003817    5644 command_runner.go:130] ! I0210 12:03:23.998308       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="47.1µs"
	I0210 12:23:14.003817    5644 command_runner.go:130] ! I0210 12:03:26.707538       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="12.348343ms"
	I0210 12:23:14.003905    5644 command_runner.go:130] ! I0210 12:03:26.708146       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="79.501µs"
	I0210 12:23:14.003905    5644 command_runner.go:130] ! I0210 12:03:27.077137       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="9.878814ms"
	I0210 12:23:14.003905    5644 command_runner.go:130] ! I0210 12:03:27.077509       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="53.8µs"
	I0210 12:23:14.003905    5644 command_runner.go:130] ! I0210 12:03:43.387780       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400"
	I0210 12:23:14.003987    5644 command_runner.go:130] ! I0210 12:03:56.589176       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:23:14.003987    5644 command_runner.go:130] ! I0210 12:07:05.733007       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-032400-m03\" does not exist"
	I0210 12:23:14.003987    5644 command_runner.go:130] ! I0210 12:07:05.733621       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-032400-m02"
	I0210 12:23:14.004063    5644 command_runner.go:130] ! I0210 12:07:05.776872       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-032400-m03" podCIDRs=["10.244.2.0/24"]
	I0210 12:23:14.004063    5644 command_runner.go:130] ! I0210 12:07:05.777009       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:14.004129    5644 command_runner.go:130] ! E0210 12:07:05.833973       1 range_allocator.go:433] "Failed to update node PodCIDR after multiple attempts" err="failed to patch node CIDR: Node \"multinode-032400-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.3.0/24\", \"10.244.2.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="multinode-032400-m03" podCIDRs=["10.244.3.0/24"]
	I0210 12:23:14.004158    5644 command_runner.go:130] ! E0210 12:07:05.834115       1 range_allocator.go:439] "CIDR assignment for node failed. Releasing allocated CIDR" err="failed to patch node CIDR: Node \"multinode-032400-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.3.0/24\", \"10.244.2.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="multinode-032400-m03"
	I0210 12:23:14.004194    5644 command_runner.go:130] ! E0210 12:07:05.834184       1 range_allocator.go:252] "Unhandled Error" err="error syncing 'multinode-032400-m03': failed to patch node CIDR: Node \"multinode-032400-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.3.0/24\", \"10.244.2.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid], requeuing" logger="UnhandledError"
	I0210 12:23:14.004194    5644 command_runner.go:130] ! I0210 12:07:05.834211       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:14.004194    5644 command_runner.go:130] ! I0210 12:07:05.839673       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:14.004285    5644 command_runner.go:130] ! I0210 12:07:06.048438       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:14.004285    5644 command_runner.go:130] ! I0210 12:07:06.603626       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:14.004285    5644 command_runner.go:130] ! I0210 12:07:07.285160       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-032400-m03"
	I0210 12:23:14.004355    5644 command_runner.go:130] ! I0210 12:07:07.401415       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:14.004355    5644 command_runner.go:130] ! I0210 12:07:15.795765       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:14.004355    5644 command_runner.go:130] ! I0210 12:07:34.465645       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:14.004420    5644 command_runner.go:130] ! I0210 12:07:34.466343       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-032400-m02"
	I0210 12:23:14.004420    5644 command_runner.go:130] ! I0210 12:07:34.484609       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:14.004420    5644 command_runner.go:130] ! I0210 12:07:36.177851       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:14.004420    5644 command_runner.go:130] ! I0210 12:07:37.325936       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:14.004493    5644 command_runner.go:130] ! I0210 12:08:11.294432       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:23:14.004493    5644 command_runner.go:130] ! I0210 12:09:09.390735       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400"
	I0210 12:23:14.004493    5644 command_runner.go:130] ! I0210 12:10:40.526492       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:14.004493    5644 command_runner.go:130] ! I0210 12:13:17.755688       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:23:14.004573    5644 command_runner.go:130] ! I0210 12:14:15.383603       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400"
	I0210 12:23:14.004600    5644 command_runner.go:130] ! I0210 12:15:17.429501       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:14.004600    5644 command_runner.go:130] ! I0210 12:15:17.429973       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-032400-m02"
	I0210 12:23:14.004600    5644 command_runner.go:130] ! I0210 12:15:17.456736       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:14.004600    5644 command_runner.go:130] ! I0210 12:15:22.504479       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:14.004600    5644 command_runner.go:130] ! I0210 12:17:20.800246       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:14.004600    5644 command_runner.go:130] ! I0210 12:17:20.822743       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:14.004600    5644 command_runner.go:130] ! I0210 12:17:25.699359       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-032400-m02"
	I0210 12:23:14.004600    5644 command_runner.go:130] ! I0210 12:17:31.139874       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-032400-m03\" does not exist"
	I0210 12:23:14.004600    5644 command_runner.go:130] ! I0210 12:17:31.140321       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-032400-m02"
	I0210 12:23:14.004600    5644 command_runner.go:130] ! I0210 12:17:31.172716       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-032400-m03" podCIDRs=["10.244.4.0/24"]
	I0210 12:23:14.004600    5644 command_runner.go:130] ! I0210 12:17:31.172758       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:14.004600    5644 command_runner.go:130] ! I0210 12:17:31.172780       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:14.004600    5644 command_runner.go:130] ! I0210 12:17:31.210555       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:14.004600    5644 command_runner.go:130] ! I0210 12:17:31.721125       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:14.004600    5644 command_runner.go:130] ! I0210 12:17:32.574669       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:14.004600    5644 command_runner.go:130] ! I0210 12:17:41.246655       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:14.004600    5644 command_runner.go:130] ! I0210 12:17:46.096472       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-032400-m02"
	I0210 12:23:14.004600    5644 command_runner.go:130] ! I0210 12:17:46.097093       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:14.004600    5644 command_runner.go:130] ! I0210 12:17:46.112921       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:14.004600    5644 command_runner.go:130] ! I0210 12:17:47.511367       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:14.004600    5644 command_runner.go:130] ! I0210 12:18:23.730002       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:23:14.004600    5644 command_runner.go:130] ! I0210 12:19:22.675608       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400"
	I0210 12:23:14.004600    5644 command_runner.go:130] ! I0210 12:19:27.542457       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-032400-m02"
	I0210 12:23:14.004600    5644 command_runner.go:130] ! I0210 12:19:27.543090       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:14.004600    5644 command_runner.go:130] ! I0210 12:19:27.562043       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:14.004600    5644 command_runner.go:130] ! I0210 12:19:32.704197       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:14.026367    5644 logs.go:123] Gathering logs for container status ...
	I0210 12:23:14.026367    5644 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0210 12:23:14.089377    5644 command_runner.go:130] > CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	I0210 12:23:14.089487    5644 command_runner.go:130] > ab1277406daa9       8c811b4aec35f                                                                                         11 seconds ago       Running             busybox                   1                   0f8bbdd2fed29       busybox-58667487b6-8shfg
	I0210 12:23:14.089487    5644 command_runner.go:130] > 9240ce80f94ce       c69fa2e9cbf5f                                                                                         11 seconds ago       Running             coredns                   1                   e58006549b603       coredns-668d6bf9bc-w8rr9
	I0210 12:23:14.089598    5644 command_runner.go:130] > 59ace13383a7f       6e38f40d628db                                                                                         31 seconds ago       Running             storage-provisioner       2                   bd998e6ebeb39       storage-provisioner
	I0210 12:23:14.089645    5644 command_runner.go:130] > efc2d4164d811       d300845f67aeb                                                                                         About a minute ago   Running             kindnet-cni               1                   e5b54589cf0f1       kindnet-c2mb8
	I0210 12:23:14.089645    5644 command_runner.go:130] > e57ea4c7f300b       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   bd998e6ebeb39       storage-provisioner
	I0210 12:23:14.089645    5644 command_runner.go:130] > 6640b4e3d696c       e29f9c7391fd9                                                                                         About a minute ago   Running             kube-proxy                1                   b9afdceca416d       kube-proxy-rrh82
	I0210 12:23:14.089645    5644 command_runner.go:130] > bd1666238ae65       019ee182b58e2                                                                                         About a minute ago   Running             kube-controller-manager   1                   9c3e574a33498       kube-controller-manager-multinode-032400
	I0210 12:23:14.089645    5644 command_runner.go:130] > f368bd8767741       95c0bda56fc4d                                                                                         About a minute ago   Running             kube-apiserver            0                   ae5696c38864a       kube-apiserver-multinode-032400
	I0210 12:23:14.089645    5644 command_runner.go:130] > 2c0b973818252       a9e7e6b294baf                                                                                         About a minute ago   Running             etcd                      0                   016ad4d720680       etcd-multinode-032400
	I0210 12:23:14.089645    5644 command_runner.go:130] > 440b6adf4512a       2b0d6572d062c                                                                                         About a minute ago   Running             kube-scheduler            1                   8059b20f65945       kube-scheduler-multinode-032400
	I0210 12:23:14.089645    5644 command_runner.go:130] > 8d0c6584f2b12       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   19 minutes ago       Exited              busybox                   0                   ed034f1578c96       busybox-58667487b6-8shfg
	I0210 12:23:14.089645    5644 command_runner.go:130] > c5b854dbb9121       c69fa2e9cbf5f                                                                                         23 minutes ago       Exited              coredns                   0                   794995bca6b5b       coredns-668d6bf9bc-w8rr9
	I0210 12:23:14.089645    5644 command_runner.go:130] > 4439940fa5f42       kindest/kindnetd@sha256:56ea59f77258052c4506076525318ffa66817500f68e94a50fdf7d600a280d26              23 minutes ago       Exited              kindnet-cni               0                   26d9e119a02c5       kindnet-c2mb8
	I0210 12:23:14.089645    5644 command_runner.go:130] > 148309413de8d       e29f9c7391fd9                                                                                         23 minutes ago       Exited              kube-proxy                0                   a70f430921ec2       kube-proxy-rrh82
	I0210 12:23:14.089645    5644 command_runner.go:130] > adf520f9b9d78       2b0d6572d062c                                                                                         24 minutes ago       Exited              kube-scheduler            0                   d33433fbce480       kube-scheduler-multinode-032400
	I0210 12:23:14.089645    5644 command_runner.go:130] > 9408ce83d7d38       019ee182b58e2                                                                                         24 minutes ago       Exited              kube-controller-manager   0                   ee16b295f58db       kube-controller-manager-multinode-032400
	I0210 12:23:14.095613    5644 logs.go:123] Gathering logs for coredns [9240ce80f94c] ...
	I0210 12:23:14.095643    5644 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9240ce80f94c"
	I0210 12:23:14.125030    5644 command_runner.go:130] > .:53
	I0210 12:23:14.125112    5644 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = fef4eccd4948b7d76fb3b866fead119cfbf3f792b566bc1c23dd6fceadf676c5da93bdd173179090b2c74bc875a74fc76ef5e51dcb6a910d6a8b189470a4fe6b
	I0210 12:23:14.125112    5644 command_runner.go:130] > CoreDNS-1.11.3
	I0210 12:23:14.125179    5644 command_runner.go:130] > linux/amd64, go1.21.11, a6338e9
	I0210 12:23:14.125179    5644 command_runner.go:130] > [INFO] 127.0.0.1:39941 - 15725 "HINFO IN 3724374125237206977.4573805755432620257. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.053165792s
	I0210 12:23:14.125179    5644 logs.go:123] Gathering logs for kube-scheduler [440b6adf4512] ...
	I0210 12:23:14.125179    5644 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 440b6adf4512"
	I0210 12:23:14.154258    5644 command_runner.go:130] ! I0210 12:21:56.035084       1 serving.go:386] Generated self-signed cert in-memory
	I0210 12:23:14.154579    5644 command_runner.go:130] ! W0210 12:21:58.137751       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0210 12:23:14.154706    5644 command_runner.go:130] ! W0210 12:21:58.138031       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0210 12:23:14.154706    5644 command_runner.go:130] ! W0210 12:21:58.138229       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0210 12:23:14.154706    5644 command_runner.go:130] ! W0210 12:21:58.138350       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0210 12:23:14.154706    5644 command_runner.go:130] ! I0210 12:21:58.239766       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.1"
	I0210 12:23:14.154706    5644 command_runner.go:130] ! I0210 12:21:58.239885       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0210 12:23:14.154706    5644 command_runner.go:130] ! I0210 12:21:58.246917       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0210 12:23:14.154706    5644 command_runner.go:130] ! I0210 12:21:58.246962       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0210 12:23:14.154706    5644 command_runner.go:130] ! I0210 12:21:58.248143       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0210 12:23:14.154706    5644 command_runner.go:130] ! I0210 12:21:58.248624       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0210 12:23:14.154706    5644 command_runner.go:130] ! I0210 12:21:58.347443       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0210 12:23:14.155955    5644 logs.go:123] Gathering logs for kube-proxy [6640b4e3d696] ...
	I0210 12:23:14.155955    5644 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6640b4e3d696"
	I0210 12:23:14.184020    5644 command_runner.go:130] ! I0210 12:22:00.934266       1 server_linux.go:66] "Using iptables proxy"
	I0210 12:23:14.184020    5644 command_runner.go:130] ! E0210 12:22:01.015806       1 proxier.go:733] "Error cleaning up nftables rules" err=<
	I0210 12:23:14.184325    5644 command_runner.go:130] ! 	could not run nftables command: /dev/stdin:1:1-24: Error: Could not process rule: Operation not supported
	I0210 12:23:14.184325    5644 command_runner.go:130] ! 	add table ip kube-proxy
	I0210 12:23:14.184325    5644 command_runner.go:130] ! 	^^^^^^^^^^^^^^^^^^^^^^^^
	I0210 12:23:14.184325    5644 command_runner.go:130] !  >
	I0210 12:23:14.184379    5644 command_runner.go:130] ! E0210 12:22:01.068390       1 proxier.go:733] "Error cleaning up nftables rules" err=<
	I0210 12:23:14.184379    5644 command_runner.go:130] ! 	could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
	I0210 12:23:14.184425    5644 command_runner.go:130] ! 	add table ip6 kube-proxy
	I0210 12:23:14.184425    5644 command_runner.go:130] ! 	^^^^^^^^^^^^^^^^^^^^^^^^^
	I0210 12:23:14.184425    5644 command_runner.go:130] !  >
	I0210 12:23:14.184479    5644 command_runner.go:130] ! I0210 12:22:01.115566       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["172.29.129.181"]
	I0210 12:23:14.184502    5644 command_runner.go:130] ! E0210 12:22:01.115738       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0210 12:23:14.184540    5644 command_runner.go:130] ! I0210 12:22:01.217636       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0210 12:23:14.184540    5644 command_runner.go:130] ! I0210 12:22:01.217732       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0210 12:23:14.184583    5644 command_runner.go:130] ! I0210 12:22:01.217759       1 server_linux.go:170] "Using iptables Proxier"
	I0210 12:23:14.184583    5644 command_runner.go:130] ! I0210 12:22:01.222523       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0210 12:23:14.184641    5644 command_runner.go:130] ! I0210 12:22:01.224100       1 server.go:497] "Version info" version="v1.32.1"
	I0210 12:23:14.184641    5644 command_runner.go:130] ! I0210 12:22:01.224411       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0210 12:23:14.184641    5644 command_runner.go:130] ! I0210 12:22:01.230640       1 config.go:199] "Starting service config controller"
	I0210 12:23:14.184694    5644 command_runner.go:130] ! I0210 12:22:01.232948       1 config.go:105] "Starting endpoint slice config controller"
	I0210 12:23:14.184694    5644 command_runner.go:130] ! I0210 12:22:01.233483       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0210 12:23:14.184733    5644 command_runner.go:130] ! I0210 12:22:01.233538       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0210 12:23:14.184733    5644 command_runner.go:130] ! I0210 12:22:01.243389       1 config.go:329] "Starting node config controller"
	I0210 12:23:14.184781    5644 command_runner.go:130] ! I0210 12:22:01.243415       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0210 12:23:14.184781    5644 command_runner.go:130] ! I0210 12:22:01.335845       1 shared_informer.go:320] Caches are synced for service config
	I0210 12:23:14.184820    5644 command_runner.go:130] ! I0210 12:22:01.335895       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0210 12:23:14.184820    5644 command_runner.go:130] ! I0210 12:22:01.345506       1 shared_informer.go:320] Caches are synced for node config
	I0210 12:23:14.187730    5644 logs.go:123] Gathering logs for kube-controller-manager [bd1666238ae6] ...
	I0210 12:23:14.187765    5644 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bd1666238ae6"
	I0210 12:23:14.222446    5644 command_runner.go:130] ! I0210 12:21:56.136957       1 serving.go:386] Generated self-signed cert in-memory
	I0210 12:23:14.222446    5644 command_runner.go:130] ! I0210 12:21:57.522140       1 controllermanager.go:185] "Starting" version="v1.32.1"
	I0210 12:23:14.222446    5644 command_runner.go:130] ! I0210 12:21:57.522494       1 controllermanager.go:187] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0210 12:23:14.222446    5644 command_runner.go:130] ! I0210 12:21:57.526750       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0210 12:23:14.222446    5644 command_runner.go:130] ! I0210 12:21:57.527225       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0210 12:23:14.222446    5644 command_runner.go:130] ! I0210 12:21:57.527482       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0210 12:23:14.222446    5644 command_runner.go:130] ! I0210 12:21:57.527780       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0210 12:23:14.222446    5644 command_runner.go:130] ! I0210 12:22:00.130437       1 controllermanager.go:765] "Started controller" controller="serviceaccount-token-controller"
	I0210 12:23:14.222446    5644 command_runner.go:130] ! I0210 12:22:00.131309       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0210 12:23:14.222446    5644 command_runner.go:130] ! I0210 12:22:00.141220       1 controllermanager.go:765] "Started controller" controller="deployment-controller"
	I0210 12:23:14.222446    5644 command_runner.go:130] ! I0210 12:22:00.141440       1 deployment_controller.go:173] "Starting controller" logger="deployment-controller" controller="deployment"
	I0210 12:23:14.222446    5644 command_runner.go:130] ! I0210 12:22:00.141453       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0210 12:23:14.222446    5644 command_runner.go:130] ! I0210 12:22:00.144469       1 controllermanager.go:765] "Started controller" controller="replicaset-controller"
	I0210 12:23:14.222446    5644 command_runner.go:130] ! I0210 12:22:00.144719       1 replica_set.go:217] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0210 12:23:14.222446    5644 command_runner.go:130] ! I0210 12:22:00.144731       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0210 12:23:14.222446    5644 command_runner.go:130] ! I0210 12:22:00.152448       1 controllermanager.go:765] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0210 12:23:14.222446    5644 command_runner.go:130] ! I0210 12:22:00.152587       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0210 12:23:14.222446    5644 command_runner.go:130] ! I0210 12:22:00.152599       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0210 12:23:14.222446    5644 command_runner.go:130] ! I0210 12:22:00.158456       1 controllermanager.go:765] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0210 12:23:14.222446    5644 command_runner.go:130] ! I0210 12:22:00.158611       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0210 12:23:14.222446    5644 command_runner.go:130] ! I0210 12:22:00.162098       1 controllermanager.go:765] "Started controller" controller="bootstrap-signer-controller"
	I0210 12:23:14.222446    5644 command_runner.go:130] ! I0210 12:22:00.162345       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0210 12:23:14.222446    5644 command_runner.go:130] ! I0210 12:22:00.162310       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0210 12:23:14.222446    5644 command_runner.go:130] ! I0210 12:22:00.234708       1 shared_informer.go:320] Caches are synced for tokens
	I0210 12:23:14.222446    5644 command_runner.go:130] ! I0210 12:22:00.279835       1 controllermanager.go:765] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0210 12:23:14.222446    5644 command_runner.go:130] ! I0210 12:22:00.279920       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0210 12:23:14.222446    5644 command_runner.go:130] ! I0210 12:22:00.284387       1 controllermanager.go:765] "Started controller" controller="serviceaccount-controller"
	I0210 12:23:14.222967    5644 command_runner.go:130] ! I0210 12:22:00.284535       1 serviceaccounts_controller.go:114] "Starting service account controller" logger="serviceaccount-controller"
	I0210 12:23:14.222967    5644 command_runner.go:130] ! I0210 12:22:00.284562       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0210 12:23:14.223015    5644 command_runner.go:130] ! I0210 12:22:00.327944       1 namespace_controller.go:202] "Starting namespace controller" logger="namespace-controller"
	I0210 12:23:14.223015    5644 command_runner.go:130] ! I0210 12:22:00.330591       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0210 12:23:14.223048    5644 command_runner.go:130] ! I0210 12:22:00.327092       1 controllermanager.go:765] "Started controller" controller="namespace-controller"
	I0210 12:23:14.223092    5644 command_runner.go:130] ! I0210 12:22:00.346573       1 controllermanager.go:765] "Started controller" controller="disruption-controller"
	I0210 12:23:14.223092    5644 command_runner.go:130] ! I0210 12:22:00.346887       1 disruption.go:452] "Sending events to api server." logger="disruption-controller"
	I0210 12:23:14.223138    5644 command_runner.go:130] ! I0210 12:22:00.347031       1 disruption.go:463] "Starting disruption controller" logger="disruption-controller"
	I0210 12:23:14.223138    5644 command_runner.go:130] ! I0210 12:22:00.347049       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0210 12:23:14.223185    5644 command_runner.go:130] ! I0210 12:22:00.351852       1 controllermanager.go:765] "Started controller" controller="ttl-controller"
	I0210 12:23:14.223185    5644 command_runner.go:130] ! I0210 12:22:00.351879       1 controllermanager.go:723] "Skipping a cloud provider controller" controller="service-lb-controller"
	I0210 12:23:14.223230    5644 command_runner.go:130] ! I0210 12:22:00.351888       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="volumeattributesclass-protection-controller" requiredFeatureGates=["VolumeAttributesClass"]
	I0210 12:23:14.223230    5644 command_runner.go:130] ! I0210 12:22:00.354359       1 ttl_controller.go:127] "Starting TTL controller" logger="ttl-controller"
	I0210 12:23:14.223278    5644 command_runner.go:130] ! I0210 12:22:00.354950       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0210 12:23:14.223278    5644 command_runner.go:130] ! I0210 12:22:00.356835       1 controllermanager.go:765] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0210 12:23:14.223323    5644 command_runner.go:130] ! I0210 12:22:00.356898       1 publisher.go:107] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0210 12:23:14.223323    5644 command_runner.go:130] ! I0210 12:22:00.357416       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0210 12:23:14.223370    5644 command_runner.go:130] ! I0210 12:22:00.366037       1 controllermanager.go:765] "Started controller" controller="ephemeral-volume-controller"
	I0210 12:23:14.223370    5644 command_runner.go:130] ! I0210 12:22:00.367715       1 controller.go:173] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0210 12:23:14.223414    5644 command_runner.go:130] ! I0210 12:22:00.367737       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0210 12:23:14.223414    5644 command_runner.go:130] ! I0210 12:22:00.403903       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0210 12:23:14.223460    5644 command_runner.go:130] ! I0210 12:22:00.403962       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0210 12:23:14.223460    5644 command_runner.go:130] ! I0210 12:22:00.403986       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0210 12:23:14.223504    5644 command_runner.go:130] ! W0210 12:22:00.404002       1 shared_informer.go:597] resyncPeriod 20h28m18.826536572s is smaller than resyncCheckPeriod 23h57m31.932623877s and the informer has already started. Changing it to 23h57m31.932623877s
	I0210 12:23:14.223551    5644 command_runner.go:130] ! I0210 12:22:00.404054       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0210 12:23:14.223596    5644 command_runner.go:130] ! I0210 12:22:00.404070       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0210 12:23:14.223643    5644 command_runner.go:130] ! I0210 12:22:00.404083       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0210 12:23:14.223643    5644 command_runner.go:130] ! I0210 12:22:00.404215       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0210 12:23:14.223687    5644 command_runner.go:130] ! I0210 12:22:00.404325       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0210 12:23:14.223728    5644 command_runner.go:130] ! I0210 12:22:00.404361       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0210 12:23:14.223766    5644 command_runner.go:130] ! W0210 12:22:00.404375       1 shared_informer.go:597] resyncPeriod 19h58m52.828542411s is smaller than resyncCheckPeriod 23h57m31.932623877s and the informer has already started. Changing it to 23h57m31.932623877s
	I0210 12:23:14.223793    5644 command_runner.go:130] ! I0210 12:22:00.404428       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0210 12:23:14.223833    5644 command_runner.go:130] ! I0210 12:22:00.404501       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0210 12:23:14.223874    5644 command_runner.go:130] ! I0210 12:22:00.404548       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0210 12:23:14.223960    5644 command_runner.go:130] ! I0210 12:22:00.404581       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0210 12:23:14.223993    5644 command_runner.go:130] ! I0210 12:22:00.404616       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0210 12:23:14.224043    5644 command_runner.go:130] ! I0210 12:22:00.405026       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0210 12:23:14.224043    5644 command_runner.go:130] ! I0210 12:22:00.405085       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0210 12:23:14.224074    5644 command_runner.go:130] ! I0210 12:22:00.405102       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0210 12:23:14.224143    5644 command_runner.go:130] ! I0210 12:22:00.405117       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0210 12:23:14.224176    5644 command_runner.go:130] ! I0210 12:22:00.405133       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0210 12:23:14.224176    5644 command_runner.go:130] ! I0210 12:22:00.405155       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0210 12:23:14.224224    5644 command_runner.go:130] ! I0210 12:22:00.407446       1 controllermanager.go:765] "Started controller" controller="resourcequota-controller"
	I0210 12:23:14.224270    5644 command_runner.go:130] ! I0210 12:22:00.407747       1 resource_quota_controller.go:300] "Starting resource quota controller" logger="resourcequota-controller"
	I0210 12:23:14.224270    5644 command_runner.go:130] ! I0210 12:22:00.407814       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0210 12:23:14.224300    5644 command_runner.go:130] ! I0210 12:22:00.408146       1 resource_quota_monitor.go:308] "QuotaMonitor running" logger="resourcequota-controller"
	I0210 12:23:14.224318    5644 command_runner.go:130] ! I0210 12:22:00.416214       1 controllermanager.go:765] "Started controller" controller="cronjob-controller"
	I0210 12:23:14.224351    5644 command_runner.go:130] ! I0210 12:22:00.416425       1 controllermanager.go:723] "Skipping a cloud provider controller" controller="node-route-controller"
	I0210 12:23:14.224351    5644 command_runner.go:130] ! I0210 12:22:00.417001       1 cronjob_controllerv2.go:145] "Starting cronjob controller v2" logger="cronjob-controller"
	I0210 12:23:14.224351    5644 command_runner.go:130] ! I0210 12:22:00.418614       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0210 12:23:14.224400    5644 command_runner.go:130] ! I0210 12:22:00.448143       1 controllermanager.go:765] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0210 12:23:14.224400    5644 command_runner.go:130] ! I0210 12:22:00.448205       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0210 12:23:14.224400    5644 command_runner.go:130] ! I0210 12:22:00.453507       1 pvc_protection_controller.go:168] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0210 12:23:14.224534    5644 command_runner.go:130] ! I0210 12:22:00.453526       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0210 12:23:14.224534    5644 command_runner.go:130] ! I0210 12:22:00.457427       1 controllermanager.go:765] "Started controller" controller="pod-garbage-collector-controller"
	I0210 12:23:14.224534    5644 command_runner.go:130] ! I0210 12:22:00.457525       1 gc_controller.go:99] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0210 12:23:14.224534    5644 command_runner.go:130] ! I0210 12:22:00.457536       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0210 12:23:14.224534    5644 command_runner.go:130] ! I0210 12:22:00.461217       1 controllermanager.go:765] "Started controller" controller="job-controller"
	I0210 12:23:14.224534    5644 command_runner.go:130] ! I0210 12:22:00.461528       1 job_controller.go:243] "Starting job controller" logger="job-controller"
	I0210 12:23:14.224534    5644 command_runner.go:130] ! I0210 12:22:00.461540       1 shared_informer.go:313] Waiting for caches to sync for job
	I0210 12:23:14.224534    5644 command_runner.go:130] ! I0210 12:22:00.473609       1 controllermanager.go:765] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0210 12:23:14.224534    5644 command_runner.go:130] ! I0210 12:22:00.473750       1 horizontal.go:201] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0210 12:23:14.224534    5644 command_runner.go:130] ! I0210 12:22:00.476529       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0210 12:23:14.224534    5644 command_runner.go:130] ! I0210 12:22:00.478245       1 controllermanager.go:765] "Started controller" controller="persistentvolume-expander-controller"
	I0210 12:23:14.224534    5644 command_runner.go:130] ! I0210 12:22:00.478384       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0210 12:23:14.224534    5644 command_runner.go:130] ! I0210 12:22:00.478413       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0210 12:23:14.224534    5644 command_runner.go:130] ! I0210 12:22:00.486564       1 controllermanager.go:765] "Started controller" controller="ttl-after-finished-controller"
	I0210 12:23:14.224534    5644 command_runner.go:130] ! I0210 12:22:00.490692       1 ttlafterfinished_controller.go:112] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0210 12:23:14.224534    5644 command_runner.go:130] ! I0210 12:22:00.490721       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0210 12:23:14.224534    5644 command_runner.go:130] ! I0210 12:22:00.491067       1 controllermanager.go:765] "Started controller" controller="endpointslice-controller"
	I0210 12:23:14.224534    5644 command_runner.go:130] ! I0210 12:22:00.491429       1 endpointslice_controller.go:281] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0210 12:23:14.224534    5644 command_runner.go:130] ! I0210 12:22:00.492232       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0210 12:23:14.224534    5644 command_runner.go:130] ! I0210 12:22:00.495646       1 controllermanager.go:765] "Started controller" controller="endpointslice-mirroring-controller"
	I0210 12:23:14.224534    5644 command_runner.go:130] ! I0210 12:22:00.500509       1 endpointslicemirroring_controller.go:227] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0210 12:23:14.224534    5644 command_runner.go:130] ! I0210 12:22:00.500524       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0210 12:23:14.224534    5644 command_runner.go:130] ! I0210 12:22:00.515593       1 controllermanager.go:765] "Started controller" controller="garbage-collector-controller"
	I0210 12:23:14.224534    5644 command_runner.go:130] ! I0210 12:22:00.515770       1 garbagecollector.go:144] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0210 12:23:14.224534    5644 command_runner.go:130] ! I0210 12:22:00.515782       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0210 12:23:14.224534    5644 command_runner.go:130] ! I0210 12:22:00.515950       1 graph_builder.go:351] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0210 12:23:14.224534    5644 command_runner.go:130] ! I0210 12:22:00.525570       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0210 12:23:14.224534    5644 command_runner.go:130] ! I0210 12:22:00.525594       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0210 12:23:14.224534    5644 command_runner.go:130] ! I0210 12:22:00.525618       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0210 12:23:14.224534    5644 command_runner.go:130] ! I0210 12:22:00.525997       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0210 12:23:14.224534    5644 command_runner.go:130] ! I0210 12:22:00.526011       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0210 12:23:14.224534    5644 command_runner.go:130] ! I0210 12:22:00.526038       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0210 12:23:14.225062    5644 command_runner.go:130] ! I0210 12:22:00.526889       1 controllermanager.go:765] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0210 12:23:14.225062    5644 command_runner.go:130] ! I0210 12:22:00.526935       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0210 12:23:14.225062    5644 command_runner.go:130] ! I0210 12:22:00.526945       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0210 12:23:14.225062    5644 command_runner.go:130] ! I0210 12:22:00.526972       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0210 12:23:14.225114    5644 command_runner.go:130] ! I0210 12:22:00.526980       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0210 12:23:14.225148    5644 command_runner.go:130] ! I0210 12:22:00.527008       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0210 12:23:14.225170    5644 command_runner.go:130] ! I0210 12:22:00.527135       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0210 12:23:14.225198    5644 command_runner.go:130] ! W0210 12:22:00.695736       1 type.go:183] The watchlist request for nodes ended with an error, falling back to the standard LIST semantics, err = nodes is forbidden: User "system:serviceaccount:kube-system:node-controller" cannot watch resource "nodes" in API group "" at the cluster scope
	I0210 12:23:14.225260    5644 command_runner.go:130] ! I0210 12:22:00.710455       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0210 12:23:14.225288    5644 command_runner.go:130] ! I0210 12:22:00.710510       1 controllermanager.go:765] "Started controller" controller="node-ipam-controller"
	I0210 12:23:14.225288    5644 command_runner.go:130] ! I0210 12:22:00.710723       1 node_ipam_controller.go:141] "Starting ipam controller" logger="node-ipam-controller"
	I0210 12:23:14.225322    5644 command_runner.go:130] ! I0210 12:22:00.710737       1 shared_informer.go:313] Waiting for caches to sync for node
	I0210 12:23:14.225322    5644 command_runner.go:130] ! I0210 12:22:00.739126       1 node_lifecycle_controller.go:432] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0210 12:23:14.225361    5644 command_runner.go:130] ! I0210 12:22:00.739307       1 controllermanager.go:765] "Started controller" controller="node-lifecycle-controller"
	I0210 12:23:14.225361    5644 command_runner.go:130] ! I0210 12:22:00.739552       1 node_lifecycle_controller.go:466] "Sending events to api server" logger="node-lifecycle-controller"
	I0210 12:23:14.225396    5644 command_runner.go:130] ! I0210 12:22:00.739769       1 node_lifecycle_controller.go:477] "Starting node controller" logger="node-lifecycle-controller"
	I0210 12:23:14.225435    5644 command_runner.go:130] ! I0210 12:22:00.739879       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0210 12:23:14.225435    5644 command_runner.go:130] ! I0210 12:22:00.790336       1 controllermanager.go:765] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0210 12:23:14.225470    5644 command_runner.go:130] ! I0210 12:22:00.790542       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0210 12:23:14.225509    5644 command_runner.go:130] ! I0210 12:22:00.790764       1 attach_detach_controller.go:338] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0210 12:23:14.225509    5644 command_runner.go:130] ! I0210 12:22:00.790827       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0210 12:23:14.225544    5644 command_runner.go:130] ! I0210 12:22:00.837132       1 controllermanager.go:765] "Started controller" controller="endpoints-controller"
	I0210 12:23:14.225544    5644 command_runner.go:130] ! I0210 12:22:00.837610       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="selinux-warning-controller" requiredFeatureGates=["SELinuxChangePolicy"]
	I0210 12:23:14.225584    5644 command_runner.go:130] ! I0210 12:22:00.838001       1 endpoints_controller.go:182] "Starting endpoint controller" logger="endpoints-controller"
	I0210 12:23:14.225584    5644 command_runner.go:130] ! I0210 12:22:00.838149       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0210 12:23:14.225618    5644 command_runner.go:130] ! I0210 12:22:00.889036       1 controllermanager.go:765] "Started controller" controller="daemonset-controller"
	I0210 12:23:14.225658    5644 command_runner.go:130] ! I0210 12:22:00.889446       1 daemon_controller.go:294] "Starting daemon sets controller" logger="daemonset-controller"
	I0210 12:23:14.225658    5644 command_runner.go:130] ! I0210 12:22:00.889702       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0210 12:23:14.225692    5644 command_runner.go:130] ! I0210 12:22:00.947566       1 controllermanager.go:765] "Started controller" controller="token-cleaner-controller"
	I0210 12:23:14.225692    5644 command_runner.go:130] ! I0210 12:22:00.947979       1 tokencleaner.go:117] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0210 12:23:14.225731    5644 command_runner.go:130] ! I0210 12:22:00.948130       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0210 12:23:14.225731    5644 command_runner.go:130] ! I0210 12:22:00.948247       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0210 12:23:14.225766    5644 command_runner.go:130] ! I0210 12:22:00.998978       1 controllermanager.go:765] "Started controller" controller="persistentvolume-binder-controller"
	I0210 12:23:14.225805    5644 command_runner.go:130] ! I0210 12:22:00.999002       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="kube-apiserver-serving-clustertrustbundle-publisher-controller" requiredFeatureGates=["ClusterTrustBundle"]
	I0210 12:23:14.225805    5644 command_runner.go:130] ! I0210 12:22:00.999105       1 pv_controller_base.go:308] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0210 12:23:14.225841    5644 command_runner.go:130] ! I0210 12:22:00.999117       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0210 12:23:14.225841    5644 command_runner.go:130] ! I0210 12:22:01.040388       1 controllermanager.go:765] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0210 12:23:14.225880    5644 command_runner.go:130] ! I0210 12:22:01.040661       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0210 12:23:14.225916    5644 command_runner.go:130] ! I0210 12:22:01.041004       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0210 12:23:14.225916    5644 command_runner.go:130] ! I0210 12:22:01.087635       1 controllermanager.go:765] "Started controller" controller="taint-eviction-controller"
	I0210 12:23:14.225955    5644 command_runner.go:130] ! I0210 12:22:01.088431       1 controllermanager.go:723] "Skipping a cloud provider controller" controller="cloud-node-lifecycle-controller"
	I0210 12:23:14.225955    5644 command_runner.go:130] ! I0210 12:22:01.088403       1 taint_eviction.go:281] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0210 12:23:14.225990    5644 command_runner.go:130] ! I0210 12:22:01.088651       1 taint_eviction.go:287] "Sending events to api server" logger="taint-eviction-controller"
	I0210 12:23:14.225990    5644 command_runner.go:130] ! I0210 12:22:01.088700       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0210 12:23:14.226029    5644 command_runner.go:130] ! I0210 12:22:01.140802       1 controllermanager.go:765] "Started controller" controller="clusterrole-aggregation-controller"
	I0210 12:23:14.226029    5644 command_runner.go:130] ! I0210 12:22:01.140881       1 clusterroleaggregation_controller.go:194] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0210 12:23:14.226064    5644 command_runner.go:130] ! I0210 12:22:01.140893       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0210 12:23:14.226104    5644 command_runner.go:130] ! I0210 12:22:01.188353       1 controllermanager.go:765] "Started controller" controller="persistentvolume-protection-controller"
	I0210 12:23:14.226139    5644 command_runner.go:130] ! I0210 12:22:01.188708       1 controllermanager.go:743] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0210 12:23:14.226139    5644 command_runner.go:130] ! I0210 12:22:01.188662       1 pv_protection_controller.go:81] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0210 12:23:14.226179    5644 command_runner.go:130] ! I0210 12:22:01.189570       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0210 12:23:14.226179    5644 command_runner.go:130] ! I0210 12:22:01.238308       1 controllermanager.go:765] "Started controller" controller="replicationcontroller-controller"
	I0210 12:23:14.226214    5644 command_runner.go:130] ! I0210 12:22:01.239287       1 replica_set.go:217] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0210 12:23:14.226214    5644 command_runner.go:130] ! I0210 12:22:01.239614       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0210 12:23:14.226253    5644 command_runner.go:130] ! I0210 12:22:01.290486       1 controllermanager.go:765] "Started controller" controller="statefulset-controller"
	I0210 12:23:14.226275    5644 command_runner.go:130] ! I0210 12:22:01.297980       1 stateful_set.go:166] "Starting stateful set controller" logger="statefulset-controller"
	I0210 12:23:14.226275    5644 command_runner.go:130] ! I0210 12:22:01.298004       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0210 12:23:14.226275    5644 command_runner.go:130] ! I0210 12:22:01.330472       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0210 12:23:14.226275    5644 command_runner.go:130] ! I0210 12:22:01.360391       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0210 12:23:14.226275    5644 command_runner.go:130] ! I0210 12:22:01.379524       1 shared_informer.go:320] Caches are synced for expand
	I0210 12:23:14.226275    5644 command_runner.go:130] ! I0210 12:22:01.412039       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0210 12:23:14.226275    5644 command_runner.go:130] ! I0210 12:22:01.427926       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-032400\" does not exist"
	I0210 12:23:14.226275    5644 command_runner.go:130] ! I0210 12:22:01.429792       1 shared_informer.go:320] Caches are synced for cronjob
	I0210 12:23:14.226275    5644 command_runner.go:130] ! I0210 12:22:01.431083       1 shared_informer.go:320] Caches are synced for namespace
	I0210 12:23:14.226275    5644 command_runner.go:130] ! I0210 12:22:01.433127       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-032400-m02\" does not exist"
	I0210 12:23:14.226275    5644 command_runner.go:130] ! I0210 12:22:01.438586       1 shared_informer.go:320] Caches are synced for endpoint
	I0210 12:23:14.226275    5644 command_runner.go:130] ! I0210 12:22:01.455792       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-032400-m03\" does not exist"
	I0210 12:23:14.226275    5644 command_runner.go:130] ! I0210 12:22:01.443963       1 shared_informer.go:320] Caches are synced for deployment
	I0210 12:23:14.226275    5644 command_runner.go:130] ! I0210 12:22:01.458494       1 shared_informer.go:320] Caches are synced for GC
	I0210 12:23:14.226275    5644 command_runner.go:130] ! I0210 12:22:01.458605       1 shared_informer.go:320] Caches are synced for crt configmap
	I0210 12:23:14.226275    5644 command_runner.go:130] ! I0210 12:22:01.462564       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0210 12:23:14.226275    5644 command_runner.go:130] ! I0210 12:22:01.463137       1 shared_informer.go:320] Caches are synced for job
	I0210 12:23:14.226275    5644 command_runner.go:130] ! I0210 12:22:01.470663       1 shared_informer.go:320] Caches are synced for garbage collector
	I0210 12:23:14.226275    5644 command_runner.go:130] ! I0210 12:22:01.454359       1 shared_informer.go:320] Caches are synced for PVC protection
	I0210 12:23:14.226275    5644 command_runner.go:130] ! I0210 12:22:01.454660       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0210 12:23:14.226275    5644 command_runner.go:130] ! I0210 12:22:01.454672       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0210 12:23:14.226275    5644 command_runner.go:130] ! I0210 12:22:01.454682       1 shared_informer.go:320] Caches are synced for disruption
	I0210 12:23:14.226275    5644 command_runner.go:130] ! I0210 12:22:01.455335       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0210 12:23:14.226275    5644 command_runner.go:130] ! I0210 12:22:01.455353       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0210 12:23:14.226275    5644 command_runner.go:130] ! I0210 12:22:01.455645       1 shared_informer.go:320] Caches are synced for TTL
	I0210 12:23:14.226275    5644 command_runner.go:130] ! I0210 12:22:01.455857       1 shared_informer.go:320] Caches are synced for taint
	I0210 12:23:14.226275    5644 command_runner.go:130] ! I0210 12:22:01.479260       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0210 12:23:14.226275    5644 command_runner.go:130] ! I0210 12:22:01.455957       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0210 12:23:14.226275    5644 command_runner.go:130] ! I0210 12:22:01.480860       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0210 12:23:14.226275    5644 command_runner.go:130] ! I0210 12:22:01.471787       1 shared_informer.go:320] Caches are synced for ephemeral
	I0210 12:23:14.226275    5644 command_runner.go:130] ! I0210 12:22:01.488921       1 shared_informer.go:320] Caches are synced for HPA
	I0210 12:23:14.226275    5644 command_runner.go:130] ! I0210 12:22:01.489141       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0210 12:23:14.226275    5644 command_runner.go:130] ! I0210 12:22:01.489425       1 shared_informer.go:320] Caches are synced for service account
	I0210 12:23:14.226275    5644 command_runner.go:130] ! I0210 12:22:01.489837       1 shared_informer.go:320] Caches are synced for PV protection
	I0210 12:23:14.226275    5644 command_runner.go:130] ! I0210 12:22:01.490060       1 shared_informer.go:320] Caches are synced for daemon sets
	I0210 12:23:14.226275    5644 command_runner.go:130] ! I0210 12:22:01.492366       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0210 12:23:14.226275    5644 command_runner.go:130] ! I0210 12:22:01.492536       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0210 12:23:14.226275    5644 command_runner.go:130] ! I0210 12:22:01.492675       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-032400-m02"
	I0210 12:23:14.226275    5644 command_runner.go:130] ! I0210 12:22:01.492787       1 shared_informer.go:320] Caches are synced for attach detach
	I0210 12:23:14.226275    5644 command_runner.go:130] ! I0210 12:22:01.498224       1 shared_informer.go:320] Caches are synced for stateful set
	I0210 12:23:14.226275    5644 command_runner.go:130] ! I0210 12:22:01.499494       1 shared_informer.go:320] Caches are synced for persistent volume
	I0210 12:23:14.226275    5644 command_runner.go:130] ! I0210 12:22:01.515907       1 shared_informer.go:320] Caches are synced for garbage collector
	I0210 12:23:14.226799    5644 command_runner.go:130] ! I0210 12:22:01.518475       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0210 12:23:14.226799    5644 command_runner.go:130] ! I0210 12:22:01.518619       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0210 12:23:14.226799    5644 command_runner.go:130] ! I0210 12:22:01.517754       1 shared_informer.go:320] Caches are synced for node
	I0210 12:23:14.226859    5644 command_runner.go:130] ! I0210 12:22:01.519209       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0210 12:23:14.226859    5644 command_runner.go:130] ! I0210 12:22:01.519352       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0210 12:23:14.226897    5644 command_runner.go:130] ! I0210 12:22:01.517867       1 shared_informer.go:320] Caches are synced for resource quota
	I0210 12:23:14.226897    5644 command_runner.go:130] ! I0210 12:22:01.521228       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0210 12:23:14.226897    5644 command_runner.go:130] ! I0210 12:22:01.521505       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0210 12:23:14.226897    5644 command_runner.go:130] ! I0210 12:22:01.521662       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400"
	I0210 12:23:14.226897    5644 command_runner.go:130] ! I0210 12:22:01.521756       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:23:14.226897    5644 command_runner.go:130] ! I0210 12:22:01.521924       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:14.226897    5644 command_runner.go:130] ! I0210 12:22:01.522649       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-032400-m02"
	I0210 12:23:14.226897    5644 command_runner.go:130] ! I0210 12:22:01.522926       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-032400-m03"
	I0210 12:23:14.226897    5644 command_runner.go:130] ! I0210 12:22:01.523055       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-032400"
	I0210 12:23:14.226897    5644 command_runner.go:130] ! I0210 12:22:01.522650       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:14.226897    5644 command_runner.go:130] ! I0210 12:22:01.523304       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0210 12:23:14.226897    5644 command_runner.go:130] ! I0210 12:22:01.526544       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0210 12:23:14.226897    5644 command_runner.go:130] ! I0210 12:22:01.526740       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0210 12:23:14.226897    5644 command_runner.go:130] ! I0210 12:22:01.527233       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0210 12:23:14.226897    5644 command_runner.go:130] ! I0210 12:22:01.527235       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0210 12:23:14.226897    5644 command_runner.go:130] ! I0210 12:22:01.531258       1 shared_informer.go:320] Caches are synced for resource quota
	I0210 12:23:14.226897    5644 command_runner.go:130] ! I0210 12:22:01.620608       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:23:14.226897    5644 command_runner.go:130] ! I0210 12:22:01.660535       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="183.150017ms"
	I0210 12:23:14.226897    5644 command_runner.go:130] ! I0210 12:22:01.660786       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="196.91µs"
	I0210 12:23:14.226897    5644 command_runner.go:130] ! I0210 12:22:01.669840       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="192.074947ms"
	I0210 12:23:14.226897    5644 command_runner.go:130] ! I0210 12:22:01.679112       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="47.103µs"
	I0210 12:23:14.226897    5644 command_runner.go:130] ! I0210 12:22:11.608842       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400"
	I0210 12:23:14.226897    5644 command_runner.go:130] ! I0210 12:22:49.026601       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-032400-m02"
	I0210 12:23:14.226897    5644 command_runner.go:130] ! I0210 12:22:49.027936       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400"
	I0210 12:23:14.226897    5644 command_runner.go:130] ! I0210 12:22:49.051398       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400"
	I0210 12:23:14.226897    5644 command_runner.go:130] ! I0210 12:22:51.552649       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:23:14.226897    5644 command_runner.go:130] ! I0210 12:22:51.561524       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400"
	I0210 12:23:14.226897    5644 command_runner.go:130] ! I0210 12:22:51.579437       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:23:14.226897    5644 command_runner.go:130] ! I0210 12:22:51.629083       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="17.615623ms"
	I0210 12:23:14.226897    5644 command_runner.go:130] ! I0210 12:22:51.629955       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="714.433µs"
	I0210 12:23:14.226897    5644 command_runner.go:130] ! I0210 12:22:56.656809       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:23:14.226897    5644 command_runner.go:130] ! I0210 12:23:04.379320       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="10.532877ms"
	I0210 12:23:14.226897    5644 command_runner.go:130] ! I0210 12:23:04.379580       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="104.602µs"
	I0210 12:23:14.227427    5644 command_runner.go:130] ! I0210 12:23:04.418725       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="58.001µs"
	I0210 12:23:14.227476    5644 command_runner.go:130] ! I0210 12:23:04.463938       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="16.341175ms"
	I0210 12:23:14.227476    5644 command_runner.go:130] ! I0210 12:23:04.464695       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="51.6µs"
	I0210 12:23:14.243827    5644 logs.go:123] Gathering logs for describe nodes ...
	I0210 12:23:14.243827    5644 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0210 12:23:14.436064    5644 command_runner.go:130] > Name:               multinode-032400
	I0210 12:23:14.436064    5644 command_runner.go:130] > Roles:              control-plane
	I0210 12:23:14.436064    5644 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0210 12:23:14.436064    5644 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0210 12:23:14.436064    5644 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0210 12:23:14.436064    5644 command_runner.go:130] >                     kubernetes.io/hostname=multinode-032400
	I0210 12:23:14.436064    5644 command_runner.go:130] >                     kubernetes.io/os=linux
	I0210 12:23:14.436064    5644 command_runner.go:130] >                     minikube.k8s.io/commit=a597502568cd649748018b4cfeb698a4b8b36160
	I0210 12:23:14.436273    5644 command_runner.go:130] >                     minikube.k8s.io/name=multinode-032400
	I0210 12:23:14.436273    5644 command_runner.go:130] >                     minikube.k8s.io/primary=true
	I0210 12:23:14.436273    5644 command_runner.go:130] >                     minikube.k8s.io/updated_at=2025_02_10T11_59_08_0700
	I0210 12:23:14.436273    5644 command_runner.go:130] >                     minikube.k8s.io/version=v1.35.0
	I0210 12:23:14.436341    5644 command_runner.go:130] >                     node-role.kubernetes.io/control-plane=
	I0210 12:23:14.436341    5644 command_runner.go:130] >                     node.kubernetes.io/exclude-from-external-load-balancers=
	I0210 12:23:14.436341    5644 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0210 12:23:14.436341    5644 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0210 12:23:14.436341    5644 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0210 12:23:14.436341    5644 command_runner.go:130] > CreationTimestamp:  Mon, 10 Feb 2025 11:59:02 +0000
	I0210 12:23:14.436341    5644 command_runner.go:130] > Taints:             <none>
	I0210 12:23:14.436433    5644 command_runner.go:130] > Unschedulable:      false
	I0210 12:23:14.436433    5644 command_runner.go:130] > Lease:
	I0210 12:23:14.436537    5644 command_runner.go:130] >   HolderIdentity:  multinode-032400
	I0210 12:23:14.436537    5644 command_runner.go:130] >   AcquireTime:     <unset>
	I0210 12:23:14.436537    5644 command_runner.go:130] >   RenewTime:       Mon, 10 Feb 2025 12:23:09 +0000
	I0210 12:23:14.436537    5644 command_runner.go:130] > Conditions:
	I0210 12:23:14.436537    5644 command_runner.go:130] >   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	I0210 12:23:14.436537    5644 command_runner.go:130] >   ----             ------  -----------------                 ------------------                ------                       -------
	I0210 12:23:14.436644    5644 command_runner.go:130] >   MemoryPressure   False   Mon, 10 Feb 2025 12:22:48 +0000   Mon, 10 Feb 2025 11:59:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	I0210 12:23:14.436644    5644 command_runner.go:130] >   DiskPressure     False   Mon, 10 Feb 2025 12:22:48 +0000   Mon, 10 Feb 2025 11:59:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	I0210 12:23:14.436644    5644 command_runner.go:130] >   PIDPressure      False   Mon, 10 Feb 2025 12:22:48 +0000   Mon, 10 Feb 2025 11:59:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	I0210 12:23:14.436644    5644 command_runner.go:130] >   Ready            True    Mon, 10 Feb 2025 12:22:48 +0000   Mon, 10 Feb 2025 12:22:48 +0000   KubeletReady                 kubelet is posting ready status
	I0210 12:23:14.436644    5644 command_runner.go:130] > Addresses:
	I0210 12:23:14.436644    5644 command_runner.go:130] >   InternalIP:  172.29.129.181
	I0210 12:23:14.436644    5644 command_runner.go:130] >   Hostname:    multinode-032400
	I0210 12:23:14.436644    5644 command_runner.go:130] > Capacity:
	I0210 12:23:14.436644    5644 command_runner.go:130] >   cpu:                2
	I0210 12:23:14.436769    5644 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0210 12:23:14.436769    5644 command_runner.go:130] >   hugepages-2Mi:      0
	I0210 12:23:14.436769    5644 command_runner.go:130] >   memory:             2164264Ki
	I0210 12:23:14.436769    5644 command_runner.go:130] >   pods:               110
	I0210 12:23:14.436769    5644 command_runner.go:130] > Allocatable:
	I0210 12:23:14.436769    5644 command_runner.go:130] >   cpu:                2
	I0210 12:23:14.436769    5644 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0210 12:23:14.436769    5644 command_runner.go:130] >   hugepages-2Mi:      0
	I0210 12:23:14.436769    5644 command_runner.go:130] >   memory:             2164264Ki
	I0210 12:23:14.436769    5644 command_runner.go:130] >   pods:               110
	I0210 12:23:14.436769    5644 command_runner.go:130] > System Info:
	I0210 12:23:14.436769    5644 command_runner.go:130] >   Machine ID:                 bd9d6a2450e6476eba748aa8fab044bb
	I0210 12:23:14.436769    5644 command_runner.go:130] >   System UUID:                43aa2284-9342-094e-a67f-da5d9d45fabd
	I0210 12:23:14.436769    5644 command_runner.go:130] >   Boot ID:                    f18b3f97-a44b-4ae9-ad96-af2bf29d6df2
	I0210 12:23:14.436877    5644 command_runner.go:130] >   Kernel Version:             5.10.207
	I0210 12:23:14.436877    5644 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0210 12:23:14.436901    5644 command_runner.go:130] >   Operating System:           linux
	I0210 12:23:14.436901    5644 command_runner.go:130] >   Architecture:               amd64
	I0210 12:23:14.436901    5644 command_runner.go:130] >   Container Runtime Version:  docker://27.4.0
	I0210 12:23:14.436985    5644 command_runner.go:130] >   Kubelet Version:            v1.32.1
	I0210 12:23:14.436985    5644 command_runner.go:130] >   Kube-Proxy Version:         v1.32.1
	I0210 12:23:14.436985    5644 command_runner.go:130] > PodCIDR:                      10.244.0.0/24
	I0210 12:23:14.437050    5644 command_runner.go:130] > PodCIDRs:                     10.244.0.0/24
	I0210 12:23:14.437050    5644 command_runner.go:130] > Non-terminated Pods:          (9 in total)
	I0210 12:23:14.437050    5644 command_runner.go:130] >   Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0210 12:23:14.437091    5644 command_runner.go:130] >   ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	I0210 12:23:14.437091    5644 command_runner.go:130] >   default                     busybox-58667487b6-8shfg                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	I0210 12:23:14.437091    5644 command_runner.go:130] >   kube-system                 coredns-668d6bf9bc-w8rr9                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     24m
	I0210 12:23:14.437198    5644 command_runner.go:130] >   kube-system                 etcd-multinode-032400                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         76s
	I0210 12:23:14.437198    5644 command_runner.go:130] >   kube-system                 kindnet-c2mb8                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      24m
	I0210 12:23:14.437198    5644 command_runner.go:130] >   kube-system                 kube-apiserver-multinode-032400             250m (12%)    0 (0%)      0 (0%)           0 (0%)         76s
	I0210 12:23:14.437198    5644 command_runner.go:130] >   kube-system                 kube-controller-manager-multinode-032400    200m (10%)    0 (0%)      0 (0%)           0 (0%)         24m
	I0210 12:23:14.437198    5644 command_runner.go:130] >   kube-system                 kube-proxy-rrh82                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	I0210 12:23:14.437291    5644 command_runner.go:130] >   kube-system                 kube-scheduler-multinode-032400             100m (5%)     0 (0%)      0 (0%)           0 (0%)         24m
	I0210 12:23:14.437291    5644 command_runner.go:130] >   kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	I0210 12:23:14.437291    5644 command_runner.go:130] > Allocated resources:
	I0210 12:23:14.437291    5644 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0210 12:23:14.437291    5644 command_runner.go:130] >   Resource           Requests     Limits
	I0210 12:23:14.437291    5644 command_runner.go:130] >   --------           --------     ------
	I0210 12:23:14.437291    5644 command_runner.go:130] >   cpu                850m (42%)   100m (5%)
	I0210 12:23:14.437291    5644 command_runner.go:130] >   memory             220Mi (10%)  220Mi (10%)
	I0210 12:23:14.437291    5644 command_runner.go:130] >   ephemeral-storage  0 (0%)       0 (0%)
	I0210 12:23:14.437426    5644 command_runner.go:130] >   hugepages-2Mi      0 (0%)       0 (0%)
	I0210 12:23:14.437426    5644 command_runner.go:130] > Events:
	I0210 12:23:14.437426    5644 command_runner.go:130] >   Type     Reason                   Age                From             Message
	I0210 12:23:14.437426    5644 command_runner.go:130] >   ----     ------                   ----               ----             -------
	I0210 12:23:14.437426    5644 command_runner.go:130] >   Normal   Starting                 23m                kube-proxy       
	I0210 12:23:14.437507    5644 command_runner.go:130] >   Normal   Starting                 73s                kube-proxy       
	I0210 12:23:14.437528    5644 command_runner.go:130] >   Normal   NodeHasSufficientMemory  24m (x8 over 24m)  kubelet          Node multinode-032400 status is now: NodeHasSufficientMemory
	I0210 12:23:14.437528    5644 command_runner.go:130] >   Normal   NodeHasNoDiskPressure    24m (x8 over 24m)  kubelet          Node multinode-032400 status is now: NodeHasNoDiskPressure
	I0210 12:23:14.437528    5644 command_runner.go:130] >   Normal   NodeHasSufficientPID     24m (x7 over 24m)  kubelet          Node multinode-032400 status is now: NodeHasSufficientPID
	I0210 12:23:14.437528    5644 command_runner.go:130] >   Normal   NodeHasSufficientPID     24m (x2 over 24m)  kubelet          Node multinode-032400 status is now: NodeHasSufficientPID
	I0210 12:23:14.437528    5644 command_runner.go:130] >   Normal   NodeHasSufficientMemory  24m (x2 over 24m)  kubelet          Node multinode-032400 status is now: NodeHasSufficientMemory
	I0210 12:23:14.437619    5644 command_runner.go:130] >   Normal   NodeHasNoDiskPressure    24m (x2 over 24m)  kubelet          Node multinode-032400 status is now: NodeHasNoDiskPressure
	I0210 12:23:14.437619    5644 command_runner.go:130] >   Normal   NodeAllocatableEnforced  24m                kubelet          Updated Node Allocatable limit across pods
	I0210 12:23:14.437619    5644 command_runner.go:130] >   Normal   Starting                 24m                kubelet          Starting kubelet.
	I0210 12:23:14.437619    5644 command_runner.go:130] >   Normal   RegisteredNode           24m                node-controller  Node multinode-032400 event: Registered Node multinode-032400 in Controller
	I0210 12:23:14.437619    5644 command_runner.go:130] >   Normal   NodeReady                23m                kubelet          Node multinode-032400 status is now: NodeReady
	I0210 12:23:14.437619    5644 command_runner.go:130] >   Normal   Starting                 82s                kubelet          Starting kubelet.
	I0210 12:23:14.437708    5644 command_runner.go:130] >   Normal   NodeAllocatableEnforced  82s                kubelet          Updated Node Allocatable limit across pods
	I0210 12:23:14.437708    5644 command_runner.go:130] >   Normal   NodeHasSufficientMemory  81s (x8 over 82s)  kubelet          Node multinode-032400 status is now: NodeHasSufficientMemory
	I0210 12:23:14.437708    5644 command_runner.go:130] >   Normal   NodeHasNoDiskPressure    81s (x8 over 82s)  kubelet          Node multinode-032400 status is now: NodeHasNoDiskPressure
	I0210 12:23:14.437753    5644 command_runner.go:130] >   Normal   NodeHasSufficientPID     81s (x7 over 82s)  kubelet          Node multinode-032400 status is now: NodeHasSufficientPID
	I0210 12:23:14.437753    5644 command_runner.go:130] >   Warning  Rebooted                 76s                kubelet          Node multinode-032400 has been rebooted, boot id: f18b3f97-a44b-4ae9-ad96-af2bf29d6df2
	I0210 12:23:14.437753    5644 command_runner.go:130] >   Normal   RegisteredNode           73s                node-controller  Node multinode-032400 event: Registered Node multinode-032400 in Controller
	I0210 12:23:14.437753    5644 command_runner.go:130] > Name:               multinode-032400-m02
	I0210 12:23:14.437753    5644 command_runner.go:130] > Roles:              <none>
	I0210 12:23:14.437828    5644 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0210 12:23:14.437828    5644 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0210 12:23:14.437828    5644 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0210 12:23:14.437828    5644 command_runner.go:130] >                     kubernetes.io/hostname=multinode-032400-m02
	I0210 12:23:14.437828    5644 command_runner.go:130] >                     kubernetes.io/os=linux
	I0210 12:23:14.437828    5644 command_runner.go:130] >                     minikube.k8s.io/commit=a597502568cd649748018b4cfeb698a4b8b36160
	I0210 12:23:14.437828    5644 command_runner.go:130] >                     minikube.k8s.io/name=multinode-032400
	I0210 12:23:14.437828    5644 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0210 12:23:14.437828    5644 command_runner.go:130] >                     minikube.k8s.io/updated_at=2025_02_10T12_02_24_0700
	I0210 12:23:14.437828    5644 command_runner.go:130] >                     minikube.k8s.io/version=v1.35.0
	I0210 12:23:14.437828    5644 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0210 12:23:14.437828    5644 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0210 12:23:14.437953    5644 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0210 12:23:14.437953    5644 command_runner.go:130] > CreationTimestamp:  Mon, 10 Feb 2025 12:02:24 +0000
	I0210 12:23:14.437975    5644 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0210 12:23:14.438008    5644 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0210 12:23:14.438008    5644 command_runner.go:130] > Unschedulable:      false
	I0210 12:23:14.438008    5644 command_runner.go:130] > Lease:
	I0210 12:23:14.438038    5644 command_runner.go:130] >   HolderIdentity:  multinode-032400-m02
	I0210 12:23:14.438038    5644 command_runner.go:130] >   AcquireTime:     <unset>
	I0210 12:23:14.438038    5644 command_runner.go:130] >   RenewTime:       Mon, 10 Feb 2025 12:18:56 +0000
	I0210 12:23:14.438038    5644 command_runner.go:130] > Conditions:
	I0210 12:23:14.438038    5644 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0210 12:23:14.438038    5644 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0210 12:23:14.438038    5644 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 10 Feb 2025 12:18:23 +0000   Mon, 10 Feb 2025 12:22:51 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0210 12:23:14.438129    5644 command_runner.go:130] >   DiskPressure     Unknown   Mon, 10 Feb 2025 12:18:23 +0000   Mon, 10 Feb 2025 12:22:51 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0210 12:23:14.438129    5644 command_runner.go:130] >   PIDPressure      Unknown   Mon, 10 Feb 2025 12:18:23 +0000   Mon, 10 Feb 2025 12:22:51 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0210 12:23:14.438129    5644 command_runner.go:130] >   Ready            Unknown   Mon, 10 Feb 2025 12:18:23 +0000   Mon, 10 Feb 2025 12:22:51 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0210 12:23:14.438129    5644 command_runner.go:130] > Addresses:
	I0210 12:23:14.438129    5644 command_runner.go:130] >   InternalIP:  172.29.143.51
	I0210 12:23:14.438129    5644 command_runner.go:130] >   Hostname:    multinode-032400-m02
	I0210 12:23:14.438129    5644 command_runner.go:130] > Capacity:
	I0210 12:23:14.438202    5644 command_runner.go:130] >   cpu:                2
	I0210 12:23:14.438202    5644 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0210 12:23:14.438202    5644 command_runner.go:130] >   hugepages-2Mi:      0
	I0210 12:23:14.438232    5644 command_runner.go:130] >   memory:             2164264Ki
	I0210 12:23:14.438232    5644 command_runner.go:130] >   pods:               110
	I0210 12:23:14.438232    5644 command_runner.go:130] > Allocatable:
	I0210 12:23:14.438232    5644 command_runner.go:130] >   cpu:                2
	I0210 12:23:14.438232    5644 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0210 12:23:14.438278    5644 command_runner.go:130] >   hugepages-2Mi:      0
	I0210 12:23:14.438278    5644 command_runner.go:130] >   memory:             2164264Ki
	I0210 12:23:14.438278    5644 command_runner.go:130] >   pods:               110
	I0210 12:23:14.438278    5644 command_runner.go:130] > System Info:
	I0210 12:23:14.438278    5644 command_runner.go:130] >   Machine ID:                 21b7ab6b12a24fe4a174e762e13ffd68
	I0210 12:23:14.438278    5644 command_runner.go:130] >   System UUID:                9f935ab2-d180-914c-86d6-dc00ab51b7e9
	I0210 12:23:14.438370    5644 command_runner.go:130] >   Boot ID:                    a25897cf-a5ca-424f-a707-4b03f1b1442d
	I0210 12:23:14.438370    5644 command_runner.go:130] >   Kernel Version:             5.10.207
	I0210 12:23:14.438370    5644 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0210 12:23:14.438370    5644 command_runner.go:130] >   Operating System:           linux
	I0210 12:23:14.438370    5644 command_runner.go:130] >   Architecture:               amd64
	I0210 12:23:14.438370    5644 command_runner.go:130] >   Container Runtime Version:  docker://27.4.0
	I0210 12:23:14.438370    5644 command_runner.go:130] >   Kubelet Version:            v1.32.1
	I0210 12:23:14.438370    5644 command_runner.go:130] >   Kube-Proxy Version:         v1.32.1
	I0210 12:23:14.438467    5644 command_runner.go:130] > PodCIDR:                      10.244.1.0/24
	I0210 12:23:14.438467    5644 command_runner.go:130] > PodCIDRs:                     10.244.1.0/24
	I0210 12:23:14.438467    5644 command_runner.go:130] > Non-terminated Pods:          (3 in total)
	I0210 12:23:14.438467    5644 command_runner.go:130] >   Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0210 12:23:14.438527    5644 command_runner.go:130] >   ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	I0210 12:23:14.439532    5644 command_runner.go:130] >   default                     busybox-58667487b6-4g8jw    0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	I0210 12:23:14.439618    5644 command_runner.go:130] >   kube-system                 kindnet-tv6gk               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      20m
	I0210 12:23:14.440021    5644 command_runner.go:130] >   kube-system                 kube-proxy-xltxj            0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0210 12:23:14.440021    5644 command_runner.go:130] > Allocated resources:
	I0210 12:23:14.440021    5644 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0210 12:23:14.440021    5644 command_runner.go:130] >   Resource           Requests   Limits
	I0210 12:23:14.440021    5644 command_runner.go:130] >   --------           --------   ------
	I0210 12:23:14.440135    5644 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0210 12:23:14.440135    5644 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0210 12:23:14.440135    5644 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0210 12:23:14.440135    5644 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0210 12:23:14.440240    5644 command_runner.go:130] > Events:
	I0210 12:23:14.440240    5644 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0210 12:23:14.440240    5644 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0210 12:23:14.440240    5644 command_runner.go:130] >   Normal  Starting                 20m                kube-proxy       
	I0210 12:23:14.440240    5644 command_runner.go:130] >   Normal  NodeHasSufficientMemory  20m (x2 over 20m)  kubelet          Node multinode-032400-m02 status is now: NodeHasSufficientMemory
	I0210 12:23:14.440240    5644 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    20m (x2 over 20m)  kubelet          Node multinode-032400-m02 status is now: NodeHasNoDiskPressure
	I0210 12:23:14.440323    5644 command_runner.go:130] >   Normal  NodeHasSufficientPID     20m (x2 over 20m)  kubelet          Node multinode-032400-m02 status is now: NodeHasSufficientPID
	I0210 12:23:14.440323    5644 command_runner.go:130] >   Normal  NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	I0210 12:23:14.440323    5644 command_runner.go:130] >   Normal  RegisteredNode           20m                node-controller  Node multinode-032400-m02 event: Registered Node multinode-032400-m02 in Controller
	I0210 12:23:14.440323    5644 command_runner.go:130] >   Normal  NodeReady                20m                kubelet          Node multinode-032400-m02 status is now: NodeReady
	I0210 12:23:14.440323    5644 command_runner.go:130] >   Normal  RegisteredNode           73s                node-controller  Node multinode-032400-m02 event: Registered Node multinode-032400-m02 in Controller
	I0210 12:23:14.440323    5644 command_runner.go:130] >   Normal  NodeNotReady             23s                node-controller  Node multinode-032400-m02 status is now: NodeNotReady
	I0210 12:23:14.440323    5644 command_runner.go:130] > Name:               multinode-032400-m03
	I0210 12:23:14.440426    5644 command_runner.go:130] > Roles:              <none>
	I0210 12:23:14.440426    5644 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0210 12:23:14.440426    5644 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0210 12:23:14.440426    5644 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0210 12:23:14.440426    5644 command_runner.go:130] >                     kubernetes.io/hostname=multinode-032400-m03
	I0210 12:23:14.440426    5644 command_runner.go:130] >                     kubernetes.io/os=linux
	I0210 12:23:14.440426    5644 command_runner.go:130] >                     minikube.k8s.io/commit=a597502568cd649748018b4cfeb698a4b8b36160
	I0210 12:23:14.440426    5644 command_runner.go:130] >                     minikube.k8s.io/name=multinode-032400
	I0210 12:23:14.440517    5644 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0210 12:23:14.440517    5644 command_runner.go:130] >                     minikube.k8s.io/updated_at=2025_02_10T12_17_31_0700
	I0210 12:23:14.440517    5644 command_runner.go:130] >                     minikube.k8s.io/version=v1.35.0
	I0210 12:23:14.440549    5644 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0210 12:23:14.440549    5644 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0210 12:23:14.440549    5644 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0210 12:23:14.440549    5644 command_runner.go:130] > CreationTimestamp:  Mon, 10 Feb 2025 12:17:31 +0000
	I0210 12:23:14.440549    5644 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0210 12:23:14.440549    5644 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0210 12:23:14.440549    5644 command_runner.go:130] > Unschedulable:      false
	I0210 12:23:14.440549    5644 command_runner.go:130] > Lease:
	I0210 12:23:14.440549    5644 command_runner.go:130] >   HolderIdentity:  multinode-032400-m03
	I0210 12:23:14.440549    5644 command_runner.go:130] >   AcquireTime:     <unset>
	I0210 12:23:14.440642    5644 command_runner.go:130] >   RenewTime:       Mon, 10 Feb 2025 12:18:32 +0000
	I0210 12:23:14.440642    5644 command_runner.go:130] > Conditions:
	I0210 12:23:14.440642    5644 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0210 12:23:14.440642    5644 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0210 12:23:14.440642    5644 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 10 Feb 2025 12:17:46 +0000   Mon, 10 Feb 2025 12:19:27 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0210 12:23:14.440642    5644 command_runner.go:130] >   DiskPressure     Unknown   Mon, 10 Feb 2025 12:17:46 +0000   Mon, 10 Feb 2025 12:19:27 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0210 12:23:14.440730    5644 command_runner.go:130] >   PIDPressure      Unknown   Mon, 10 Feb 2025 12:17:46 +0000   Mon, 10 Feb 2025 12:19:27 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0210 12:23:14.440763    5644 command_runner.go:130] >   Ready            Unknown   Mon, 10 Feb 2025 12:17:46 +0000   Mon, 10 Feb 2025 12:19:27 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0210 12:23:14.441204    5644 command_runner.go:130] > Addresses:
	I0210 12:23:14.441204    5644 command_runner.go:130] >   InternalIP:  172.29.129.10
	I0210 12:23:14.441204    5644 command_runner.go:130] >   Hostname:    multinode-032400-m03
	I0210 12:23:14.441204    5644 command_runner.go:130] > Capacity:
	I0210 12:23:14.441273    5644 command_runner.go:130] >   cpu:                2
	I0210 12:23:14.441273    5644 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0210 12:23:14.441306    5644 command_runner.go:130] >   hugepages-2Mi:      0
	I0210 12:23:14.441306    5644 command_runner.go:130] >   memory:             2164264Ki
	I0210 12:23:14.441306    5644 command_runner.go:130] >   pods:               110
	I0210 12:23:14.441333    5644 command_runner.go:130] > Allocatable:
	I0210 12:23:14.441333    5644 command_runner.go:130] >   cpu:                2
	I0210 12:23:14.441333    5644 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0210 12:23:14.441333    5644 command_runner.go:130] >   hugepages-2Mi:      0
	I0210 12:23:14.441333    5644 command_runner.go:130] >   memory:             2164264Ki
	I0210 12:23:14.441333    5644 command_runner.go:130] >   pods:               110
	I0210 12:23:14.441333    5644 command_runner.go:130] > System Info:
	I0210 12:23:14.441333    5644 command_runner.go:130] >   Machine ID:                 96be2fd73058434db5f7c29ebdc5358c
	I0210 12:23:14.441333    5644 command_runner.go:130] >   System UUID:                d4861806-0b5d-3746-b974-5781fc0e801c
	I0210 12:23:14.441333    5644 command_runner.go:130] >   Boot ID:                    e5a245dc-eb44-45ba-a280-4e410deb107b
	I0210 12:23:14.441438    5644 command_runner.go:130] >   Kernel Version:             5.10.207
	I0210 12:23:14.441438    5644 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0210 12:23:14.441438    5644 command_runner.go:130] >   Operating System:           linux
	I0210 12:23:14.441438    5644 command_runner.go:130] >   Architecture:               amd64
	I0210 12:23:14.441438    5644 command_runner.go:130] >   Container Runtime Version:  docker://27.4.0
	I0210 12:23:14.441438    5644 command_runner.go:130] >   Kubelet Version:            v1.32.1
	I0210 12:23:14.441438    5644 command_runner.go:130] >   Kube-Proxy Version:         v1.32.1
	I0210 12:23:14.441438    5644 command_runner.go:130] > PodCIDR:                      10.244.4.0/24
	I0210 12:23:14.441438    5644 command_runner.go:130] > PodCIDRs:                     10.244.4.0/24
	I0210 12:23:14.441529    5644 command_runner.go:130] > Non-terminated Pods:          (2 in total)
	I0210 12:23:14.441529    5644 command_runner.go:130] >   Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0210 12:23:14.441529    5644 command_runner.go:130] >   ---------                   ----                ------------  ----------  ---------------  -------------  ---
	I0210 12:23:14.441564    5644 command_runner.go:130] >   kube-system                 kindnet-jcmlf       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	I0210 12:23:14.441564    5644 command_runner.go:130] >   kube-system                 kube-proxy-tbtqd    0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	I0210 12:23:14.441564    5644 command_runner.go:130] > Allocated resources:
	I0210 12:23:14.441564    5644 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0210 12:23:14.441564    5644 command_runner.go:130] >   Resource           Requests   Limits
	I0210 12:23:14.441651    5644 command_runner.go:130] >   --------           --------   ------
	I0210 12:23:14.441651    5644 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0210 12:23:14.441651    5644 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0210 12:23:14.441651    5644 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0210 12:23:14.441651    5644 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0210 12:23:14.441727    5644 command_runner.go:130] > Events:
	I0210 12:23:14.441727    5644 command_runner.go:130] >   Type    Reason                   Age                    From             Message
	I0210 12:23:14.441727    5644 command_runner.go:130] >   ----    ------                   ----                   ----             -------
	I0210 12:23:14.441727    5644 command_runner.go:130] >   Normal  Starting                 15m                    kube-proxy       
	I0210 12:23:14.441727    5644 command_runner.go:130] >   Normal  Starting                 5m39s                  kube-proxy       
	I0210 12:23:14.441793    5644 command_runner.go:130] >   Normal  NodeAllocatableEnforced  16m                    kubelet          Updated Node Allocatable limit across pods
	I0210 12:23:14.441793    5644 command_runner.go:130] >   Normal  NodeHasSufficientMemory  16m (x2 over 16m)      kubelet          Node multinode-032400-m03 status is now: NodeHasSufficientMemory
	I0210 12:23:14.441835    5644 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    16m (x2 over 16m)      kubelet          Node multinode-032400-m03 status is now: NodeHasNoDiskPressure
	I0210 12:23:14.441958    5644 command_runner.go:130] >   Normal  NodeHasSufficientPID     16m (x2 over 16m)      kubelet          Node multinode-032400-m03 status is now: NodeHasSufficientPID
	I0210 12:23:14.441958    5644 command_runner.go:130] >   Normal  NodeReady                15m                    kubelet          Node multinode-032400-m03 status is now: NodeReady
	I0210 12:23:14.441958    5644 command_runner.go:130] >   Normal  NodeHasSufficientMemory  5m43s (x2 over 5m44s)  kubelet          Node multinode-032400-m03 status is now: NodeHasSufficientMemory
	I0210 12:23:14.441958    5644 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    5m43s (x2 over 5m44s)  kubelet          Node multinode-032400-m03 status is now: NodeHasNoDiskPressure
	I0210 12:23:14.441958    5644 command_runner.go:130] >   Normal  NodeHasSufficientPID     5m43s (x2 over 5m44s)  kubelet          Node multinode-032400-m03 status is now: NodeHasSufficientPID
	I0210 12:23:14.441958    5644 command_runner.go:130] >   Normal  NodeAllocatableEnforced  5m43s                  kubelet          Updated Node Allocatable limit across pods
	I0210 12:23:14.441958    5644 command_runner.go:130] >   Normal  RegisteredNode           5m42s                  node-controller  Node multinode-032400-m03 event: Registered Node multinode-032400-m03 in Controller
	I0210 12:23:14.441958    5644 command_runner.go:130] >   Normal  NodeReady                5m28s                  kubelet          Node multinode-032400-m03 status is now: NodeReady
	I0210 12:23:14.441958    5644 command_runner.go:130] >   Normal  NodeNotReady             3m47s                  node-controller  Node multinode-032400-m03 status is now: NodeNotReady
	I0210 12:23:14.442144    5644 command_runner.go:130] >   Normal  RegisteredNode           73s                    node-controller  Node multinode-032400-m03 event: Registered Node multinode-032400-m03 in Controller
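The node dumps above show multinode-032400-m03 tainted node.kubernetes.io/unreachable with every condition Unknown ("Kubelet stopped posting node status"), and multinode-032400-m02 flagged NodeNotReady 23s ago by the node-controller. A minimal way to reproduce this view by hand (a sketch, assuming kubectl is pointed at this run's kubeconfig; the profile and node names are taken from the log above):

    kubectl get nodes -o wide
    kubectl describe node multinode-032400-m03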
	I0210 12:23:14.452880    5644 logs.go:123] Gathering logs for etcd [2c0b97381825] ...
	I0210 12:23:14.452880    5644 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c0b97381825"
	I0210 12:23:14.488027    5644 command_runner.go:130] ! {"level":"warn","ts":"2025-02-10T12:21:54.704341Z","caller":"embed/config.go:689","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0210 12:23:14.488599    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.704447Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://172.29.129.181:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://172.29.129.181:2380","--initial-cluster=multinode-032400=https://172.29.129.181:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://172.29.129.181:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://172.29.129.181:2380","--name=multinode-032400","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	I0210 12:23:14.488711    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.704520Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I0210 12:23:14.488754    5644 command_runner.go:130] ! {"level":"warn","ts":"2025-02-10T12:21:54.704892Z","caller":"embed/config.go:689","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0210 12:23:14.488754    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.704933Z","caller":"embed/etcd.go:128","msg":"configuring peer listeners","listen-peer-urls":["https://172.29.129.181:2380"]}
	I0210 12:23:14.488802    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.704972Z","caller":"embed/etcd.go:497","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0210 12:23:14.488852    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.708617Z","caller":"embed/etcd.go:136","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://172.29.129.181:2379"]}
	I0210 12:23:14.488995    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.709796Z","caller":"embed/etcd.go:311","msg":"starting an etcd server","etcd-version":"3.5.16","git-sha":"f20bbad","go-version":"go1.22.7","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"multinode-032400","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://172.29.129.181:2380"],"listen-peer-urls":["https://172.29.129.181:2380"],"advertise-client-urls":["https://172.29.129.181:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.29.129.181:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	I0210 12:23:14.489038    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.729354Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"16.974017ms"}
	I0210 12:23:14.489038    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.755049Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	I0210 12:23:14.489084    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.785036Z","caller":"etcdserver/raft.go:540","msg":"restarting local member","cluster-id":"939f48ba2d0ec869","local-member-id":"9ecc865dcee1fe8f","commit-index":2031}
	I0210 12:23:14.489134    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.786308Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9ecc865dcee1fe8f switched to configuration voters=()"}
	I0210 12:23:14.489180    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.786538Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9ecc865dcee1fe8f became follower at term 2"}
	I0210 12:23:14.489221    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.786684Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 9ecc865dcee1fe8f [peers: [], term: 2, commit: 2031, applied: 0, lastindex: 2031, lastterm: 2]"}
	I0210 12:23:14.489221    5644 command_runner.go:130] ! {"level":"warn","ts":"2025-02-10T12:21:54.799505Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	I0210 12:23:14.489267    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.805220Z","caller":"mvcc/kvstore.go:346","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":1385}
	I0210 12:23:14.489314    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.819723Z","caller":"mvcc/kvstore.go:423","msg":"kvstore restored","current-rev":1757}
	I0210 12:23:14.489360    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.831867Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I0210 12:23:14.489360    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.839898Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"9ecc865dcee1fe8f","timeout":"7s"}
	I0210 12:23:14.489401    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.841678Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"9ecc865dcee1fe8f"}
	I0210 12:23:14.489446    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.841933Z","caller":"etcdserver/server.go:873","msg":"starting etcd server","local-member-id":"9ecc865dcee1fe8f","local-server-version":"3.5.16","cluster-version":"to_be_decided"}
	I0210 12:23:14.489495    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.842749Z","caller":"etcdserver/server.go:773","msg":"starting initial election tick advance","election-ticks":10}
	I0210 12:23:14.489495    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.844230Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	I0210 12:23:14.489581    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.846545Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0210 12:23:14.489676    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.847568Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"9ecc865dcee1fe8f","initial-advertise-peer-urls":["https://172.29.129.181:2380"],"listen-peer-urls":["https://172.29.129.181:2380"],"advertise-client-urls":["https://172.29.129.181:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.29.129.181:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I0210 12:23:14.489723    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.848293Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I0210 12:23:14.489764    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.848857Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I0210 12:23:14.489803    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.848872Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I0210 12:23:14.489852    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.849451Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I0210 12:23:14.489852    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.848623Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9ecc865dcee1fe8f switched to configuration voters=(11442668490702585487)"}
	I0210 12:23:14.489943    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.849568Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"939f48ba2d0ec869","local-member-id":"9ecc865dcee1fe8f","added-peer-id":"9ecc865dcee1fe8f","added-peer-peer-urls":["https://172.29.136.201:2380"]}
	I0210 12:23:14.489961    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.849668Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"939f48ba2d0ec869","local-member-id":"9ecc865dcee1fe8f","cluster-version":"3.5"}
	I0210 12:23:14.489961    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.849697Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	I0210 12:23:14.489961    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.848659Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"172.29.129.181:2380"}
	I0210 12:23:14.489961    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:54.850838Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"172.29.129.181:2380"}
	I0210 12:23:14.489961    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:56.088224Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9ecc865dcee1fe8f is starting a new election at term 2"}
	I0210 12:23:14.489961    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:56.088502Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9ecc865dcee1fe8f became pre-candidate at term 2"}
	I0210 12:23:14.489961    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:56.088614Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9ecc865dcee1fe8f received MsgPreVoteResp from 9ecc865dcee1fe8f at term 2"}
	I0210 12:23:14.489961    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:56.088706Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9ecc865dcee1fe8f became candidate at term 3"}
	I0210 12:23:14.489961    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:56.088790Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9ecc865dcee1fe8f received MsgVoteResp from 9ecc865dcee1fe8f at term 3"}
	I0210 12:23:14.489961    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:56.088924Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9ecc865dcee1fe8f became leader at term 3"}
	I0210 12:23:14.489961    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:56.089002Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9ecc865dcee1fe8f elected leader 9ecc865dcee1fe8f at term 3"}
	I0210 12:23:14.489961    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:56.095452Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"9ecc865dcee1fe8f","local-member-attributes":"{Name:multinode-032400 ClientURLs:[https://172.29.129.181:2379]}","request-path":"/0/members/9ecc865dcee1fe8f/attributes","cluster-id":"939f48ba2d0ec869","publish-timeout":"7s"}
	I0210 12:23:14.489961    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:56.095630Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0210 12:23:14.489961    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:56.098215Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I0210 12:23:14.489961    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:56.098453Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I0210 12:23:14.489961    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:56.095689Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0210 12:23:14.489961    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:56.109624Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	I0210 12:23:14.489961    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:56.115942Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	I0210 12:23:14.489961    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:56.120127Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	I0210 12:23:14.489961    5644 command_runner.go:130] ! {"level":"info","ts":"2025-02-10T12:21:56.126374Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.29.129.181:2379"}
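The etcd log above records a clean single-member restart: the member rejoins as a follower at term 2, pre-votes, elects itself leader at term 3, and begins serving client traffic on 172.29.129.181:2379. To pull the same container log by hand, something like the following should work (a sketch, reusing the profile name and container ID from this run; minikube ssh executes the command inside the VM):

    minikube -p multinode-032400 ssh -- docker logs --tail 400 2c0b97381825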
	I0210 12:23:14.498219    5644 logs.go:123] Gathering logs for kube-proxy [148309413de8] ...
	I0210 12:23:14.498755    5644 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 148309413de8"
	I0210 12:23:14.525515    5644 command_runner.go:130] ! I0210 11:59:18.625067       1 server_linux.go:66] "Using iptables proxy"
	I0210 12:23:14.525982    5644 command_runner.go:130] ! E0210 11:59:18.658116       1 proxier.go:733] "Error cleaning up nftables rules" err=<
	I0210 12:23:14.526017    5644 command_runner.go:130] ! 	could not run nftables command: /dev/stdin:1:1-24: Error: Could not process rule: Operation not supported
	I0210 12:23:14.526065    5644 command_runner.go:130] ! 	add table ip kube-proxy
	I0210 12:23:14.526065    5644 command_runner.go:130] ! 	^^^^^^^^^^^^^^^^^^^^^^^^
	I0210 12:23:14.526065    5644 command_runner.go:130] !  >
	I0210 12:23:14.526101    5644 command_runner.go:130] ! E0210 11:59:18.694686       1 proxier.go:733] "Error cleaning up nftables rules" err=<
	I0210 12:23:14.526101    5644 command_runner.go:130] ! 	could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
	I0210 12:23:14.526147    5644 command_runner.go:130] ! 	add table ip6 kube-proxy
	I0210 12:23:14.526147    5644 command_runner.go:130] ! 	^^^^^^^^^^^^^^^^^^^^^^^^^
	I0210 12:23:14.526147    5644 command_runner.go:130] !  >
	I0210 12:23:14.526147    5644 command_runner.go:130] ! I0210 11:59:19.251769       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["172.29.136.201"]
	I0210 12:23:14.526188    5644 command_runner.go:130] ! E0210 11:59:19.252111       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0210 12:23:14.526188    5644 command_runner.go:130] ! I0210 11:59:19.312427       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0210 12:23:14.526241    5644 command_runner.go:130] ! I0210 11:59:19.312556       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0210 12:23:14.526241    5644 command_runner.go:130] ! I0210 11:59:19.312585       1 server_linux.go:170] "Using iptables Proxier"
	I0210 12:23:14.526329    5644 command_runner.go:130] ! I0210 11:59:19.317423       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0210 12:23:14.526329    5644 command_runner.go:130] ! I0210 11:59:19.318586       1 server.go:497] "Version info" version="v1.32.1"
	I0210 12:23:14.526371    5644 command_runner.go:130] ! I0210 11:59:19.318681       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0210 12:23:14.526371    5644 command_runner.go:130] ! I0210 11:59:19.321084       1 config.go:199] "Starting service config controller"
	I0210 12:23:14.526371    5644 command_runner.go:130] ! I0210 11:59:19.321134       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0210 12:23:14.526425    5644 command_runner.go:130] ! I0210 11:59:19.321326       1 config.go:105] "Starting endpoint slice config controller"
	I0210 12:23:14.526425    5644 command_runner.go:130] ! I0210 11:59:19.321339       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0210 12:23:14.526467    5644 command_runner.go:130] ! I0210 11:59:19.322061       1 config.go:329] "Starting node config controller"
	I0210 12:23:14.526513    5644 command_runner.go:130] ! I0210 11:59:19.322091       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0210 12:23:14.526513    5644 command_runner.go:130] ! I0210 11:59:19.421972       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0210 12:23:14.526554    5644 command_runner.go:130] ! I0210 11:59:19.422030       1 shared_informer.go:320] Caches are synced for service config
	I0210 12:23:14.526554    5644 command_runner.go:130] ! I0210 11:59:19.423298       1 shared_informer.go:320] Caches are synced for node config
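The two "Error cleaning up nftables rules ... Operation not supported" entries are cleanup attempts against a kernel built without nftables support; they are non-fatal here, and kube-proxy proceeds in iptables mode ("Using iptables Proxier") with all caches synced. One way to confirm the iptables rules were actually programmed (a sketch, assuming the same profile; the KUBE-SERVICES chain in the nat table is maintained by kube-proxy's iptables mode):

    minikube -p multinode-032400 ssh -- sudo iptables -t nat -L KUBE-SERVICES -n | head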
	I0210 12:23:14.530209    5644 logs.go:123] Gathering logs for Docker ...
	I0210 12:23:14.530260    5644 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0210 12:23:14.561305    5644 command_runner.go:130] > Feb 10 12:20:33 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0210 12:23:14.561370    5644 command_runner.go:130] > Feb 10 12:20:33 minikube cri-dockerd[223]: time="2025-02-10T12:20:33Z" level=info msg="Starting cri-dockerd 0.3.15 (c1c566e)"
	I0210 12:23:14.561424    5644 command_runner.go:130] > Feb 10 12:20:33 minikube cri-dockerd[223]: time="2025-02-10T12:20:33Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0210 12:23:14.561424    5644 command_runner.go:130] > Feb 10 12:20:33 minikube cri-dockerd[223]: time="2025-02-10T12:20:33Z" level=info msg="Start docker client with request timeout 0s"
	I0210 12:23:14.561540    5644 command_runner.go:130] > Feb 10 12:20:33 minikube cri-dockerd[223]: time="2025-02-10T12:20:33Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0210 12:23:14.561540    5644 command_runner.go:130] > Feb 10 12:20:33 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0210 12:23:14.561609    5644 command_runner.go:130] > Feb 10 12:20:33 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0210 12:23:14.561672    5644 command_runner.go:130] > Feb 10 12:20:33 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0210 12:23:14.561734    5644 command_runner.go:130] > Feb 10 12:20:36 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 1.
	I0210 12:23:14.561734    5644 command_runner.go:130] > Feb 10 12:20:36 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0210 12:23:14.561734    5644 command_runner.go:130] > Feb 10 12:20:36 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0210 12:23:14.561798    5644 command_runner.go:130] > Feb 10 12:20:36 minikube cri-dockerd[417]: time="2025-02-10T12:20:36Z" level=info msg="Starting cri-dockerd 0.3.15 (c1c566e)"
	I0210 12:23:14.561798    5644 command_runner.go:130] > Feb 10 12:20:36 minikube cri-dockerd[417]: time="2025-02-10T12:20:36Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0210 12:23:14.561859    5644 command_runner.go:130] > Feb 10 12:20:36 minikube cri-dockerd[417]: time="2025-02-10T12:20:36Z" level=info msg="Start docker client with request timeout 0s"
	I0210 12:23:14.561921    5644 command_runner.go:130] > Feb 10 12:20:36 minikube cri-dockerd[417]: time="2025-02-10T12:20:36Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0210 12:23:14.561976    5644 command_runner.go:130] > Feb 10 12:20:36 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0210 12:23:14.562037    5644 command_runner.go:130] > Feb 10 12:20:36 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0210 12:23:14.562094    5644 command_runner.go:130] > Feb 10 12:20:36 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0210 12:23:14.562154    5644 command_runner.go:130] > Feb 10 12:20:38 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 2.
	I0210 12:23:14.562154    5644 command_runner.go:130] > Feb 10 12:20:38 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0210 12:23:14.562211    5644 command_runner.go:130] > Feb 10 12:20:38 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0210 12:23:14.562211    5644 command_runner.go:130] > Feb 10 12:20:38 minikube cri-dockerd[425]: time="2025-02-10T12:20:38Z" level=info msg="Starting cri-dockerd 0.3.15 (c1c566e)"
	I0210 12:23:14.562272    5644 command_runner.go:130] > Feb 10 12:20:38 minikube cri-dockerd[425]: time="2025-02-10T12:20:38Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0210 12:23:14.562337    5644 command_runner.go:130] > Feb 10 12:20:38 minikube cri-dockerd[425]: time="2025-02-10T12:20:38Z" level=info msg="Start docker client with request timeout 0s"
	I0210 12:23:14.562397    5644 command_runner.go:130] > Feb 10 12:20:38 minikube cri-dockerd[425]: time="2025-02-10T12:20:38Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0210 12:23:14.562452    5644 command_runner.go:130] > Feb 10 12:20:38 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0210 12:23:14.562452    5644 command_runner.go:130] > Feb 10 12:20:38 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0210 12:23:14.562512    5644 command_runner.go:130] > Feb 10 12:20:38 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0210 12:23:14.562578    5644 command_runner.go:130] > Feb 10 12:20:40 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 3.
	I0210 12:23:14.562578    5644 command_runner.go:130] > Feb 10 12:20:40 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0210 12:23:14.562638    5644 command_runner.go:130] > Feb 10 12:20:40 minikube systemd[1]: cri-docker.service: Start request repeated too quickly.
	I0210 12:23:14.562693    5644 command_runner.go:130] > Feb 10 12:20:40 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0210 12:23:14.562693    5644 command_runner.go:130] > Feb 10 12:20:40 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
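The cri-docker failures up to this point are start-ordering noise: cri-dockerd came up before dockerd, could not reach /var/run/docker.sock, and hit systemd's restart limit after three attempts; the journal then shows dockerd itself starting and cri-dockerd recovering. To inspect both units after boot (a sketch, assuming the same profile):

    minikube -p multinode-032400 ssh -- sudo systemctl status docker cri-docker --no-pager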
	I0210 12:23:14.562755    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 systemd[1]: Starting Docker Application Container Engine...
	I0210 12:23:14.562812    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[652]: time="2025-02-10T12:21:19.226981799Z" level=info msg="Starting up"
	I0210 12:23:14.562872    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[652]: time="2025-02-10T12:21:19.228905904Z" level=info msg="containerd not running, starting managed containerd"
	I0210 12:23:14.562872    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[652]: time="2025-02-10T12:21:19.229983406Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=658
	I0210 12:23:14.562937    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.261668386Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	I0210 12:23:14.562998    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.289760856Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0210 12:23:14.563055    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.289873057Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0210 12:23:14.563115    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.289938357Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0210 12:23:14.563172    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.289955257Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0210 12:23:14.563233    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.290688059Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0210 12:23:14.563233    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.290855359Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0210 12:23:14.563288    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.291046360Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0210 12:23:14.563349    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.291150260Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0210 12:23:14.563403    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.291171360Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0210 12:23:14.563463    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.291182560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0210 12:23:14.563520    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.291676861Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0210 12:23:14.563520    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.292369263Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0210 12:23:14.563581    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.300517383Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0210 12:23:14.563646    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.300550484Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0210 12:23:14.563765    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.300790784Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0210 12:23:14.563827    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.300846284Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0210 12:23:14.563891    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.301486786Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0210 12:23:14.563891    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.301530786Z" level=info msg="metadata content store policy set" policy=shared
	I0210 12:23:14.563954    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.306800699Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0210 12:23:14.564012    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.306938800Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0210 12:23:14.564073    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.306962400Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0210 12:23:14.564073    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.306982400Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0210 12:23:14.564133    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.306998000Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0210 12:23:14.564195    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.307070900Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0210 12:23:14.564254    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.307354201Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0210 12:23:14.564316    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.307779102Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0210 12:23:14.564375    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.307803302Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0210 12:23:14.564375    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.307819902Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0210 12:23:14.564437    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.307835502Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0210 12:23:14.564494    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.307854902Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0210 12:23:14.564563    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.307868302Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0210 12:23:14.564620    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.307886902Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0210 12:23:14.564683    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.307903802Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0210 12:23:14.564743    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.307918302Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0210 12:23:14.564743    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.307933302Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0210 12:23:14.564804    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.307946902Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0210 12:23:14.564861    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.307973202Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0210 12:23:14.564922    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.307988502Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0210 12:23:14.564977    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308002102Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0210 12:23:14.565036    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308018302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0210 12:23:14.565092    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308031702Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0210 12:23:14.565151    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308046102Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0210 12:23:14.565206    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308058902Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0210 12:23:14.565265    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308073102Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0210 12:23:14.565265    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308088402Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0210 12:23:14.565322    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308111803Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0210 12:23:14.565382    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308139203Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0210 12:23:14.565437    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308154703Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0210 12:23:14.565497    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308168203Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0210 12:23:14.565497    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308185103Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0210 12:23:14.565563    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308206703Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0210 12:23:14.565622    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308220903Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0210 12:23:14.565677    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308233503Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0210 12:23:14.565737    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308287903Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0210 12:23:14.565797    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308326803Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0210 12:23:14.565858    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308340203Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0210 12:23:14.565918    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308354603Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0210 12:23:14.566025    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308366403Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0210 12:23:14.566071    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308381203Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0210 12:23:14.566136    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308392603Z" level=info msg="NRI interface is disabled by configuration."
	I0210 12:23:14.566136    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308672504Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0210 12:23:14.566196    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308811104Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0210 12:23:14.566196    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308872804Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0210 12:23:14.566266    5644 command_runner.go:130] > Feb 10 12:21:19 multinode-032400 dockerd[658]: time="2025-02-10T12:21:19.308911105Z" level=info msg="containerd successfully booted in 0.050730s"
	I0210 12:23:14.566325    5644 command_runner.go:130] > Feb 10 12:21:20 multinode-032400 dockerd[652]: time="2025-02-10T12:21:20.282476810Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0210 12:23:14.566381    5644 command_runner.go:130] > Feb 10 12:21:20 multinode-032400 dockerd[652]: time="2025-02-10T12:21:20.530993194Z" level=info msg="Loading containers: start."
	I0210 12:23:14.566441    5644 command_runner.go:130] > Feb 10 12:21:20 multinode-032400 dockerd[652]: time="2025-02-10T12:21:20.796529619Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0210 12:23:14.566496    5644 command_runner.go:130] > Feb 10 12:21:20 multinode-032400 dockerd[652]: time="2025-02-10T12:21:20.946848197Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0210 12:23:14.566496    5644 command_runner.go:130] > Feb 10 12:21:21 multinode-032400 dockerd[652]: time="2025-02-10T12:21:21.063713732Z" level=info msg="Loading containers: done."
	I0210 12:23:14.566557    5644 command_runner.go:130] > Feb 10 12:21:21 multinode-032400 dockerd[652]: time="2025-02-10T12:21:21.090121636Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	I0210 12:23:14.566612    5644 command_runner.go:130] > Feb 10 12:21:21 multinode-032400 dockerd[652]: time="2025-02-10T12:21:21.090236272Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	I0210 12:23:14.566671    5644 command_runner.go:130] > Feb 10 12:21:21 multinode-032400 dockerd[652]: time="2025-02-10T12:21:21.090266381Z" level=info msg="Docker daemon" commit=92a8393 containerd-snapshotter=false storage-driver=overlay2 version=27.4.0
	I0210 12:23:14.566728    5644 command_runner.go:130] > Feb 10 12:21:21 multinode-032400 dockerd[652]: time="2025-02-10T12:21:21.090811448Z" level=info msg="Daemon has completed initialization"
	I0210 12:23:14.566728    5644 command_runner.go:130] > Feb 10 12:21:21 multinode-032400 dockerd[652]: time="2025-02-10T12:21:21.131876651Z" level=info msg="API listen on /var/run/docker.sock"
	I0210 12:23:14.566791    5644 command_runner.go:130] > Feb 10 12:21:21 multinode-032400 dockerd[652]: time="2025-02-10T12:21:21.132103020Z" level=info msg="API listen on [::]:2376"
	I0210 12:23:14.566791    5644 command_runner.go:130] > Feb 10 12:21:21 multinode-032400 systemd[1]: Started Docker Application Container Engine.
	I0210 12:23:14.566849    5644 command_runner.go:130] > Feb 10 12:21:45 multinode-032400 dockerd[652]: time="2025-02-10T12:21:45.024556788Z" level=info msg="Processing signal 'terminated'"
	I0210 12:23:14.566909    5644 command_runner.go:130] > Feb 10 12:21:45 multinode-032400 dockerd[652]: time="2025-02-10T12:21:45.027219616Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0210 12:23:14.566965    5644 command_runner.go:130] > Feb 10 12:21:45 multinode-032400 systemd[1]: Stopping Docker Application Container Engine...
	I0210 12:23:14.566965    5644 command_runner.go:130] > Feb 10 12:21:45 multinode-032400 dockerd[652]: time="2025-02-10T12:21:45.028493777Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0210 12:23:14.567079    5644 command_runner.go:130] > Feb 10 12:21:45 multinode-032400 dockerd[652]: time="2025-02-10T12:21:45.028923098Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0210 12:23:14.567079    5644 command_runner.go:130] > Feb 10 12:21:45 multinode-032400 dockerd[652]: time="2025-02-10T12:21:45.029499825Z" level=info msg="Daemon shutdown complete"
	I0210 12:23:14.567138    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 systemd[1]: docker.service: Deactivated successfully.
	I0210 12:23:14.567138    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 systemd[1]: Stopped Docker Application Container Engine.
	I0210 12:23:14.567203    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 systemd[1]: Starting Docker Application Container Engine...
	I0210 12:23:14.567203    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1101]: time="2025-02-10T12:21:46.084081094Z" level=info msg="Starting up"
	I0210 12:23:14.567263    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1101]: time="2025-02-10T12:21:46.084976538Z" level=info msg="containerd not running, starting managed containerd"
	I0210 12:23:14.567320    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1101]: time="2025-02-10T12:21:46.085890382Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1108
	I0210 12:23:14.567381    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.115367801Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	I0210 12:23:14.567446    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.141577962Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0210 12:23:14.567495    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.141694568Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0210 12:23:14.567532    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.141841575Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0210 12:23:14.567581    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.141861576Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0210 12:23:14.567636    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.141895578Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0210 12:23:14.567684    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.141908978Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0210 12:23:14.567738    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.142072686Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0210 12:23:14.567799    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.142222293Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0210 12:23:14.567892    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.142244195Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0210 12:23:14.567952    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.142261595Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0210 12:23:14.568003    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.142290097Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0210 12:23:14.568058    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.142407302Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0210 12:23:14.568109    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.145701161Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0210 12:23:14.568164    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.145822967Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0210 12:23:14.568215    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.145984775Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0210 12:23:14.568300    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.146081579Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0210 12:23:14.568353    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.146115481Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0210 12:23:14.568403    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.146134282Z" level=info msg="metadata content store policy set" policy=shared
	I0210 12:23:14.568459    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.146552002Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0210 12:23:14.568511    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.146601004Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0210 12:23:14.568511    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.146617705Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0210 12:23:14.568567    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.146633006Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0210 12:23:14.568617    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.146647807Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0210 12:23:14.568725    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.146697109Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0210 12:23:14.568781    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147110429Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0210 12:23:14.568833    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147324539Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0210 12:23:14.568887    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147423444Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0210 12:23:14.568937    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147441845Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0210 12:23:14.568937    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147456345Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0210 12:23:14.569004    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147470646Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0210 12:23:14.569064    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147499048Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0210 12:23:14.569121    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147516448Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0210 12:23:14.569170    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147532049Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0210 12:23:14.569224    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147546750Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0210 12:23:14.569224    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147559350Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0210 12:23:14.569224    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147573151Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0210 12:23:14.569224    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147593252Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0210 12:23:14.569224    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147608153Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0210 12:23:14.569224    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147634954Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0210 12:23:14.569224    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147654755Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0210 12:23:14.569224    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147668856Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0210 12:23:14.569224    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147683556Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0210 12:23:14.569224    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147697257Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0210 12:23:14.569224    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147710658Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0210 12:23:14.569224    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147724858Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0210 12:23:14.569224    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147802262Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0210 12:23:14.569224    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147821763Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0210 12:23:14.569224    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147834964Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0210 12:23:14.569224    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147859465Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0210 12:23:14.569224    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147878466Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0210 12:23:14.569224    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147900267Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0210 12:23:14.569224    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147914067Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0210 12:23:14.569224    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.147927668Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0210 12:23:14.569224    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.148050374Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0210 12:23:14.569224    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.148087376Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0210 12:23:14.569224    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.148100476Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0210 12:23:14.569767    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.148113477Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0210 12:23:14.569767    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.148124578Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0210 12:23:14.569891    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.148138778Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0210 12:23:14.569891    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.148151679Z" level=info msg="NRI interface is disabled by configuration."
	I0210 12:23:14.569891    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.148991719Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0210 12:23:14.569891    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.149071923Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0210 12:23:14.569891    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.149146027Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0210 12:23:14.569891    5644 command_runner.go:130] > Feb 10 12:21:46 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:46.149657651Z" level=info msg="containerd successfully booted in 0.035320s"
	I0210 12:23:14.569891    5644 command_runner.go:130] > Feb 10 12:21:47 multinode-032400 dockerd[1101]: time="2025-02-10T12:21:47.124814897Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0210 12:23:14.569891    5644 command_runner.go:130] > Feb 10 12:21:47 multinode-032400 dockerd[1101]: time="2025-02-10T12:21:47.155572178Z" level=info msg="Loading containers: start."
	I0210 12:23:14.569891    5644 command_runner.go:130] > Feb 10 12:21:47 multinode-032400 dockerd[1101]: time="2025-02-10T12:21:47.380096187Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0210 12:23:14.569891    5644 command_runner.go:130] > Feb 10 12:21:47 multinode-032400 dockerd[1101]: time="2025-02-10T12:21:47.494116276Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0210 12:23:14.569891    5644 command_runner.go:130] > Feb 10 12:21:47 multinode-032400 dockerd[1101]: time="2025-02-10T12:21:47.609502830Z" level=info msg="Loading containers: done."
	I0210 12:23:14.569891    5644 command_runner.go:130] > Feb 10 12:21:47 multinode-032400 dockerd[1101]: time="2025-02-10T12:21:47.634336526Z" level=info msg="Docker daemon" commit=92a8393 containerd-snapshotter=false storage-driver=overlay2 version=27.4.0
	I0210 12:23:14.569891    5644 command_runner.go:130] > Feb 10 12:21:47 multinode-032400 dockerd[1101]: time="2025-02-10T12:21:47.634493434Z" level=info msg="Daemon has completed initialization"
	I0210 12:23:14.569891    5644 command_runner.go:130] > Feb 10 12:21:47 multinode-032400 dockerd[1101]: time="2025-02-10T12:21:47.668508371Z" level=info msg="API listen on /var/run/docker.sock"
	I0210 12:23:14.569891    5644 command_runner.go:130] > Feb 10 12:21:47 multinode-032400 dockerd[1101]: time="2025-02-10T12:21:47.668715581Z" level=info msg="API listen on [::]:2376"
	I0210 12:23:14.569891    5644 command_runner.go:130] > Feb 10 12:21:47 multinode-032400 systemd[1]: Started Docker Application Container Engine.
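Both daemon starts above end with the same ip6tables warning: the guest kernel exposes no ip6tables nat table, so dockerd cannot create its IPv6 DOCKER NAT chain and proceeds with IPv4-only NAT. Initialization still completes, so the warning is harmless unless IPv6 port publishing is needed. A minimal Go sketch of the same probe, assuming ip6tables is on PATH and the process may run it; the DOCKER-PROBE chain name is invented for this sketch so Docker's own chain is left untouched:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Mirror the command from the log: create a chain in the IPv6 nat table.
    	// On this guest it fails with exit status 3 ("Table does not exist").
    	// DOCKER-PROBE is a scratch chain name used only by this sketch.
    	out, err := exec.Command("ip6tables", "--wait", "-t", "nat", "-N", "DOCKER-PROBE").CombinedOutput()
    	if err != nil {
    		fmt.Printf("ip6tables nat table unavailable: %v\n%s", err, out)
    		return
    	}
    	// The probe chain was created, so the table exists; remove it again.
    	_ = exec.Command("ip6tables", "--wait", "-t", "nat", "-X", "DOCKER-PROBE").Run()
    	fmt.Println("ip6tables nat table available")
    }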
	I0210 12:23:14.569891    5644 command_runner.go:130] > Feb 10 12:21:48 multinode-032400 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0210 12:23:14.569891    5644 command_runner.go:130] > Feb 10 12:21:48 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:21:48Z" level=info msg="Starting cri-dockerd 0.3.15 (c1c566e)"
	I0210 12:23:14.569891    5644 command_runner.go:130] > Feb 10 12:21:48 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:21:48Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0210 12:23:14.569891    5644 command_runner.go:130] > Feb 10 12:21:48 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:21:48Z" level=info msg="Start docker client with request timeout 0s"
	I0210 12:23:14.569891    5644 command_runner.go:130] > Feb 10 12:21:48 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:21:48Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I0210 12:23:14.569891    5644 command_runner.go:130] > Feb 10 12:21:48 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:21:48Z" level=info msg="Loaded network plugin cni"
	I0210 12:23:14.569891    5644 command_runner.go:130] > Feb 10 12:21:48 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:21:48Z" level=info msg="Docker cri networking managed by network plugin cni"
	I0210 12:23:14.569891    5644 command_runner.go:130] > Feb 10 12:21:48 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:21:48Z" level=info msg="Setting cgroupDriver cgroupfs"
	I0210 12:23:14.569891    5644 command_runner.go:130] > Feb 10 12:21:48 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:21:48Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I0210 12:23:14.569891    5644 command_runner.go:130] > Feb 10 12:21:48 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:21:48Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I0210 12:23:14.569891    5644 command_runner.go:130] > Feb 10 12:21:48 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:21:48Z" level=info msg="Start cri-dockerd grpc backend"
	I0210 12:23:14.570434    5644 command_runner.go:130] > Feb 10 12:21:48 multinode-032400 systemd[1]: Started CRI Interface for Docker Application Container Engine.
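At this point cri-dockerd is serving the CRI that the kubelet consumes: cgroupfs as the cgroup driver, networking delegated to the cni plugin, and the gRPC backend started. A sketch of querying that endpoint directly, assuming cri-dockerd's default unix socket /var/run/cri-dockerd.sock and the k8s.io/cri-api and google.golang.org/grpc modules:

    package main

    import (
    	"context"
    	"fmt"
    	"net"
    	"time"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	// Assumed default cri-dockerd endpoint for this sketch.
    	const sock = "/var/run/cri-dockerd.sock"
    	conn, err := grpc.Dial(sock,
    		grpc.WithTransportCredentials(insecure.NewCredentials()),
    		grpc.WithContextDialer(func(ctx context.Context, addr string) (net.Conn, error) {
    			return (&net.Dialer{}).DialContext(ctx, "unix", addr)
    		}))
    	if err != nil {
    		panic(err)
    	}
    	defer conn.Close()

    	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    	defer cancel()

    	// Status is the same call the kubelet polls for runtime/network health.
    	resp, err := runtimeapi.NewRuntimeServiceClient(conn).Status(ctx, &runtimeapi.StatusRequest{})
    	if err != nil {
    		panic(err)
    	}
    	for _, c := range resp.GetStatus().GetConditions() {
    		fmt.Printf("%s=%v %s\n", c.GetType(), c.GetStatus(), c.GetReason())
    	}
    }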
	I0210 12:23:14.570434    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:21:53Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-58667487b6-8shfg_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"ed034f1578c961d966543fd6901ac487893b2d9c55235293f852b6eba2ffef59\""
	I0210 12:23:14.570534    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:21:53Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-668d6bf9bc-w8rr9_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"794995bca6b5b46f3dd1b76ae2e6fa45046bd172772508f61b870824d72e297b\""
	I0210 12:23:14.570576    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:53.688319673Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0210 12:23:14.570576    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:53.688604987Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0210 12:23:14.570576    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:53.688649189Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:14.570576    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:53.689336722Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:14.570576    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:53.785048930Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0210 12:23:14.570576    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:53.785211338Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0210 12:23:14.570576    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:53.785249040Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:14.570576    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:53.787201934Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:14.570576    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:21:53Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8059b20f65945591b4ecc2d3aa8b6e119909c5a5c01922ce471ced5e88f22c37/resolv.conf as [nameserver 172.29.128.1]"
	I0210 12:23:14.570576    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:53.859964137Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0210 12:23:14.570576    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:53.860819978Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0210 12:23:14.570576    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:53.861045089Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:14.570576    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:53.861827326Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:14.570576    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:53.866236838Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0210 12:23:14.570576    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:53.866716362Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0210 12:23:14.570576    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:53.867048178Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:14.570576    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:53.870617949Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:14.570576    5644 command_runner.go:130] > Feb 10 12:21:53 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:21:53Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/016ad4d720680495a67c18e1390ee8683611cb3b95ee6ded4cb744a3ca3655d5/resolv.conf as [nameserver 172.29.128.1]"
	I0210 12:23:14.570576    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:21:54Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ae5696c38864ac99a03d829d566b6a832f69523032ff0af02300ad95789380ce/resolv.conf as [nameserver 172.29.128.1]"
	I0210 12:23:14.570576    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:21:54Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/9c3e574a334980f77de3f0fd8bd1af8a3597c32a3c5f9d94fec925b6f3c76d4e/resolv.conf as [nameserver 172.29.128.1]"
	I0210 12:23:14.570576    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:54.054858919Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0210 12:23:14.571114    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:54.055041728Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0210 12:23:14.571163    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:54.055266639Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:14.571209    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:54.055571653Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:14.571209    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:54.351555902Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0210 12:23:14.571209    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:54.351618605Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0210 12:23:14.571209    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:54.351631706Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:14.571209    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:54.351796314Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:14.571209    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:54.356626447Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0210 12:23:14.571209    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:54.356728951Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0210 12:23:14.571209    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:54.356756153Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:14.571209    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:54.357270278Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:14.571209    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:54.400696468Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0210 12:23:14.571209    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:54.400993282Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0210 12:23:14.571209    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:54.401148890Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:14.571209    5644 command_runner.go:130] > Feb 10 12:21:54 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:54.401585911Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:14.571209    5644 command_runner.go:130] > Feb 10 12:21:58 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:21:58Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	I0210 12:23:14.571209    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:59.586724531Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0210 12:23:14.571209    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:59.586851637Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0210 12:23:14.571209    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:59.586897839Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:14.571209    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:59.587096549Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:14.571209    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:59.622779367Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0210 12:23:14.571209    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:59.622857870Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0210 12:23:14.571209    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:59.622884072Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:14.571209    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:59.623098482Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:14.571209    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:59.638867841Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0210 12:23:14.571209    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:59.639329463Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0210 12:23:14.571209    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:59.639489271Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:14.571209    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:59.639867989Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:14.571209    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:21:59Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b9afdceca416df5c16e84b3e0c78f25ca1fa77413c28fe48e1fe1aceabb91c44/resolv.conf as [nameserver 172.29.128.1]"
	I0210 12:23:14.571209    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:21:59Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/bd998e6ebeb39ea743489a0e4d48c282d6e8da289cd8341c4c5099d9836e6f73/resolv.conf as [nameserver 172.29.128.1]"
	I0210 12:23:14.571209    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:59.937150501Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0210 12:23:14.571209    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:59.937256006Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0210 12:23:14.571209    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:59.937275107Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:14.571209    5644 command_runner.go:130] > Feb 10 12:21:59 multinode-032400 dockerd[1108]: time="2025-02-10T12:21:59.937381912Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:14.571209    5644 command_runner.go:130] > Feb 10 12:22:00 multinode-032400 dockerd[1108]: time="2025-02-10T12:22:00.025525655Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0210 12:23:14.571209    5644 command_runner.go:130] > Feb 10 12:22:00 multinode-032400 dockerd[1108]: time="2025-02-10T12:22:00.025702264Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0210 12:23:14.571209    5644 command_runner.go:130] > Feb 10 12:22:00 multinode-032400 dockerd[1108]: time="2025-02-10T12:22:00.025767267Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:14.571209    5644 command_runner.go:130] > Feb 10 12:22:00 multinode-032400 dockerd[1108]: time="2025-02-10T12:22:00.026050381Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:14.571209    5644 command_runner.go:130] > Feb 10 12:22:00 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:22:00Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e5b54589cf0f1effd7254987c6ce12359f402596b5494fa5c8f0bf296c219b89/resolv.conf as [nameserver 172.29.128.1]"
	I0210 12:23:14.571209    5644 command_runner.go:130] > Feb 10 12:22:00 multinode-032400 dockerd[1108]: time="2025-02-10T12:22:00.385763898Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0210 12:23:14.571209    5644 command_runner.go:130] > Feb 10 12:22:00 multinode-032400 dockerd[1108]: time="2025-02-10T12:22:00.385836401Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0210 12:23:14.571209    5644 command_runner.go:130] > Feb 10 12:22:00 multinode-032400 dockerd[1108]: time="2025-02-10T12:22:00.385859502Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:14.572164    5644 command_runner.go:130] > Feb 10 12:22:00 multinode-032400 dockerd[1108]: time="2025-02-10T12:22:00.385961307Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:14.572164    5644 command_runner.go:130] > Feb 10 12:22:30 multinode-032400 dockerd[1101]: time="2025-02-10T12:22:30.686630853Z" level=info msg="ignoring event" container=e57ea4c7f300b864e801bda292b196e3a270d6bd3eb9cfac6bf5f66a858f9f7c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0210 12:23:14.572164    5644 command_runner.go:130] > Feb 10 12:22:30 multinode-032400 dockerd[1108]: time="2025-02-10T12:22:30.687686798Z" level=info msg="shim disconnected" id=e57ea4c7f300b864e801bda292b196e3a270d6bd3eb9cfac6bf5f66a858f9f7c namespace=moby
	I0210 12:23:14.572164    5644 command_runner.go:130] > Feb 10 12:22:30 multinode-032400 dockerd[1108]: time="2025-02-10T12:22:30.687806803Z" level=warning msg="cleaning up after shim disconnected" id=e57ea4c7f300b864e801bda292b196e3a270d6bd3eb9cfac6bf5f66a858f9f7c namespace=moby
	I0210 12:23:14.572164    5644 command_runner.go:130] > Feb 10 12:22:30 multinode-032400 dockerd[1108]: time="2025-02-10T12:22:30.687819404Z" level=info msg="cleaning up dead shim" namespace=moby
	I0210 12:23:14.572164    5644 command_runner.go:130] > Feb 10 12:22:44 multinode-032400 dockerd[1108]: time="2025-02-10T12:22:44.147917374Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0210 12:23:14.572164    5644 command_runner.go:130] > Feb 10 12:22:44 multinode-032400 dockerd[1108]: time="2025-02-10T12:22:44.148327693Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0210 12:23:14.572164    5644 command_runner.go:130] > Feb 10 12:22:44 multinode-032400 dockerd[1108]: time="2025-02-10T12:22:44.148398496Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:14.572164    5644 command_runner.go:130] > Feb 10 12:22:44 multinode-032400 dockerd[1108]: time="2025-02-10T12:22:44.150441890Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:14.572164    5644 command_runner.go:130] > Feb 10 12:23:03 multinode-032400 dockerd[1108]: time="2025-02-10T12:23:03.123856000Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0210 12:23:14.572164    5644 command_runner.go:130] > Feb 10 12:23:03 multinode-032400 dockerd[1108]: time="2025-02-10T12:23:03.127018550Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0210 12:23:14.572164    5644 command_runner.go:130] > Feb 10 12:23:03 multinode-032400 dockerd[1108]: time="2025-02-10T12:23:03.127103354Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:14.572164    5644 command_runner.go:130] > Feb 10 12:23:03 multinode-032400 dockerd[1108]: time="2025-02-10T12:23:03.127439770Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:14.572164    5644 command_runner.go:130] > Feb 10 12:23:03 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:23:03Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e58006549b603113870cd02660dd94559d10ecb0913a1bdd2c81910c3d60a706/resolv.conf as [nameserver 172.29.128.1]"
	I0210 12:23:14.572164    5644 command_runner.go:130] > Feb 10 12:23:03 multinode-032400 dockerd[1108]: time="2025-02-10T12:23:03.402609649Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0210 12:23:14.572164    5644 command_runner.go:130] > Feb 10 12:23:03 multinode-032400 dockerd[1108]: time="2025-02-10T12:23:03.402766755Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0210 12:23:14.572164    5644 command_runner.go:130] > Feb 10 12:23:03 multinode-032400 dockerd[1108]: time="2025-02-10T12:23:03.402783356Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:14.572164    5644 command_runner.go:130] > Feb 10 12:23:03 multinode-032400 dockerd[1108]: time="2025-02-10T12:23:03.402879760Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:14.572164    5644 command_runner.go:130] > Feb 10 12:23:03 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:23:03Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0f8bbdd2fed29af0f92968c554c574d411a6dcf8a8d801926012379ff2a258af/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	I0210 12:23:14.572164    5644 command_runner.go:130] > Feb 10 12:23:03 multinode-032400 dockerd[1108]: time="2025-02-10T12:23:03.617663803Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0210 12:23:14.572164    5644 command_runner.go:130] > Feb 10 12:23:03 multinode-032400 dockerd[1108]: time="2025-02-10T12:23:03.618733746Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0210 12:23:14.572164    5644 command_runner.go:130] > Feb 10 12:23:03 multinode-032400 dockerd[1108]: time="2025-02-10T12:23:03.619050158Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:14.572164    5644 command_runner.go:130] > Feb 10 12:23:03 multinode-032400 dockerd[1108]: time="2025-02-10T12:23:03.619308269Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:14.572164    5644 command_runner.go:130] > Feb 10 12:23:03 multinode-032400 dockerd[1108]: time="2025-02-10T12:23:03.840758177Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0210 12:23:14.572164    5644 command_runner.go:130] > Feb 10 12:23:03 multinode-032400 dockerd[1108]: time="2025-02-10T12:23:03.840943685Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0210 12:23:14.572164    5644 command_runner.go:130] > Feb 10 12:23:03 multinode-032400 dockerd[1108]: time="2025-02-10T12:23:03.840973286Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0210 12:23:14.572164    5644 command_runner.go:130] > Feb 10 12:23:03 multinode-032400 dockerd[1108]: time="2025-02-10T12:23:03.848920902Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
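The journal above also shows cri-dockerd rewriting each container's resolv.conf: pods that inherit the node's DNS get the Hyper-V host resolver (172.29.128.1), while the final rewrite hands a cluster-DNS pod the kube-dns service IP plus the cluster search path (10.96.0.10, the *.cluster.local domains, ndots:5). A small illustrative Go helper that produces the same shape; renderResolvConf is not cri-dockerd code, just a sketch of the format being written:

    package main

    import (
    	"fmt"
    	"strings"
    )

    // renderResolvConf is an illustrative helper, not part of cri-dockerd.
    // It renders the nameserver/search/options block seen in the rewrites above.
    func renderResolvConf(nameservers, searches, options []string) string {
    	var b strings.Builder
    	for _, ns := range nameservers {
    		fmt.Fprintf(&b, "nameserver %s\n", ns)
    	}
    	if len(searches) > 0 {
    		fmt.Fprintf(&b, "search %s\n", strings.Join(searches, " "))
    	}
    	if len(options) > 0 {
    		fmt.Fprintf(&b, "options %s\n", strings.Join(options, " "))
    	}
    	return b.String()
    }

    func main() {
    	// The values from the last rewrite in the journal above.
    	fmt.Print(renderResolvConf(
    		[]string{"10.96.0.10"},
    		[]string{"default.svc.cluster.local", "svc.cluster.local", "cluster.local"},
    		[]string{"ndots:5"},
    	))
    }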
	I0210 12:23:17.102810    5644 type.go:204] "Request Body" body=""
	I0210 12:23:17.102894    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods
	I0210 12:23:17.102894    5644 round_trippers.go:476] Request Headers:
	I0210 12:23:17.103022    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:23:17.103022    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:23:17.107460    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:23:17.107460    5644 round_trippers.go:584] Response Headers:
	I0210 12:23:17.107551    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:23:17.107551    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:23:17.107551    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:23:17 GMT
	I0210 12:23:17.107551    5644 round_trippers.go:587]     Audit-Id: c55d65cc-0aaf-4210-b363-41902862e56b
	I0210 12:23:17.107551    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:23:17.107551    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:23:17.112916    5644 type.go:204] "Response Body" body=<
		00000000  6b 38 73 00 0a 0d 0a 02  76 31 12 07 50 6f 64 4c  |k8s.....v1..PodL|
		00000010  69 73 74 12 e8 eb 03 0a  0a 0a 00 12 04 31 39 38  |ist..........198|
		00000020  35 1a 00 12 c5 28 0a af  19 0a 18 63 6f 72 65 64  |5....(.....cored|
		00000030  6e 73 2d 36 36 38 64 36  62 66 39 62 63 2d 77 38  |ns-668d6bf9bc-w8|
		00000040  72 72 39 12 13 63 6f 72  65 64 6e 73 2d 36 36 38  |rr9..coredns-668|
		00000050  64 36 62 66 39 62 63 2d  1a 0b 6b 75 62 65 2d 73  |d6bf9bc-..kube-s|
		00000060  79 73 74 65 6d 22 00 2a  24 65 34 35 61 33 37 62  |ystem".*$e45a37b|
		00000070  66 2d 65 37 64 61 2d 34  31 32 39 2d 62 62 37 65  |f-e7da-4129-bb7e|
		00000080  2d 38 62 65 37 64 62 65  39 33 65 30 39 32 04 31  |-8be7dbe93e092.1|
		00000090  39 37 32 38 00 42 08 08  92 d4 a7 bd 06 10 00 5a  |9728.B.........Z|
		000000a0  13 0a 07 6b 38 73 2d 61  70 70 12 08 6b 75 62 65  |...k8s-app..kube|
		000000b0  2d 64 6e 73 5a 1f 0a 11  70 6f 64 2d 74 65 6d 70  |-dnsZ...pod-temp|
		000000c0  6c 61 74 65 2d 68 61 73  68 12 0a 36 36 38 64 36  |late-hash..668d [truncated 309986 chars]
	 >
	I0210 12:23:17.112975    5644 system_pods.go:59] 12 kube-system pods found
	I0210 12:23:17.113498    5644 system_pods.go:61] "coredns-668d6bf9bc-w8rr9" [e45a37bf-e7da-4129-bb7e-8be7dbe93e09] Running
	I0210 12:23:17.113558    5644 system_pods.go:61] "etcd-multinode-032400" [26d4110f-9a39-48de-a433-567a75789be0] Running
	I0210 12:23:17.113558    5644 system_pods.go:61] "kindnet-c2mb8" [09de881b-fbc4-4a8f-b8d7-c46dd3f010ad] Running
	I0210 12:23:17.113558    5644 system_pods.go:61] "kindnet-jcmlf" [2b9d8f00-2dd6-42d2-a26d-7ddda6acb204] Running
	I0210 12:23:17.113558    5644 system_pods.go:61] "kindnet-tv6gk" [f85e1e17-24a8-4e55-bd17-95f9ce89e3ea] Running
	I0210 12:23:17.113595    5644 system_pods.go:61] "kube-apiserver-multinode-032400" [9e688aae-09da-4b5c-ba4d-de6aa64cb34e] Running
	I0210 12:23:17.113595    5644 system_pods.go:61] "kube-controller-manager-multinode-032400" [c3e50540-dd66-47a0-a433-622df59fb441] Running
	I0210 12:23:17.113595    5644 system_pods.go:61] "kube-proxy-rrh82" [9ad7f281-f022-4f3b-b206-39ce42713cf9] Running
	I0210 12:23:17.113595    5644 system_pods.go:61] "kube-proxy-tbtqd" [bdf8cb10-05be-460b-a9c6-bc51ea884268] Running
	I0210 12:23:17.113595    5644 system_pods.go:61] "kube-proxy-xltxj" [9a5e58bc-54b1-43b9-a889-0d50d435af83] Running
	I0210 12:23:17.113595    5644 system_pods.go:61] "kube-scheduler-multinode-032400" [cfbcbf37-65ce-4b21-9f26-18dafc6e4480] Running
	I0210 12:23:17.113595    5644 system_pods.go:61] "storage-provisioner" [c5a7f602-d41a-4d9c-8fb3-5cf1bb41aca0] Running
	I0210 12:23:17.113595    5644 system_pods.go:74] duration metric: took 3.7301832s to wait for pod list to return data ...
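The Accept header above requests application/vnd.kubernetes.protobuf, so the dumped body is not JSON: the leading bytes 6b 38 73 00 are the "k8s\x00" magic of the Kubernetes Protobuf envelope, followed by a runtime.Unknown that wraps the v1 PodList and its type information. A sketch that decodes such a capture with the client-go scheme, assuming the raw bytes were saved to a hypothetical body.bin:

    package main

    import (
    	"fmt"
    	"os"

    	"k8s.io/apimachinery/pkg/runtime"
    	"k8s.io/apimachinery/pkg/runtime/serializer/protobuf"
    	"k8s.io/client-go/kubernetes/scheme"
    )

    func main() {
    	// body.bin is a hypothetical capture of the raw response bytes.
    	raw, err := os.ReadFile("body.bin")
    	if err != nil {
    		panic(err)
    	}

    	// Kubernetes protobuf responses start with the 4-byte magic "k8s\x00"
    	// (the "6b 38 73 00" at offset 0 of the dump above).
    	if len(raw) < 4 || string(raw[:4]) != "k8s\x00" {
    		panic("not a Kubernetes protobuf envelope")
    	}

    	// The serializer understands the envelope, so the payload decodes as-is.
    	s := protobuf.NewSerializer(scheme.Scheme, scheme.Scheme)
    	obj, err := runtime.Decode(s, raw)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("decoded a %T\n", obj) // e.g. *v1.PodList for the request above
    }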
	I0210 12:23:17.113652    5644 default_sa.go:34] waiting for default service account to be created ...
	I0210 12:23:17.113751    5644 type.go:204] "Request Body" body=""
	I0210 12:23:17.113842    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/default/serviceaccounts
	I0210 12:23:17.113871    5644 round_trippers.go:476] Request Headers:
	I0210 12:23:17.113926    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:23:17.113926    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:23:17.117684    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:23:17.117684    5644 round_trippers.go:584] Response Headers:
	I0210 12:23:17.117767    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:23:17.117767    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:23:17.117767    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:23:17.117767    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:23:17.117767    5644 round_trippers.go:587]     Content-Length: 129
	I0210 12:23:17.117767    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:23:17 GMT
	I0210 12:23:17.117767    5644 round_trippers.go:587]     Audit-Id: e369e7f6-226a-465d-a481-2c18c67e8037
	I0210 12:23:17.117843    5644 type.go:204] "Response Body" body=<
		00000000  6b 38 73 00 0a 18 0a 02  76 31 12 12 53 65 72 76  |k8s.....v1..Serv|
		00000010  69 63 65 41 63 63 6f 75  6e 74 4c 69 73 74 12 5d  |iceAccountList.]|
		00000020  0a 0a 0a 00 12 04 31 39  38 35 1a 00 12 4f 0a 4d  |......1985...O.M|
		00000030  0a 07 64 65 66 61 75 6c  74 12 00 1a 07 64 65 66  |..default....def|
		00000040  61 75 6c 74 22 00 2a 24  34 61 64 66 62 64 33 35  |ault".*$4adfbd35|
		00000050  2d 66 38 62 36 2d 34 36  30 66 2d 38 38 65 39 2d  |-f8b6-460f-88e9-|
		00000060  65 37 34 63 34 36 62 30  32 66 30 65 32 03 33 33  |e74c46b02f0e2.33|
		00000070  36 38 00 42 08 08 90 d4  a7 bd 06 10 00 1a 00 22  |68.B..........."|
		00000080  00                                                |.|
	 >
	I0210 12:23:17.117907    5644 default_sa.go:45] found service account: "default"
	I0210 12:23:17.117907    5644 default_sa.go:55] duration metric: took 4.255ms for default service account to be created ...
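
The wait above is the default_sa check: poll the ServiceAccounts list in the default namespace until "default" appears. A minimal client-go sketch of the same probe (illustrative only; minikube's default_sa.go differs in its backoff and error handling):

    package main

    import (
        "context"
        "fmt"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitForDefaultSA polls until the "default" ServiceAccount exists,
    // mirroring the GET /api/v1/namespaces/default/serviceaccounts loop above.
    func waitForDefaultSA(cs *kubernetes.Clientset, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            sas, err := cs.CoreV1().ServiceAccounts("default").List(context.TODO(), metav1.ListOptions{})
            if err == nil {
                for _, sa := range sas.Items {
                    if sa.Name == "default" {
                        return nil // found service account: "default"
                    }
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("default service account not created within %v", timeout)
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        if err := waitForDefaultSA(cs, 2*time.Minute); err != nil {
            panic(err)
        }
    }
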
	I0210 12:23:17.117907    5644 system_pods.go:116] waiting for k8s-apps to be running ...
	I0210 12:23:17.117974    5644 type.go:204] "Request Body" body=""
	I0210 12:23:17.118046    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods
	I0210 12:23:17.118046    5644 round_trippers.go:476] Request Headers:
	I0210 12:23:17.118046    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:23:17.118109    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:23:17.123079    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:23:17.123079    5644 round_trippers.go:584] Response Headers:
	I0210 12:23:17.123079    5644 round_trippers.go:587]     Audit-Id: 73f9716e-588b-4572-98bf-a3a721435868
	I0210 12:23:17.123079    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:23:17.123079    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:23:17.123079    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:23:17.123079    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:23:17.123079    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:23:17 GMT
	I0210 12:23:17.124466    5644 type.go:204] "Response Body" body=<
		00000000  6b 38 73 00 0a 0d 0a 02  76 31 12 07 50 6f 64 4c  |k8s.....v1..PodL|
		00000010  69 73 74 12 e8 eb 03 0a  0a 0a 00 12 04 31 39 38  |ist..........198|
		00000020  35 1a 00 12 c5 28 0a af  19 0a 18 63 6f 72 65 64  |5....(.....cored|
		00000030  6e 73 2d 36 36 38 64 36  62 66 39 62 63 2d 77 38  |ns-668d6bf9bc-w8|
		00000040  72 72 39 12 13 63 6f 72  65 64 6e 73 2d 36 36 38  |rr9..coredns-668|
		00000050  64 36 62 66 39 62 63 2d  1a 0b 6b 75 62 65 2d 73  |d6bf9bc-..kube-s|
		00000060  79 73 74 65 6d 22 00 2a  24 65 34 35 61 33 37 62  |ystem".*$e45a37b|
		00000070  66 2d 65 37 64 61 2d 34  31 32 39 2d 62 62 37 65  |f-e7da-4129-bb7e|
		00000080  2d 38 62 65 37 64 62 65  39 33 65 30 39 32 04 31  |-8be7dbe93e092.1|
		00000090  39 37 32 38 00 42 08 08  92 d4 a7 bd 06 10 00 5a  |9728.B.........Z|
		000000a0  13 0a 07 6b 38 73 2d 61  70 70 12 08 6b 75 62 65  |...k8s-app..kube|
		000000b0  2d 64 6e 73 5a 1f 0a 11  70 6f 64 2d 74 65 6d 70  |-dnsZ...pod-temp|
		000000c0  6c 61 74 65 2d 68 61 73  68 12 0a 36 36 38 64 36  |late-hash..668d [truncated 309986 chars]
	 >
	I0210 12:23:17.126124    5644 system_pods.go:86] 12 kube-system pods found
	I0210 12:23:17.126124    5644 system_pods.go:89] "coredns-668d6bf9bc-w8rr9" [e45a37bf-e7da-4129-bb7e-8be7dbe93e09] Running
	I0210 12:23:17.126124    5644 system_pods.go:89] "etcd-multinode-032400" [26d4110f-9a39-48de-a433-567a75789be0] Running
	I0210 12:23:17.126186    5644 system_pods.go:89] "kindnet-c2mb8" [09de881b-fbc4-4a8f-b8d7-c46dd3f010ad] Running
	I0210 12:23:17.126186    5644 system_pods.go:89] "kindnet-jcmlf" [2b9d8f00-2dd6-42d2-a26d-7ddda6acb204] Running
	I0210 12:23:17.126186    5644 system_pods.go:89] "kindnet-tv6gk" [f85e1e17-24a8-4e55-bd17-95f9ce89e3ea] Running
	I0210 12:23:17.126186    5644 system_pods.go:89] "kube-apiserver-multinode-032400" [9e688aae-09da-4b5c-ba4d-de6aa64cb34e] Running
	I0210 12:23:17.126186    5644 system_pods.go:89] "kube-controller-manager-multinode-032400" [c3e50540-dd66-47a0-a433-622df59fb441] Running
	I0210 12:23:17.126186    5644 system_pods.go:89] "kube-proxy-rrh82" [9ad7f281-f022-4f3b-b206-39ce42713cf9] Running
	I0210 12:23:17.126250    5644 system_pods.go:89] "kube-proxy-tbtqd" [bdf8cb10-05be-460b-a9c6-bc51ea884268] Running
	I0210 12:23:17.126250    5644 system_pods.go:89] "kube-proxy-xltxj" [9a5e58bc-54b1-43b9-a889-0d50d435af83] Running
	I0210 12:23:17.126250    5644 system_pods.go:89] "kube-scheduler-multinode-032400" [cfbcbf37-65ce-4b21-9f26-18dafc6e4480] Running
	I0210 12:23:17.126250    5644 system_pods.go:89] "storage-provisioner" [c5a7f602-d41a-4d9c-8fb3-5cf1bb41aca0] Running
	I0210 12:23:17.126250    5644 system_pods.go:126] duration metric: took 8.3427ms to wait for k8s-apps to be running ...
	I0210 12:23:17.126250    5644 system_svc.go:44] waiting for kubelet service to be running ....
	I0210 12:23:17.134037    5644 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0210 12:23:17.157445    5644 system_svc.go:56] duration metric: took 31.1949ms WaitForService to wait for kubelet
	I0210 12:23:17.157445    5644 kubeadm.go:582] duration metric: took 1m13.9102392s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0210 12:23:17.157445    5644 node_conditions.go:102] verifying NodePressure condition ...
	I0210 12:23:17.157445    5644 type.go:204] "Request Body" body=""
	I0210 12:23:17.157445    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes
	I0210 12:23:17.157445    5644 round_trippers.go:476] Request Headers:
	I0210 12:23:17.157445    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:23:17.157445    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:23:17.161324    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:23:17.161408    5644 round_trippers.go:584] Response Headers:
	I0210 12:23:17.161408    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:23:17.161408    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:23:17.161408    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:23:17 GMT
	I0210 12:23:17.161408    5644 round_trippers.go:587]     Audit-Id: ace23cdd-2fc5-4cbf-ad50-eb4ff866d35a
	I0210 12:23:17.161408    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:23:17.161408    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:23:17.161408    5644 type.go:204] "Response Body" body=<
		00000000  6b 38 73 00 0a 0e 0a 02  76 31 12 08 4e 6f 64 65  |k8s.....v1..Node|
		00000010  4c 69 73 74 12 ad 62 0a  0a 0a 00 12 04 31 39 38  |List..b......198|
		00000020  35 1a 00 12 d4 24 0a f8  11 0a 10 6d 75 6c 74 69  |5....$.....multi|
		00000030  6e 6f 64 65 2d 30 33 32  34 30 30 12 00 1a 00 22  |node-032400...."|
		00000040  00 2a 24 61 30 38 30 31  35 65 66 2d 65 35 32 30  |.*$a08015ef-e520|
		00000050  2d 34 31 63 62 2d 61 65  61 30 2d 31 64 39 63 38  |-41cb-aea0-1d9c8|
		00000060  31 65 30 31 62 32 36 32  04 31 39 33 35 38 00 42  |1e01b262.19358.B|
		00000070  08 08 86 d4 a7 bd 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000080  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000090  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		000000a0  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000b0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000c0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/ar [truncated 61299 chars]
	 >
	I0210 12:23:17.162065    5644 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0210 12:23:17.162065    5644 node_conditions.go:123] node cpu capacity is 2
	I0210 12:23:17.162065    5644 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0210 12:23:17.162065    5644 node_conditions.go:123] node cpu capacity is 2
	I0210 12:23:17.162065    5644 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0210 12:23:17.162065    5644 node_conditions.go:123] node cpu capacity is 2
	I0210 12:23:17.162065    5644 node_conditions.go:105] duration metric: took 4.6198ms to run NodePressure ...
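
The NodePressure pass above lists the nodes once and reads two capacity figures per node. In client-go terms it reduces to something like the following (a sketch, meant to be paired with the clientset setup from the previous snippet; not minikube's exact node_conditions.go code):

    package nodecheck

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // printNodeCapacities reports the same two numbers logged per node above:
    // ephemeral-storage capacity and CPU count.
    func printNodeCapacities(ctx context.Context, cs *kubernetes.Clientset) error {
        nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
        if err != nil {
            return err
        }
        for _, n := range nodes.Items {
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            fmt.Printf("node %s: ephemeral storage %s, cpu %s\n",
                n.Name, storage.String(), cpu.String())
        }
        return nil
    }
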
	I0210 12:23:17.162065    5644 start.go:241] waiting for startup goroutines ...
	I0210 12:23:17.162065    5644 start.go:246] waiting for cluster config update ...
	I0210 12:23:17.162065    5644 start.go:255] writing updated cluster config ...
	I0210 12:23:17.168681    5644 out.go:201] 
	I0210 12:23:17.172786    5644 config.go:182] Loaded profile config "ha-335100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0210 12:23:17.183532    5644 config.go:182] Loaded profile config "multinode-032400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0210 12:23:17.183532    5644 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-032400\config.json ...
	I0210 12:23:17.189676    5644 out.go:177] * Starting "multinode-032400-m02" worker node in "multinode-032400" cluster
	I0210 12:23:17.192084    5644 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime docker
	I0210 12:23:17.192084    5644 cache.go:56] Caching tarball of preloaded images
	I0210 12:23:17.192084    5644 preload.go:172] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0210 12:23:17.192084    5644 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on docker
	I0210 12:23:17.192084    5644 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-032400\config.json ...
	I0210 12:23:17.195181    5644 start.go:360] acquireMachinesLock for multinode-032400-m02: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0210 12:23:17.195285    5644 start.go:364] duration metric: took 103.8µs to acquireMachinesLock for "multinode-032400-m02"
	I0210 12:23:17.195285    5644 start.go:96] Skipping create...Using existing machine configuration
	I0210 12:23:17.195285    5644 fix.go:54] fixHost starting: m02
	I0210 12:23:17.196188    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400-m02 ).state
	I0210 12:23:19.198848    5644 main.go:141] libmachine: [stdout =====>] : Off
	
	I0210 12:23:19.198924    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:23:19.198924    5644 fix.go:112] recreateIfNeeded on multinode-032400-m02: state=Stopped err=<nil>
	W0210 12:23:19.198924    5644 fix.go:138] unexpected machine state, will restart: <nil>
	I0210 12:23:19.209028    5644 out.go:177] * Restarting existing hyperv VM for "multinode-032400-m02" ...
	I0210 12:23:19.211192    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-032400-m02
	I0210 12:23:22.083127    5644 main.go:141] libmachine: [stdout =====>] : 
	I0210 12:23:22.083152    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:23:22.083201    5644 main.go:141] libmachine: Waiting for host to start...
	I0210 12:23:22.083201    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400-m02 ).state
	I0210 12:23:24.140742    5644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:23:24.140742    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:23:24.140841    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 12:23:26.441765    5644 main.go:141] libmachine: [stdout =====>] : 
	I0210 12:23:26.441765    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:23:27.442501    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400-m02 ).state
	I0210 12:23:29.431689    5644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:23:29.432150    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:23:29.432150    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 12:23:31.738501    5644 main.go:141] libmachine: [stdout =====>] : 
	I0210 12:23:31.738501    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:23:32.739670    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400-m02 ).state
	I0210 12:23:34.735303    5644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:23:34.735303    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:23:34.735303    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 12:23:37.045458    5644 main.go:141] libmachine: [stdout =====>] : 
	I0210 12:23:37.045458    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:23:38.046750    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400-m02 ).state
	I0210 12:23:40.057433    5644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:23:40.057433    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:23:40.057433    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 12:23:42.326568    5644 main.go:141] libmachine: [stdout =====>] : 
	I0210 12:23:42.326886    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:23:43.327458    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400-m02 ).state
	I0210 12:23:45.342394    5644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:23:45.342440    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:23:45.342440    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 12:23:47.878260    5644 main.go:141] libmachine: [stdout =====>] : 172.29.131.248
	
	I0210 12:23:47.878260    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:23:47.880151    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400-m02 ).state
	I0210 12:23:49.843034    5644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:23:49.843034    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:23:49.843034    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 12:23:52.188646    5644 main.go:141] libmachine: [stdout =====>] : 172.29.131.248
	
	I0210 12:23:52.188646    5644 main.go:141] libmachine: [stderr =====>] : 
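
Each [executing ==>] / [stdout =====>] pair above is one PowerShell round-trip: the driver alternates Get-VM state queries with adapter-IP queries until DHCP hands the guest an address (empty stdout means no lease yet). The shape of that loop in Go (a sketch of the pattern; the cmdlet expressions are copied from the log, the timings are illustrative):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // psInvoke runs one Hyper-V cmdlet expression through PowerShell, the same
    // way the [executing ==>] lines above do.
    func psInvoke(expr string) (string, error) {
        out, err := exec.Command(
            `C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
            "-NoProfile", "-NonInteractive", expr).Output()
        return strings.TrimSpace(string(out)), err
    }

    // waitForIP polls until the VM is Running and its first network adapter
    // reports an address; empty stdout means DHCP hasn't assigned one yet.
    func waitForIP(vm string, timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            state, err := psInvoke(fmt.Sprintf(`( Hyper-V\Get-VM %s ).state`, vm))
            if err != nil {
                return "", err
            }
            if state == "Running" {
                ip, err := psInvoke(fmt.Sprintf(
                    `(( Hyper-V\Get-VM %s ).networkadapters[0]).ipaddresses[0]`, vm))
                if err != nil {
                    return "", err
                }
                if ip != "" {
                    return ip, nil
                }
            }
            time.Sleep(time.Second)
        }
        return "", fmt.Errorf("timed out waiting for %s to report an IP", vm)
    }

    func main() {
        ip, err := waitForIP("multinode-032400-m02", 5*time.Minute)
        if err != nil {
            panic(err)
        }
        fmt.Println("VM IP:", ip)
    }
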
	I0210 12:23:52.188646    5644 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-032400\config.json ...
	I0210 12:23:52.191023    5644 machine.go:93] provisionDockerMachine start ...
	I0210 12:23:52.191023    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400-m02 ).state
	I0210 12:23:54.195898    5644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:23:54.195898    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:23:54.195898    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 12:23:56.541564    5644 main.go:141] libmachine: [stdout =====>] : 172.29.131.248
	
	I0210 12:23:56.541639    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:23:56.545563    5644 main.go:141] libmachine: Using SSH client type: native
	I0210 12:23:56.545850    5644 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xde6ba0] 0xde96e0 <nil>  [] 0s} 172.29.131.248 22 <nil> <nil>}
	I0210 12:23:56.545850    5644 main.go:141] libmachine: About to run SSH command:
	hostname
	I0210 12:23:56.683557    5644 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0210 12:23:56.683557    5644 buildroot.go:166] provisioning hostname "multinode-032400-m02"
	I0210 12:23:56.683557    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400-m02 ).state
	I0210 12:23:58.663919    5644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:23:58.664349    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:23:58.664349    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 12:24:01.014069    5644 main.go:141] libmachine: [stdout =====>] : 172.29.131.248
	
	I0210 12:24:01.015071    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:24:01.019435    5644 main.go:141] libmachine: Using SSH client type: native
	I0210 12:24:01.020254    5644 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xde6ba0] 0xde96e0 <nil>  [] 0s} 172.29.131.248 22 <nil> <nil>}
	I0210 12:24:01.020254    5644 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-032400-m02 && echo "multinode-032400-m02" | sudo tee /etc/hostname
	I0210 12:24:01.189968    5644 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-032400-m02
	
	I0210 12:24:01.189968    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400-m02 ).state
	I0210 12:24:03.145362    5644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:24:03.145362    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:24:03.145362    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 12:24:05.477216    5644 main.go:141] libmachine: [stdout =====>] : 172.29.131.248
	
	I0210 12:24:05.477353    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:24:05.480851    5644 main.go:141] libmachine: Using SSH client type: native
	I0210 12:24:05.481493    5644 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xde6ba0] 0xde96e0 <nil>  [] 0s} 172.29.131.248 22 <nil> <nil>}
	I0210 12:24:05.481493    5644 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-032400-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-032400-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-032400-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0210 12:24:05.628216    5644 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0210 12:24:05.628216    5644 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0210 12:24:05.628216    5644 buildroot.go:174] setting up certificates
	I0210 12:24:05.628216    5644 provision.go:84] configureAuth start
	I0210 12:24:05.628216    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400-m02 ).state
	I0210 12:24:07.643337    5644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:24:07.643337    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:24:07.643337    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 12:24:09.948691    5644 main.go:141] libmachine: [stdout =====>] : 172.29.131.248
	
	I0210 12:24:09.948802    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:24:09.948802    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400-m02 ).state
	I0210 12:24:11.915658    5644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:24:11.915713    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:24:11.915713    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 12:24:14.276609    5644 main.go:141] libmachine: [stdout =====>] : 172.29.131.248
	
	I0210 12:24:14.277325    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:24:14.277325    5644 provision.go:143] copyHostCerts
	I0210 12:24:14.277496    5644 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0210 12:24:14.277708    5644 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0210 12:24:14.277708    5644 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0210 12:24:14.278095    5644 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0210 12:24:14.279089    5644 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0210 12:24:14.279293    5644 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0210 12:24:14.279370    5644 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0210 12:24:14.279596    5644 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1675 bytes)
	I0210 12:24:14.280557    5644 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0210 12:24:14.280904    5644 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0210 12:24:14.280904    5644 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0210 12:24:14.281246    5644 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0210 12:24:14.282054    5644 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-032400-m02 san=[127.0.0.1 172.29.131.248 localhost minikube multinode-032400-m02]
	I0210 12:24:14.642218    5644 provision.go:177] copyRemoteCerts
	I0210 12:24:14.650320    5644 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0210 12:24:14.650320    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400-m02 ).state
	I0210 12:24:16.615542    5644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:24:16.616001    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:24:16.616114    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 12:24:18.962964    5644 main.go:141] libmachine: [stdout =====>] : 172.29.131.248
	
	I0210 12:24:18.962964    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:24:18.963767    5644 sshutil.go:53] new ssh client: &{IP:172.29.131.248 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-032400-m02\id_rsa Username:docker}
	I0210 12:24:19.076516    5644 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.4261467s)
	I0210 12:24:19.076516    5644 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0210 12:24:19.076516    5644 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0210 12:24:19.122777    5644 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0210 12:24:19.123202    5644 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0210 12:24:19.166902    5644 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0210 12:24:19.166902    5644 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0210 12:24:19.212748    5644 provision.go:87] duration metric: took 13.5843137s to configureAuth
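
configureAuth above regenerates the machine's server certificate against the local CA, with the SAN set from the provision.go:117 line (127.0.0.1, 172.29.131.248, localhost, minikube, multinode-032400-m02). A compact crypto/x509 sketch of that issuance step (assumes the CA cert and key are already parsed from ca.pem / ca-key.pem; not minikube's exact code):

    package certs

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "time"
    )

    // signServerCert issues a TLS server certificate from a CA with the given
    // organization and SAN entries, returning the PEM-encoded cert and its key.
    func signServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey,
        org string, ips []net.IP, dns []string) ([]byte, *rsa.PrivateKey, error) {

        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{org}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(10, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  ips, // e.g. 127.0.0.1, 172.29.131.248
            DNSNames:     dns, // e.g. localhost, minikube, multinode-032400-m02
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
        if err != nil {
            return nil, nil, err
        }
        return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), key, nil
    }
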
	I0210 12:24:19.212797    5644 buildroot.go:189] setting minikube options for container-runtime
	I0210 12:24:19.213726    5644 config.go:182] Loaded profile config "multinode-032400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0210 12:24:19.213829    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400-m02 ).state
	I0210 12:24:21.161961    5644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:24:21.161961    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:24:21.162095    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 12:24:23.516042    5644 main.go:141] libmachine: [stdout =====>] : 172.29.131.248
	
	I0210 12:24:23.516042    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:24:23.521196    5644 main.go:141] libmachine: Using SSH client type: native
	I0210 12:24:23.521696    5644 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xde6ba0] 0xde96e0 <nil>  [] 0s} 172.29.131.248 22 <nil> <nil>}
	I0210 12:24:23.521696    5644 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0210 12:24:23.664239    5644 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0210 12:24:23.664367    5644 buildroot.go:70] root file system type: tmpfs
	I0210 12:24:23.664464    5644 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0210 12:24:23.664464    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400-m02 ).state
	I0210 12:24:25.600640    5644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:24:25.600640    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:24:25.600640    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 12:24:27.960913    5644 main.go:141] libmachine: [stdout =====>] : 172.29.131.248
	
	I0210 12:24:27.960913    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:24:27.964881    5644 main.go:141] libmachine: Using SSH client type: native
	I0210 12:24:27.965496    5644 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xde6ba0] 0xde96e0 <nil>  [] 0s} 172.29.131.248 22 <nil> <nil>}
	I0210 12:24:27.965496    5644 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.29.129.181"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0210 12:24:28.125594    5644 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.29.129.181
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0210 12:24:28.125594    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400-m02 ).state
	I0210 12:24:30.098342    5644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:24:30.098342    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:24:30.098516    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 12:24:32.444001    5644 main.go:141] libmachine: [stdout =====>] : 172.29.131.248
	
	I0210 12:24:32.444001    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:24:32.447444    5644 main.go:141] libmachine: Using SSH client type: native
	I0210 12:24:32.448225    5644 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xde6ba0] 0xde96e0 <nil>  [] 0s} 172.29.131.248 22 <nil> <nil>}
	I0210 12:24:32.448300    5644 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0210 12:24:34.777404    5644 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0210 12:24:34.777404    5644 machine.go:96] duration metric: took 42.5859079s to provisionDockerMachine
	I0210 12:24:34.777951    5644 start.go:293] postStartSetup for "multinode-032400-m02" (driver="hyperv")
	I0210 12:24:34.777951    5644 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0210 12:24:34.786105    5644 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0210 12:24:34.786105    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400-m02 ).state
	I0210 12:24:36.697243    5644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:24:36.698259    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:24:36.698357    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 12:24:39.033911    5644 main.go:141] libmachine: [stdout =====>] : 172.29.131.248
	
	I0210 12:24:39.033911    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:24:39.033911    5644 sshutil.go:53] new ssh client: &{IP:172.29.131.248 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-032400-m02\id_rsa Username:docker}
	I0210 12:24:39.151164    5644 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.3649569s)
	I0210 12:24:39.160325    5644 ssh_runner.go:195] Run: cat /etc/os-release
	I0210 12:24:39.169851    5644 command_runner.go:130] > NAME=Buildroot
	I0210 12:24:39.169851    5644 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0210 12:24:39.169851    5644 command_runner.go:130] > ID=buildroot
	I0210 12:24:39.169851    5644 command_runner.go:130] > VERSION_ID=2023.02.9
	I0210 12:24:39.169851    5644 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0210 12:24:39.169851    5644 info.go:137] Remote host: Buildroot 2023.02.9
	I0210 12:24:39.169851    5644 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0210 12:24:39.170468    5644 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0210 12:24:39.170614    5644 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\117642.pem -> 117642.pem in /etc/ssl/certs
	I0210 12:24:39.170614    5644 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\117642.pem -> /etc/ssl/certs/117642.pem
	I0210 12:24:39.183845    5644 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0210 12:24:39.202537    5644 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\117642.pem --> /etc/ssl/certs/117642.pem (1708 bytes)
	I0210 12:24:39.249074    5644 start.go:296] duration metric: took 4.471073s for postStartSetup
	I0210 12:24:39.249074    5644 fix.go:56] duration metric: took 1m22.0528783s for fixHost
	I0210 12:24:39.249074    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400-m02 ).state
	I0210 12:24:41.226969    5644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:24:41.227236    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:24:41.227236    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 12:24:43.586713    5644 main.go:141] libmachine: [stdout =====>] : 172.29.131.248
	
	I0210 12:24:43.586713    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:24:43.592978    5644 main.go:141] libmachine: Using SSH client type: native
	I0210 12:24:43.593664    5644 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xde6ba0] 0xde96e0 <nil>  [] 0s} 172.29.131.248 22 <nil> <nil>}
	I0210 12:24:43.593664    5644 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0210 12:24:43.727060    5644 main.go:141] libmachine: SSH cmd err, output: <nil>: 1739190283.741836729
	
	I0210 12:24:43.727060    5644 fix.go:216] guest clock: 1739190283.741836729
	I0210 12:24:43.727060    5644 fix.go:229] Guest: 2025-02-10 12:24:43.741836729 +0000 UTC Remote: 2025-02-10 12:24:39.2490741 +0000 UTC m=+281.750914501 (delta=4.492762629s)
	I0210 12:24:43.727060    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400-m02 ).state
	I0210 12:24:45.724935    5644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:24:45.724935    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:24:45.725736    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 12:24:48.106037    5644 main.go:141] libmachine: [stdout =====>] : 172.29.131.248
	
	I0210 12:24:48.106037    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:24:48.109940    5644 main.go:141] libmachine: Using SSH client type: native
	I0210 12:24:48.110418    5644 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xde6ba0] 0xde96e0 <nil>  [] 0s} 172.29.131.248 22 <nil> <nil>}
	I0210 12:24:48.110418    5644 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1739190283
	I0210 12:24:48.254581    5644 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Feb 10 12:24:43 UTC 2025
	
	I0210 12:24:48.254581    5644 fix.go:236] clock set: Mon Feb 10 12:24:43 UTC 2025
	 (err=<nil>)
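
The guest-clock fix above reads `date +%s.%N` inside the VM, computes the host/guest delta (4.49s here), and snaps the guest clock with `sudo date -s @<epoch>`. The parsing and drift arithmetic look roughly like this (a sketch; minikube's fix.go chooses its own threshold and which timestamp it writes back):

    package clockfix

    import (
        "strconv"
        "strings"
        "time"
    )

    // parseGuestClock converts `date +%s.%N` output such as
    // "1739190283.741836729" into a time.Time.
    func parseGuestClock(out string) (time.Time, error) {
        parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        var nsec int64
        if len(parts) == 2 {
            // normalize the fractional part to exactly 9 digits of nanoseconds
            frac := (parts[1] + "000000000")[:9]
            if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
                return time.Time{}, err
            }
        }
        return time.Unix(sec, nsec), nil
    }

    // driftExceeds reports whether the guest/host delta is larger than max,
    // which is what triggers the `sudo date -s @...` command in the log.
    func driftExceeds(guest, host time.Time, max time.Duration) bool {
        d := guest.Sub(host)
        if d < 0 {
            d = -d
        }
        return d > max
    }
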
	I0210 12:24:48.254581    5644 start.go:83] releasing machines lock for "multinode-032400-m02", held for 1m31.0582855s
	I0210 12:24:48.254581    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400-m02 ).state
	I0210 12:24:50.249813    5644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:24:50.249813    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:24:50.250519    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 12:24:52.657496    5644 main.go:141] libmachine: [stdout =====>] : 172.29.131.248
	
	I0210 12:24:52.657496    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:24:52.660492    5644 out.go:177] * Found network options:
	I0210 12:24:52.662856    5644 out.go:177]   - NO_PROXY=172.29.129.181
	W0210 12:24:52.665365    5644 proxy.go:119] fail to check proxy env: Error ip not in block
	I0210 12:24:52.667392    5644 out.go:177]   - NO_PROXY=172.29.129.181
	W0210 12:24:52.669599    5644 proxy.go:119] fail to check proxy env: Error ip not in block
	W0210 12:24:52.670616    5644 proxy.go:119] fail to check proxy env: Error ip not in block
	I0210 12:24:52.672214    5644 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0210 12:24:52.672214    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400-m02 ).state
	I0210 12:24:52.679577    5644 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0210 12:24:52.679577    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400-m02 ).state
	I0210 12:24:54.731498    5644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:24:54.731498    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:24:54.731578    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 12:24:54.742483    5644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:24:54.742483    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:24:54.742483    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 12:24:57.164963    5644 main.go:141] libmachine: [stdout =====>] : 172.29.131.248
	
	I0210 12:24:57.165122    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:24:57.165464    5644 sshutil.go:53] new ssh client: &{IP:172.29.131.248 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-032400-m02\id_rsa Username:docker}
	I0210 12:24:57.187790    5644 main.go:141] libmachine: [stdout =====>] : 172.29.131.248
	
	I0210 12:24:57.188041    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:24:57.188374    5644 sshutil.go:53] new ssh client: &{IP:172.29.131.248 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-032400-m02\id_rsa Username:docker}
	I0210 12:24:57.258869    5644 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	I0210 12:24:57.258869    5644 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.5866035s)
	W0210 12:24:57.258869    5644 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0210 12:24:57.278221    5644 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0210 12:24:57.278860    5644 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.5992312s)
	W0210 12:24:57.278860    5644 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0210 12:24:57.287302    5644 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0210 12:24:57.317125    5644 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0210 12:24:57.317125    5644 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0210 12:24:57.317229    5644 start.go:495] detecting cgroup driver to use...
	I0210 12:24:57.317229    5644 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0210 12:24:57.350868    5644 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0210 12:24:57.358506    5644 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	W0210 12:24:57.375543    5644 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0210 12:24:57.375605    5644 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0210 12:24:57.395962    5644 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0210 12:24:57.415892    5644 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0210 12:24:57.423871    5644 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0210 12:24:57.449590    5644 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0210 12:24:57.481964    5644 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0210 12:24:57.509276    5644 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0210 12:24:57.536583    5644 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0210 12:24:57.563168    5644 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0210 12:24:57.593200    5644 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0210 12:24:57.620991    5644 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0210 12:24:57.653609    5644 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0210 12:24:57.670590    5644 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0210 12:24:57.670590    5644 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0210 12:24:57.682043    5644 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0210 12:24:57.717472    5644 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
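
The three runs above form a fallback chain: probe the bridge netfilter sysctl, load br_netfilter when the proc entry is missing, then force IPv4 forwarding on. As one helper (a sketch; the log executes these over SSH via ssh_runner, not locally):

    package netprep

    import "os/exec"

    // ensureBridgeNetfilter mirrors the sequence above: if the
    // bridge-nf-call-iptables sysctl can't be read, load br_netfilter,
    // then enable IPv4 forwarding either way.
    func ensureBridgeNetfilter() error {
        if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
            // status 255 above: /proc/sys/net/bridge/... missing, so load the module
            if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
                return err
            }
        }
        return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
    }
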
	I0210 12:24:57.740341    5644 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 12:24:57.920866    5644 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0210 12:24:57.952087    5644 start.go:495] detecting cgroup driver to use...
	I0210 12:24:57.959342    5644 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0210 12:24:57.980137    5644 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0210 12:24:57.980137    5644 command_runner.go:130] > [Unit]
	I0210 12:24:57.980137    5644 command_runner.go:130] > Description=Docker Application Container Engine
	I0210 12:24:57.980137    5644 command_runner.go:130] > Documentation=https://docs.docker.com
	I0210 12:24:57.980137    5644 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0210 12:24:57.980137    5644 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0210 12:24:57.980137    5644 command_runner.go:130] > StartLimitBurst=3
	I0210 12:24:57.980137    5644 command_runner.go:130] > StartLimitIntervalSec=60
	I0210 12:24:57.980137    5644 command_runner.go:130] > [Service]
	I0210 12:24:57.980137    5644 command_runner.go:130] > Type=notify
	I0210 12:24:57.980137    5644 command_runner.go:130] > Restart=on-failure
	I0210 12:24:57.980137    5644 command_runner.go:130] > Environment=NO_PROXY=172.29.129.181
	I0210 12:24:57.980137    5644 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0210 12:24:57.980137    5644 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0210 12:24:57.980137    5644 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0210 12:24:57.980137    5644 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0210 12:24:57.980137    5644 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0210 12:24:57.980137    5644 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0210 12:24:57.980137    5644 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0210 12:24:57.980137    5644 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0210 12:24:57.980137    5644 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0210 12:24:57.980137    5644 command_runner.go:130] > ExecStart=
	I0210 12:24:57.980137    5644 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0210 12:24:57.980137    5644 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0210 12:24:57.980137    5644 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0210 12:24:57.980137    5644 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0210 12:24:57.980137    5644 command_runner.go:130] > LimitNOFILE=infinity
	I0210 12:24:57.980137    5644 command_runner.go:130] > LimitNPROC=infinity
	I0210 12:24:57.980137    5644 command_runner.go:130] > LimitCORE=infinity
	I0210 12:24:57.980137    5644 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0210 12:24:57.980137    5644 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0210 12:24:57.980137    5644 command_runner.go:130] > TasksMax=infinity
	I0210 12:24:57.980137    5644 command_runner.go:130] > TimeoutStartSec=0
	I0210 12:24:57.980137    5644 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0210 12:24:57.980137    5644 command_runner.go:130] > Delegate=yes
	I0210 12:24:57.980137    5644 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0210 12:24:57.980137    5644 command_runner.go:130] > KillMode=process
	I0210 12:24:57.980137    5644 command_runner.go:130] > [Install]
	I0210 12:24:57.980137    5644 command_runner.go:130] > WantedBy=multi-user.target
	I0210 12:24:57.989468    5644 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0210 12:24:58.016992    5644 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0210 12:24:58.055601    5644 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0210 12:24:58.089433    5644 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0210 12:24:58.125959    5644 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0210 12:24:58.187663    5644 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0210 12:24:58.211671    5644 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0210 12:24:58.245861    5644 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0210 12:24:58.256595    5644 ssh_runner.go:195] Run: which cri-dockerd
	I0210 12:24:58.262484    5644 command_runner.go:130] > /usr/bin/cri-dockerd
	I0210 12:24:58.269967    5644 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0210 12:24:58.287861    5644 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
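
"scp memory --> <path>" means the payload never touches the local disk: an in-memory buffer (here the 190-byte cri-docker CNI drop-in) is streamed straight to the guest over the SSH connection. One way to get that effect with golang.org/x/crypto/ssh (an approximation of the observable behavior; minikube's ssh_runner speaks the actual scp protocol):

    package assets

    import (
        "bytes"
        "fmt"

        "golang.org/x/crypto/ssh"
    )

    // pushBytes streams an in-memory buffer to a remote file by piping it
    // into `sudo tee`, which is the effect of the "scp memory --> <path>"
    // lines above.
    func pushBytes(client *ssh.Client, data []byte, remotePath string) error {
        sess, err := client.NewSession()
        if err != nil {
            return err
        }
        defer sess.Close()
        sess.Stdin = bytes.NewReader(data)
        return sess.Run(fmt.Sprintf("sudo tee %s >/dev/null", remotePath))
    }
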
	I0210 12:24:58.326343    5644 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0210 12:24:58.514534    5644 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0210 12:24:58.720409    5644 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0210 12:24:58.720409    5644 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0210 12:24:58.767420    5644 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 12:24:58.952672    5644 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0210 12:25:01.611230    5644 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.658486s)
	I0210 12:25:01.619503    5644 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0210 12:25:01.650291    5644 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0210 12:25:01.683544    5644 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0210 12:25:01.871163    5644 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0210 12:25:02.069288    5644 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 12:25:02.255320    5644 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0210 12:25:02.293599    5644 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0210 12:25:02.326826    5644 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 12:25:02.527366    5644 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
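The cgroup-driver switch above is performed by writing /etc/docker/daemon.json (the log records only its size, 130 bytes) and then bouncing docker and cri-docker in order. A sketch of the same sequence; the JSON body here is an assumption chosen to match the logged "cgroupfs" message, not the exact file minikube wrote:

    # Hypothetical daemon.json body (only its 130-byte size appears in the log).
    echo '{"exec-opts":["native.cgroupdriver=cgroupfs"]}' | sudo tee /etc/docker/daemon.json
    sudo systemctl daemon-reload
    sudo systemctl restart docker
    sudo systemctl enable --now cri-docker.socket
    sudo systemctl restart cri-docker.service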
	I0210 12:25:02.634096    5644 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0210 12:25:02.643249    5644 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0210 12:25:02.651571    5644 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0210 12:25:02.651689    5644 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0210 12:25:02.651689    5644 command_runner.go:130] > Device: 0,22	Inode: 853         Links: 1
	I0210 12:25:02.651689    5644 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0210 12:25:02.651689    5644 command_runner.go:130] > Access: 2025-02-10 12:25:02.569124993 +0000
	I0210 12:25:02.651754    5644 command_runner.go:130] > Modify: 2025-02-10 12:25:02.569124993 +0000
	I0210 12:25:02.651754    5644 command_runner.go:130] > Change: 2025-02-10 12:25:02.573125009 +0000
	I0210 12:25:02.651754    5644 command_runner.go:130] >  Birth: -
	I0210 12:25:02.651903    5644 start.go:563] Will wait 60s for crictl version
	I0210 12:25:02.663192    5644 ssh_runner.go:195] Run: which crictl
	I0210 12:25:02.669653    5644 command_runner.go:130] > /usr/bin/crictl
	I0210 12:25:02.678491    5644 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0210 12:25:02.728521    5644 command_runner.go:130] > Version:  0.1.0
	I0210 12:25:02.728521    5644 command_runner.go:130] > RuntimeName:  docker
	I0210 12:25:02.728521    5644 command_runner.go:130] > RuntimeVersion:  27.4.0
	I0210 12:25:02.728653    5644 command_runner.go:130] > RuntimeApiVersion:  v1
	I0210 12:25:02.728653    5644 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.4.0
	RuntimeApiVersion:  v1
	I0210 12:25:02.735209    5644 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0210 12:25:02.768431    5644 command_runner.go:130] > 27.4.0
	I0210 12:25:02.778520    5644 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0210 12:25:02.809258    5644 command_runner.go:130] > 27.4.0
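Readiness is then gated on the cri-dockerd socket appearing (a stat with a 60s budget) and on version probes through both the CRI and the Docker engine API. The same checks can be reproduced by hand inside the VM:

    # CRI-side and engine-side version probes, as in the log above.
    sudo crictl version                              # RuntimeVersion: 27.4.0
    docker version --format '{{.Server.Version}}'    # 27.4.0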
	I0210 12:25:02.814420    5644 out.go:235] * Preparing Kubernetes v1.32.1 on Docker 27.4.0 ...
	I0210 12:25:02.816674    5644 out.go:177]   - env NO_PROXY=172.29.129.181
	I0210 12:25:02.818497    5644 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0210 12:25:02.822632    5644 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0210 12:25:02.822632    5644 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0210 12:25:02.822632    5644 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0210 12:25:02.822632    5644 ip.go:211] Found interface: {Index:6 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:8b:81:3d Flags:up|broadcast|multicast|running}
	I0210 12:25:02.825115    5644 ip.go:214] interface addr: fe80::b840:883f:e0df:bdfd/64
	I0210 12:25:02.825115    5644 ip.go:214] interface addr: 172.29.128.1/20
	I0210 12:25:02.835018    5644 ssh_runner.go:195] Run: grep 172.29.128.1	host.minikube.internal$ /etc/hosts
	I0210 12:25:02.841330    5644 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.29.128.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
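The /etc/hosts edit above is the idempotent pattern minikube uses throughout: strip any stale host.minikube.internal entry, append the current gateway address, and copy the result back over /etc/hosts in one pass. Spelled out (the 172.29.128.1 gateway is the Default Switch address found just above):

    # Same rewrite-then-copy pattern as the logged one-liner.
    { grep -v $'\thost.minikube.internal$' /etc/hosts
      printf '172.29.128.1\thost.minikube.internal\n'; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts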
	I0210 12:25:02.862035    5644 mustload.go:65] Loading cluster: multinode-032400
	I0210 12:25:02.862699    5644 config.go:182] Loaded profile config "multinode-032400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0210 12:25:02.862900    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400 ).state
	I0210 12:25:04.804561    5644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:25:04.804561    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:25:04.804561    5644 host.go:66] Checking if "multinode-032400" exists ...
	I0210 12:25:04.804561    5644 certs.go:68] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-032400 for IP: 172.29.131.248
	I0210 12:25:04.804561    5644 certs.go:194] generating shared ca certs ...
	I0210 12:25:04.804561    5644 certs.go:226] acquiring lock for ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 12:25:04.806473    5644 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0210 12:25:04.807010    5644 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0210 12:25:04.807260    5644 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0210 12:25:04.807509    5644 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0210 12:25:04.807719    5644 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0210 12:25:04.807827    5644 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0210 12:25:04.808418    5644 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\11764.pem (1338 bytes)
	W0210 12:25:04.808812    5644 certs.go:480] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\11764_empty.pem, impossibly tiny 0 bytes
	I0210 12:25:04.808920    5644 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0210 12:25:04.809297    5644 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0210 12:25:04.809661    5644 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0210 12:25:04.809942    5644 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0210 12:25:04.810493    5644 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\117642.pem (1708 bytes)
	I0210 12:25:04.810867    5644 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0210 12:25:04.811091    5644 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\11764.pem -> /usr/share/ca-certificates/11764.pem
	I0210 12:25:04.811257    5644 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\117642.pem -> /usr/share/ca-certificates/117642.pem
	I0210 12:25:04.811534    5644 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0210 12:25:04.861560    5644 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0210 12:25:04.910911    5644 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0210 12:25:04.959161    5644 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0210 12:25:05.004438    5644 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0210 12:25:05.048411    5644 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\11764.pem --> /usr/share/ca-certificates/11764.pem (1338 bytes)
	I0210 12:25:05.091405    5644 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\117642.pem --> /usr/share/ca-certificates/117642.pem (1708 bytes)
	I0210 12:25:05.144508    5644 ssh_runner.go:195] Run: openssl version
	I0210 12:25:05.152921    5644 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0210 12:25:05.161065    5644 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0210 12:25:05.188558    5644 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0210 12:25:05.195514    5644 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Feb 10 10:24 /usr/share/ca-certificates/minikubeCA.pem
	I0210 12:25:05.195514    5644 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb 10 10:24 /usr/share/ca-certificates/minikubeCA.pem
	I0210 12:25:05.204912    5644 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0210 12:25:05.212912    5644 command_runner.go:130] > b5213941
	I0210 12:25:05.220878    5644 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0210 12:25:05.248708    5644 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11764.pem && ln -fs /usr/share/ca-certificates/11764.pem /etc/ssl/certs/11764.pem"
	I0210 12:25:05.276086    5644 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11764.pem
	I0210 12:25:05.281881    5644 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Feb 10 10:40 /usr/share/ca-certificates/11764.pem
	I0210 12:25:05.281881    5644 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb 10 10:40 /usr/share/ca-certificates/11764.pem
	I0210 12:25:05.290375    5644 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11764.pem
	I0210 12:25:05.299808    5644 command_runner.go:130] > 51391683
	I0210 12:25:05.306556    5644 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11764.pem /etc/ssl/certs/51391683.0"
	I0210 12:25:05.334484    5644 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/117642.pem && ln -fs /usr/share/ca-certificates/117642.pem /etc/ssl/certs/117642.pem"
	I0210 12:25:05.360534    5644 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/117642.pem
	I0210 12:25:05.367955    5644 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Feb 10 10:40 /usr/share/ca-certificates/117642.pem
	I0210 12:25:05.367955    5644 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb 10 10:40 /usr/share/ca-certificates/117642.pem
	I0210 12:25:05.376081    5644 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/117642.pem
	I0210 12:25:05.384140    5644 command_runner.go:130] > 3ec20f2e
	I0210 12:25:05.392100    5644 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/117642.pem /etc/ssl/certs/3ec20f2e.0"
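The three openssl/ln pairs above install each CA into the hashed layout OpenSSL's verifier expects: a symlink named <subject-hash>.0, where the .0 suffix disambiguates certificates whose subjects hash to the same value. The reusable pattern, sketched for the minikubeCA case (the log links via the /usr/share copy; linking the pem directly is equivalent for lookup purposes):

    # Link a CA into /etc/ssl/certs under its OpenSSL subject hash.
    pem=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$pem")     # e.g. b5213941 above
    sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"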
	I0210 12:25:05.418949    5644 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0210 12:25:05.425292    5644 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0210 12:25:05.425483    5644 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0210 12:25:05.425483    5644 kubeadm.go:934] updating node {m02 172.29.131.248 8443 v1.32.1 docker false true} ...
	I0210 12:25:05.425483    5644 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-032400-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.29.131.248
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:multinode-032400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
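In the kubelet drop-in dumped above, the bare ExecStart= line is systemd's required idiom for overriding a command: an empty assignment first clears the packaged ExecStart so the node-specific one (with --node-ip and --hostname-override for m02) replaces it rather than appending a second command. Written out as the drop-in it becomes (a sketch reproducing the logged flags):

    sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf <<'EOF'
    [Unit]
    Wants=docker.socket

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-032400-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.29.131.248
    EOF
    sudo systemctl daemon-reload && sudo systemctl start kubelet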
	I0210 12:25:05.433682    5644 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0210 12:25:05.450443    5644 command_runner.go:130] > kubeadm
	I0210 12:25:05.450443    5644 command_runner.go:130] > kubectl
	I0210 12:25:05.450443    5644 command_runner.go:130] > kubelet
	I0210 12:25:05.450443    5644 binaries.go:44] Found k8s binaries, skipping transfer
	I0210 12:25:05.458809    5644 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0210 12:25:05.475912    5644 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0210 12:25:05.506609    5644 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0210 12:25:05.546168    5644 ssh_runner.go:195] Run: grep 172.29.129.181	control-plane.minikube.internal$ /etc/hosts
	I0210 12:25:05.551914    5644 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.29.129.181	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0210 12:25:05.584294    5644 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 12:25:05.782369    5644 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0210 12:25:05.808315    5644 host.go:66] Checking if "multinode-032400" exists ...
	I0210 12:25:05.809125    5644 start.go:317] joinCluster: &{Name:multinode-032400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:multinode-032400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.29.129.181 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.29.131.248 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.29.129.10 Port:0 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 12:25:05.809248    5644 start.go:330] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:172.29.131.248 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0210 12:25:05.809357    5644 host.go:66] Checking if "multinode-032400-m02" exists ...
	I0210 12:25:05.809819    5644 mustload.go:65] Loading cluster: multinode-032400
	I0210 12:25:05.810372    5644 config.go:182] Loaded profile config "multinode-032400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0210 12:25:05.810807    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400 ).state
	I0210 12:25:07.823836    5644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:25:07.824375    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:25:07.824375    5644 host.go:66] Checking if "multinode-032400" exists ...
	I0210 12:25:07.824860    5644 api_server.go:166] Checking apiserver status ...
	I0210 12:25:07.834305    5644 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 12:25:07.834397    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400 ).state
	I0210 12:25:09.807469    5644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:25:09.807469    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:25:09.807469    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400 ).networkadapters[0]).ipaddresses[0]
	I0210 12:25:12.158950    5644 main.go:141] libmachine: [stdout =====>] : 172.29.129.181
	
	I0210 12:25:12.159907    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:25:12.159907    5644 sshutil.go:53] new ssh client: &{IP:172.29.129.181 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-032400\id_rsa Username:docker}
	I0210 12:25:12.280626    5644 command_runner.go:130] > 2008
	I0210 12:25:12.280626    5644 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (4.4462345s)
	I0210 12:25:12.290347    5644 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2008/cgroup
	W0210 12:25:12.310104    5644 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2008/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0210 12:25:12.318612    5644 ssh_runner.go:195] Run: ls
	I0210 12:25:12.325212    5644 api_server.go:253] Checking apiserver healthz at https://172.29.129.181:8443/healthz ...
	I0210 12:25:12.332198    5644 api_server.go:279] https://172.29.129.181:8443/healthz returned 200:
	ok
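The apiserver check above is two independent probes: pgrep confirms the process exists on the control plane, and an HTTPS GET of /healthz confirms it answers. By hand (curl shown with -k because the minikube client normally verifies against the cluster CA, which plain curl does not have):

    # Process probe plus health probe, as in the log above.
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'     # pid 2008 above
    curl -fsk https://172.29.129.181:8443/healthz    # prints "ok"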
	I0210 12:25:12.339200    5644 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl drain multinode-032400-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data
	I0210 12:25:12.513450    5644 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-tv6gk, kube-system/kube-proxy-xltxj
	I0210 12:25:15.532801    5644 command_runner.go:130] > node/multinode-032400-m02 cordoned
	I0210 12:25:15.532801    5644 command_runner.go:130] > pod "busybox-58667487b6-4g8jw" has DeletionTimestamp older than 1 seconds, skipping
	I0210 12:25:15.532801    5644 command_runner.go:130] > node/multinode-032400-m02 drained
	I0210 12:25:15.532912    5644 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl drain multinode-032400-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data: (3.1936772s)
	I0210 12:25:15.532912    5644 node.go:128] successfully drained node "multinode-032400-m02"
	I0210 12:25:15.532912    5644 ssh_runner.go:195] Run: /bin/bash -c "KUBECONFIG=/var/lib/minikube/kubeconfig sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --force --ignore-preflight-errors=all --cri-socket=unix:///var/run/cri-dockerd.sock"
	I0210 12:25:15.533097    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400-m02 ).state
	I0210 12:25:17.481645    5644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:25:17.482575    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:25:17.482732    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 12:25:19.820235    5644 main.go:141] libmachine: [stdout =====>] : 172.29.131.248
	
	I0210 12:25:19.820235    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:25:19.820235    5644 sshutil.go:53] new ssh client: &{IP:172.29.131.248 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-032400-m02\id_rsa Username:docker}
	I0210 12:25:20.249336    5644 command_runner.go:130] ! W0210 12:25:20.264957    1673 removeetcdmember.go:106] [reset] No kubeadm config, using etcd pod spec to get data directory
	I0210 12:25:20.442489    5644 command_runner.go:130] ! W0210 12:25:20.458057    1673 cleanupnode.go:105] [reset] Failed to remove containers: failed to stop running pod f267a1e310221fa8fbfbcd980a9fc281a6f751038e4108cbe85aa524b948addc: rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod "busybox-58667487b6-4g8jw_default" network: cni config uninitialized
	I0210 12:25:20.460040    5644 command_runner.go:130] > [preflight] Running pre-flight checks
	I0210 12:25:20.460118    5644 command_runner.go:130] > [reset] Deleted contents of the etcd data directory: /var/lib/etcd
	I0210 12:25:20.460118    5644 command_runner.go:130] > [reset] Stopping the kubelet service
	I0210 12:25:20.460118    5644 command_runner.go:130] > [reset] Unmounting mounted directories in "/var/lib/kubelet"
	I0210 12:25:20.460118    5644 command_runner.go:130] > [reset] Deleting contents of directories: [/etc/kubernetes/manifests /var/lib/kubelet /etc/kubernetes/pki]
	I0210 12:25:20.460215    5644 command_runner.go:130] > [reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/super-admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
	I0210 12:25:20.460215    5644 command_runner.go:130] > The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
	I0210 12:25:20.460254    5644 command_runner.go:130] > The reset process does not reset or clean up iptables rules or IPVS tables.
	I0210 12:25:20.460254    5644 command_runner.go:130] > If you wish to reset iptables, you must do so manually by using the "iptables" command.
	I0210 12:25:20.460254    5644 command_runner.go:130] > If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
	I0210 12:25:20.460254    5644 command_runner.go:130] > to reset your system's IPVS tables.
	I0210 12:25:20.460254    5644 command_runner.go:130] > The reset process does not clean your kubeconfig files and you must remove them manually.
	I0210 12:25:20.460254    5644 command_runner.go:130] > Please, check the contents of the $HOME/.kube/config file.
	I0210 12:25:20.460254    5644 ssh_runner.go:235] Completed: /bin/bash -c "KUBECONFIG=/var/lib/minikube/kubeconfig sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm reset --force --ignore-preflight-errors=all --cri-socket=unix:///var/run/cri-dockerd.sock": (4.9272875s)
	I0210 12:25:20.460254    5644 node.go:155] successfully reset node "multinode-032400-m02"
	I0210 12:25:20.461538    5644 loader.go:402] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0210 12:25:20.461844    5644 kapi.go:59] client config for multinode-032400: &rest.Config{Host:"https://172.29.129.181:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-032400\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-032400\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2a2d300), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0210 12:25:20.463341    5644 cert_rotation.go:140] Starting client certificate rotation controller
	I0210 12:25:20.463809    5644 type.go:296] "Request Body" body=<
		00000000  6b 38 73 00 0a 13 0a 02  76 31 12 0d 44 65 6c 65  |k8s.....v1..Dele|
		00000010  74 65 4f 70 74 69 6f 6e  73 12 00 1a 00 22 00     |teOptions....".|
	 >
	I0210 12:25:20.463886    5644 round_trippers.go:470] DELETE https://172.29.129.181:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:25:20.463969    5644 round_trippers.go:476] Request Headers:
	I0210 12:25:20.463984    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:25:20.464008    5644 round_trippers.go:480]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:25:20.464008    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:25:20.481499    5644 round_trippers.go:581] Response Status: 200 OK in 17 milliseconds
	I0210 12:25:20.481499    5644 round_trippers.go:584] Response Headers:
	I0210 12:25:20.481499    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:25:20.481499    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:25:20.481499    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:25:20.481499    5644 round_trippers.go:587]     Content-Length: 120
	I0210 12:25:20.481499    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:25:20 GMT
	I0210 12:25:20.481499    5644 round_trippers.go:587]     Audit-Id: 66d6adfd-6ae5-4dd1-8efe-5fffcc792a37
	I0210 12:25:20.481499    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:25:20.481499    5644 type.go:296] "Response Body" body=<
		00000000  6b 38 73 00 0a 0c 0a 02  76 31 12 06 53 74 61 74  |k8s.....v1..Stat|
		00000010  75 73 12 60 0a 06 0a 00  12 00 1a 00 12 07 53 75  |us.`..........Su|
		00000020  63 63 65 73 73 1a 00 22  00 2a 47 0a 14 6d 75 6c  |ccess..".*G..mul|
		00000030  74 69 6e 6f 64 65 2d 30  33 32 34 30 30 2d 6d 30  |tinode-032400-m0|
		00000040  32 12 00 1a 05 6e 6f 64  65 73 28 00 32 24 62 30  |2....nodes(.2$b0|
		00000050  35 36 31 63 32 32 2d 64  62 66 32 2d 34 32 61 30  |561c22-dbf2-42a0|
		00000060  2d 62 64 66 33 2d 34 65  30 61 62 37 61 39 61 66  |-bdf3-4e0ab7a9af|
		00000070  30 65 30 00 1a 00 22 00                           |0e0...".|
	 >
	I0210 12:25:20.481499    5644 node.go:180] successfully deleted node "multinode-032400-m02"
	I0210 12:25:20.481499    5644 start.go:334] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:172.29.131.248 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:false Worker:true}
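Rejoining an existing worker is therefore a three-step teardown before the join: drain it from the control plane, kubeadm-reset it on the node itself, then delete its Node object (the protobuf DELETE above is the API equivalent of kubectl delete node). As shell, using the versioned binaries minikube stages under /var/lib/minikube:

    BIN=/var/lib/minikube/binaries/v1.32.1
    # 1. On the control plane: evict workloads from the worker.
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig "$BIN/kubectl" drain multinode-032400-m02 \
        --force --grace-period=1 --skip-wait-for-delete-timeout=1 \
        --disable-eviction --ignore-daemonsets --delete-emptydir-data
    # 2. On the worker: wipe its kubeadm state.
    sudo env PATH="$BIN:$PATH" kubeadm reset --force \
        --ignore-preflight-errors=all --cri-socket=unix:///var/run/cri-dockerd.sock
    # 3. On the control plane: remove the Node object (done via the raw API above).
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig "$BIN/kubectl" delete node multinode-032400-m02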
	I0210 12:25:20.481499    5644 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0210 12:25:20.481499    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400 ).state
	I0210 12:25:22.402208    5644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:25:22.402208    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:25:22.402305    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400 ).networkadapters[0]).ipaddresses[0]
	I0210 12:25:24.737098    5644 main.go:141] libmachine: [stdout =====>] : 172.29.129.181
	
	I0210 12:25:24.737098    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:25:24.737322    5644 sshutil.go:53] new ssh client: &{IP:172.29.129.181 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-032400\id_rsa Username:docker}
	I0210 12:25:25.136064    5644 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 9oxbx9.e90xsnn2uus4mtns --discovery-token-ca-cert-hash sha256:1b8cb99781302ec691f777094c36ac43bdecc74e7ca118c5fbf40794f47c93c3 
	I0210 12:25:25.137199    5644 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm token create --print-join-command --ttl=0": (4.655607s)
	I0210 12:25:25.137281    5644 start.go:343] trying to join worker node "m02" to cluster: &{Name:m02 IP:172.29.131.248 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0210 12:25:25.137338    5644 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 9oxbx9.e90xsnn2uus4mtns --discovery-token-ca-cert-hash sha256:1b8cb99781302ec691f777094c36ac43bdecc74e7ca118c5fbf40794f47c93c3 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-032400-m02"
	I0210 12:25:25.319012    5644 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0210 12:25:27.176195    5644 command_runner.go:130] > [preflight] Running pre-flight checks
	I0210 12:25:27.176288    5644 command_runner.go:130] > [preflight] Reading configuration from the "kubeadm-config" ConfigMap in namespace "kube-system"...
	I0210 12:25:27.176288    5644 command_runner.go:130] > [preflight] Use 'kubeadm init phase upload-config --config your-config.yaml' to re-upload it.
	I0210 12:25:27.176288    5644 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0210 12:25:27.176288    5644 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0210 12:25:27.176288    5644 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0210 12:25:27.176362    5644 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0210 12:25:27.176362    5644 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 1.001636368s
	I0210 12:25:27.176362    5644 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap
	I0210 12:25:27.176362    5644 command_runner.go:130] > This node has joined the cluster:
	I0210 12:25:27.176362    5644 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0210 12:25:27.176362    5644 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0210 12:25:27.176362    5644 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0210 12:25:27.176435    5644 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 9oxbx9.e90xsnn2uus4mtns --discovery-token-ca-cert-hash sha256:1b8cb99781302ec691f777094c36ac43bdecc74e7ca118c5fbf40794f47c93c3 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-032400-m02": (2.0390169s)
	I0210 12:25:27.176435    5644 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0210 12:25:27.405789    5644 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0210 12:25:27.606422    5644 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-032400-m02 minikube.k8s.io/updated_at=2025_02_10T12_25_27_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=a597502568cd649748018b4cfeb698a4b8b36160 minikube.k8s.io/name=multinode-032400 minikube.k8s.io/primary=false
	I0210 12:25:27.731391    5644 command_runner.go:130] > node/multinode-032400-m02 labeled
	I0210 12:25:27.731471    5644 start.go:319] duration metric: took 21.9221029s to joinCluster
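The join itself follows the standard kubeadm two-step: mint a join command on the control plane, run it on the worker, then enable the kubelet so it survives reboots. Condensed from the commands logged above (the token is whatever the control plane prints at that moment):

    BIN=/var/lib/minikube/binaries/v1.32.1
    # On the control plane: print a non-expiring join command.
    JOIN=$(sudo env PATH="$BIN:$PATH" kubeadm token create --print-join-command --ttl=0)
    # On the worker: join with the cri-dockerd socket and an explicit node name.
    sudo env PATH="$BIN:$PATH" $JOIN --ignore-preflight-errors=all \
        --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-032400-m02
    sudo systemctl daemon-reload && sudo systemctl enable --now kubelet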
	I0210 12:25:27.731679    5644 start.go:235] Will wait 6m0s for node &{Name:m02 IP:172.29.131.248 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0210 12:25:27.731834    5644 config.go:182] Loaded profile config "multinode-032400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0210 12:25:27.735110    5644 out.go:177] * Verifying Kubernetes components...
	I0210 12:25:27.745144    5644 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0210 12:25:27.940918    5644 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0210 12:25:27.966711    5644 loader.go:402] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0210 12:25:27.967147    5644 kapi.go:59] client config for multinode-032400: &rest.Config{Host:"https://172.29.129.181:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-032400\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-032400\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2a2d300), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0210 12:25:27.967869    5644 node_ready.go:35] waiting up to 6m0s for node "multinode-032400-m02" to be "Ready" ...
	I0210 12:25:27.967999    5644 type.go:168] "Request Body" body=""
	I0210 12:25:27.968087    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:25:27.968087    5644 round_trippers.go:476] Request Headers:
	I0210 12:25:27.968087    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:25:27.968139    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:25:27.972126    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:25:27.972126    5644 round_trippers.go:584] Response Headers:
	I0210 12:25:27.972126    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:25:27.972126    5644 round_trippers.go:587]     Content-Length: 3271
	I0210 12:25:27.972126    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:25:27 GMT
	I0210 12:25:27.972126    5644 round_trippers.go:587]     Audit-Id: ec8f6606-8dc0-4cc9-bb6f-d3d7d465f067
	I0210 12:25:27.972126    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:25:27.972126    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:25:27.972215    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:25:27.972300    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 b0 19 0a 9d 0c 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 31 62 61 64  31 61 35 35 2d 65 61 64  |".*$1bad1a55-ead|
		00000040  30 2d 34 66 35 62 2d 61  36 33 62 2d 66 66 66 30  |0-4f5b-a63b-fff0|
		00000050  61 32 35 34 66 33 32 62  32 04 32 31 32 35 38 00  |a254f32b2.21258.|
		00000060  42 08 08 b6 e0 a7 bd 06  10 00 5a 20 0a 17 62 65  |B.........Z ..be|
		00000070  74 61 2e 6b 75 62 65 72  6e 65 74 65 73 2e 69 6f  |ta.kubernetes.io|
		00000080  2f 61 72 63 68 12 05 61  6d 64 36 34 5a 1e 0a 15  |/arch..amd64Z...|
		00000090  62 65 74 61 2e 6b 75 62  65 72 6e 65 74 65 73 2e  |beta.kubernetes.|
		000000a0  69 6f 2f 6f 73 12 05 6c  69 6e 75 78 5a 1b 0a 12  |io/os..linuxZ...|
		000000b0  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 61 72  |kubernetes.io/ar|
		000000c0  63 68 12 05 61 6d 64 36  34 5a 2e 0a 16 6b 75 62  |ch..amd64Z...ku [truncated 15162 chars]
	 >
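The GET loop that follows polls the Node object every 500ms (against the 6m budget declared above) and decodes the protobuf body to read its Ready condition; the status stays "False" until the CNI on m02 comes up. The same wait, without the raw round-trips:

    # Equivalent readiness wait using the staged kubectl instead of protobuf GETs.
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
        /var/lib/minikube/binaries/v1.32.1/kubectl wait node/multinode-032400-m02 \
        --for=condition=Ready --timeout=6m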
	I0210 12:25:28.468102    5644 type.go:168] "Request Body" body=""
	I0210 12:25:28.468102    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:25:28.468102    5644 round_trippers.go:476] Request Headers:
	I0210 12:25:28.468102    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:25:28.468102    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:25:28.472832    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:25:28.472977    5644 round_trippers.go:584] Response Headers:
	I0210 12:25:28.472977    5644 round_trippers.go:587]     Content-Length: 3271
	I0210 12:25:28.472977    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:25:28 GMT
	I0210 12:25:28.472977    5644 round_trippers.go:587]     Audit-Id: e52cb391-a49f-459b-8c42-c0be76a90d4c
	I0210 12:25:28.472977    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:25:28.472977    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:25:28.472977    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:25:28.472977    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:25:28.473258    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 b0 19 0a 9d 0c 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 31 62 61 64  31 61 35 35 2d 65 61 64  |".*$1bad1a55-ead|
		00000040  30 2d 34 66 35 62 2d 61  36 33 62 2d 66 66 66 30  |0-4f5b-a63b-fff0|
		00000050  61 32 35 34 66 33 32 62  32 04 32 31 32 35 38 00  |a254f32b2.21258.|
		00000060  42 08 08 b6 e0 a7 bd 06  10 00 5a 20 0a 17 62 65  |B.........Z ..be|
		00000070  74 61 2e 6b 75 62 65 72  6e 65 74 65 73 2e 69 6f  |ta.kubernetes.io|
		00000080  2f 61 72 63 68 12 05 61  6d 64 36 34 5a 1e 0a 15  |/arch..amd64Z...|
		00000090  62 65 74 61 2e 6b 75 62  65 72 6e 65 74 65 73 2e  |beta.kubernetes.|
		000000a0  69 6f 2f 6f 73 12 05 6c  69 6e 75 78 5a 1b 0a 12  |io/os..linuxZ...|
		000000b0  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 61 72  |kubernetes.io/ar|
		000000c0  63 68 12 05 61 6d 64 36  34 5a 2e 0a 16 6b 75 62  |ch..amd64Z...ku [truncated 15162 chars]
	 >
	I0210 12:25:28.968500    5644 type.go:168] "Request Body" body=""
	I0210 12:25:28.968994    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:25:28.968994    5644 round_trippers.go:476] Request Headers:
	I0210 12:25:28.968994    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:25:28.968994    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:25:28.985765    5644 round_trippers.go:581] Response Status: 200 OK in 16 milliseconds
	I0210 12:25:28.985765    5644 round_trippers.go:584] Response Headers:
	I0210 12:25:28.985765    5644 round_trippers.go:587]     Audit-Id: ca971a1a-fc40-47c5-a0ce-1edffed8631a
	I0210 12:25:28.985765    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:25:28.985765    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:25:28.985765    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:25:28.985765    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:25:28.985765    5644 round_trippers.go:587]     Content-Length: 3271
	I0210 12:25:28.985765    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:25:29 GMT
	I0210 12:25:28.985765    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 b0 19 0a 9d 0c 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 31 62 61 64  31 61 35 35 2d 65 61 64  |".*$1bad1a55-ead|
		00000040  30 2d 34 66 35 62 2d 61  36 33 62 2d 66 66 66 30  |0-4f5b-a63b-fff0|
		00000050  61 32 35 34 66 33 32 62  32 04 32 31 32 35 38 00  |a254f32b2.21258.|
		00000060  42 08 08 b6 e0 a7 bd 06  10 00 5a 20 0a 17 62 65  |B.........Z ..be|
		00000070  74 61 2e 6b 75 62 65 72  6e 65 74 65 73 2e 69 6f  |ta.kubernetes.io|
		00000080  2f 61 72 63 68 12 05 61  6d 64 36 34 5a 1e 0a 15  |/arch..amd64Z...|
		00000090  62 65 74 61 2e 6b 75 62  65 72 6e 65 74 65 73 2e  |beta.kubernetes.|
		000000a0  69 6f 2f 6f 73 12 05 6c  69 6e 75 78 5a 1b 0a 12  |io/os..linuxZ...|
		000000b0  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 61 72  |kubernetes.io/ar|
		000000c0  63 68 12 05 61 6d 64 36  34 5a 2e 0a 16 6b 75 62  |ch..amd64Z...ku [truncated 15162 chars]
	 >
	I0210 12:25:29.468760    5644 type.go:168] "Request Body" body=""
	I0210 12:25:29.469059    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:25:29.469059    5644 round_trippers.go:476] Request Headers:
	I0210 12:25:29.469059    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:25:29.469121    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:25:29.475962    5644 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0210 12:25:29.475962    5644 round_trippers.go:584] Response Headers:
	I0210 12:25:29.475962    5644 round_trippers.go:587]     Content-Length: 3271
	I0210 12:25:29.475962    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:25:29 GMT
	I0210 12:25:29.475962    5644 round_trippers.go:587]     Audit-Id: bfa2cc92-71f4-4a96-a592-67e401efbe79
	I0210 12:25:29.475962    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:25:29.475962    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:25:29.475962    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:25:29.475962    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:25:29.475962    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 b0 19 0a 9d 0c 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 31 62 61 64  31 61 35 35 2d 65 61 64  |".*$1bad1a55-ead|
		00000040  30 2d 34 66 35 62 2d 61  36 33 62 2d 66 66 66 30  |0-4f5b-a63b-fff0|
		00000050  61 32 35 34 66 33 32 62  32 04 32 31 32 35 38 00  |a254f32b2.21258.|
		00000060  42 08 08 b6 e0 a7 bd 06  10 00 5a 20 0a 17 62 65  |B.........Z ..be|
		00000070  74 61 2e 6b 75 62 65 72  6e 65 74 65 73 2e 69 6f  |ta.kubernetes.io|
		00000080  2f 61 72 63 68 12 05 61  6d 64 36 34 5a 1e 0a 15  |/arch..amd64Z...|
		00000090  62 65 74 61 2e 6b 75 62  65 72 6e 65 74 65 73 2e  |beta.kubernetes.|
		000000a0  69 6f 2f 6f 73 12 05 6c  69 6e 75 78 5a 1b 0a 12  |io/os..linuxZ...|
		000000b0  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 61 72  |kubernetes.io/ar|
		000000c0  63 68 12 05 61 6d 64 36  34 5a 2e 0a 16 6b 75 62  |ch..amd64Z...ku [truncated 15162 chars]
	 >
	I0210 12:25:29.969144    5644 type.go:168] "Request Body" body=""
	I0210 12:25:29.969144    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:25:29.969144    5644 round_trippers.go:476] Request Headers:
	I0210 12:25:29.969144    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:25:29.969144    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:25:29.973489    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:25:29.973489    5644 round_trippers.go:584] Response Headers:
	I0210 12:25:29.973489    5644 round_trippers.go:587]     Audit-Id: f5059816-4599-4ab2-93f8-779836c763dc
	I0210 12:25:29.973489    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:25:29.973489    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:25:29.973489    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:25:29.973489    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:25:29.973489    5644 round_trippers.go:587]     Content-Length: 3271
	I0210 12:25:29.973489    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:25:29 GMT
	I0210 12:25:29.973489    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 b0 19 0a 9d 0c 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 31 62 61 64  31 61 35 35 2d 65 61 64  |".*$1bad1a55-ead|
		00000040  30 2d 34 66 35 62 2d 61  36 33 62 2d 66 66 66 30  |0-4f5b-a63b-fff0|
		00000050  61 32 35 34 66 33 32 62  32 04 32 31 32 35 38 00  |a254f32b2.21258.|
		00000060  42 08 08 b6 e0 a7 bd 06  10 00 5a 20 0a 17 62 65  |B.........Z ..be|
		00000070  74 61 2e 6b 75 62 65 72  6e 65 74 65 73 2e 69 6f  |ta.kubernetes.io|
		00000080  2f 61 72 63 68 12 05 61  6d 64 36 34 5a 1e 0a 15  |/arch..amd64Z...|
		00000090  62 65 74 61 2e 6b 75 62  65 72 6e 65 74 65 73 2e  |beta.kubernetes.|
		000000a0  69 6f 2f 6f 73 12 05 6c  69 6e 75 78 5a 1b 0a 12  |io/os..linuxZ...|
		000000b0  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 61 72  |kubernetes.io/ar|
		000000c0  63 68 12 05 61 6d 64 36  34 5a 2e 0a 16 6b 75 62  |ch..amd64Z...ku [truncated 15162 chars]
	 >
	I0210 12:25:29.973489    5644 node_ready.go:53] node "multinode-032400-m02" has status "Ready":"False"
	I0210 12:25:30.468684    5644 type.go:168] "Request Body" body=""
	I0210 12:25:30.468684    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:25:30.468684    5644 round_trippers.go:476] Request Headers:
	I0210 12:25:30.468684    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:25:30.468684    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:25:30.472638    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:25:30.472754    5644 round_trippers.go:584] Response Headers:
	I0210 12:25:30.472754    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:25:30 GMT
	I0210 12:25:30.472754    5644 round_trippers.go:587]     Audit-Id: 385ffe75-f36e-4aad-9ec3-ec5568bf9e6a
	I0210 12:25:30.472754    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:25:30.472754    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:25:30.472754    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:25:30.472754    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:25:30.472852    5644 round_trippers.go:587]     Content-Length: 3271
	I0210 12:25:30.473247    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 b0 19 0a 9d 0c 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 31 62 61 64  31 61 35 35 2d 65 61 64  |".*$1bad1a55-ead|
		00000040  30 2d 34 66 35 62 2d 61  36 33 62 2d 66 66 66 30  |0-4f5b-a63b-fff0|
		00000050  61 32 35 34 66 33 32 62  32 04 32 31 32 35 38 00  |a254f32b2.21258.|
		00000060  42 08 08 b6 e0 a7 bd 06  10 00 5a 20 0a 17 62 65  |B.........Z ..be|
		00000070  74 61 2e 6b 75 62 65 72  6e 65 74 65 73 2e 69 6f  |ta.kubernetes.io|
		00000080  2f 61 72 63 68 12 05 61  6d 64 36 34 5a 1e 0a 15  |/arch..amd64Z...|
		00000090  62 65 74 61 2e 6b 75 62  65 72 6e 65 74 65 73 2e  |beta.kubernetes.|
		000000a0  69 6f 2f 6f 73 12 05 6c  69 6e 75 78 5a 1b 0a 12  |io/os..linuxZ...|
		000000b0  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 61 72  |kubernetes.io/ar|
		000000c0  63 68 12 05 61 6d 64 36  34 5a 2e 0a 16 6b 75 62  |ch..amd64Z...ku [truncated 15162 chars]
	 >
	I0210 12:25:30.968272    5644 type.go:168] "Request Body" body=""
	I0210 12:25:30.968272    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:25:30.968272    5644 round_trippers.go:476] Request Headers:
	I0210 12:25:30.968272    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:25:30.968272    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:25:30.972445    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:25:30.972738    5644 round_trippers.go:584] Response Headers:
	I0210 12:25:30.972738    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:25:30.972738    5644 round_trippers.go:587]     Content-Length: 3271
	I0210 12:25:30.972738    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:25:30 GMT
	I0210 12:25:30.972738    5644 round_trippers.go:587]     Audit-Id: 5598775f-26e3-4733-876e-f6e15bb479de
	I0210 12:25:30.972738    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:25:30.972788    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:25:30.972788    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:25:30.973003    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 b0 19 0a 9d 0c 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 31 62 61 64  31 61 35 35 2d 65 61 64  |".*$1bad1a55-ead|
		00000040  30 2d 34 66 35 62 2d 61  36 33 62 2d 66 66 66 30  |0-4f5b-a63b-fff0|
		00000050  61 32 35 34 66 33 32 62  32 04 32 31 32 35 38 00  |a254f32b2.21258.|
		00000060  42 08 08 b6 e0 a7 bd 06  10 00 5a 20 0a 17 62 65  |B.........Z ..be|
		00000070  74 61 2e 6b 75 62 65 72  6e 65 74 65 73 2e 69 6f  |ta.kubernetes.io|
		00000080  2f 61 72 63 68 12 05 61  6d 64 36 34 5a 1e 0a 15  |/arch..amd64Z...|
		00000090  62 65 74 61 2e 6b 75 62  65 72 6e 65 74 65 73 2e  |beta.kubernetes.|
		000000a0  69 6f 2f 6f 73 12 05 6c  69 6e 75 78 5a 1b 0a 12  |io/os..linuxZ...|
		000000b0  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 61 72  |kubernetes.io/ar|
		000000c0  63 68 12 05 61 6d 64 36  34 5a 2e 0a 16 6b 75 62  |ch..amd64Z...ku [truncated 15162 chars]
	 >
	I0210 12:25:31.468362    5644 type.go:168] "Request Body" body=""
	I0210 12:25:31.468362    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:25:31.468362    5644 round_trippers.go:476] Request Headers:
	I0210 12:25:31.468362    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:25:31.468362    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:25:31.472812    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:25:31.472906    5644 round_trippers.go:584] Response Headers:
	I0210 12:25:31.472906    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:25:31.472906    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:25:31.472906    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:25:31.472906    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:25:31.472906    5644 round_trippers.go:587]     Content-Length: 3271
	I0210 12:25:31.472906    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:25:31 GMT
	I0210 12:25:31.472906    5644 round_trippers.go:587]     Audit-Id: 3feb5aa6-1208-49f6-af89-8ad0cfe8d7ee
	I0210 12:25:31.473159    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 b0 19 0a 9d 0c 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 31 62 61 64  31 61 35 35 2d 65 61 64  |".*$1bad1a55-ead|
		00000040  30 2d 34 66 35 62 2d 61  36 33 62 2d 66 66 66 30  |0-4f5b-a63b-fff0|
		00000050  61 32 35 34 66 33 32 62  32 04 32 31 32 35 38 00  |a254f32b2.21258.|
		00000060  42 08 08 b6 e0 a7 bd 06  10 00 5a 20 0a 17 62 65  |B.........Z ..be|
		00000070  74 61 2e 6b 75 62 65 72  6e 65 74 65 73 2e 69 6f  |ta.kubernetes.io|
		00000080  2f 61 72 63 68 12 05 61  6d 64 36 34 5a 1e 0a 15  |/arch..amd64Z...|
		00000090  62 65 74 61 2e 6b 75 62  65 72 6e 65 74 65 73 2e  |beta.kubernetes.|
		000000a0  69 6f 2f 6f 73 12 05 6c  69 6e 75 78 5a 1b 0a 12  |io/os..linuxZ...|
		000000b0  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 61 72  |kubernetes.io/ar|
		000000c0  63 68 12 05 61 6d 64 36  34 5a 2e 0a 16 6b 75 62  |ch..amd64Z...ku [truncated 15162 chars]
	 >
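The round_trippers lines come from client-go's debug transport, which prints progressively more detail as log verbosity rises (roughly: URL and status, then headers, then bodies at -v=8 and above). An illustrative stand-in for that wrapper, not the real type from client-go's transport package; needs "log", "net/http", "strings", and "time":

    // loggingRT mimics the debug round tripper emitting these lines:
    // method and URL before the call, headers, then status and latency.
    type loggingRT struct{ next http.RoundTripper }

    func (l loggingRT) RoundTrip(req *http.Request) (*http.Response, error) {
    	start := time.Now()
    	log.Printf("%s %s", req.Method, req.URL)
    	for k, v := range req.Header {
    		log.Printf("    %s: %s", k, strings.Join(v, ","))
    	}
    	resp, err := l.next.RoundTrip(req)
    	if err == nil {
    		log.Printf("Response Status: %s in %d milliseconds",
    			resp.Status, time.Since(start).Milliseconds())
    	}
    	return resp, err
    }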
	I0210 12:25:31.968731    5644 type.go:168] "Request Body" body=""
	I0210 12:25:31.968731    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:25:31.968731    5644 round_trippers.go:476] Request Headers:
	I0210 12:25:31.968731    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:25:31.968731    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:25:31.976180    5644 round_trippers.go:581] Response Status: 200 OK in 7 milliseconds
	I0210 12:25:31.976180    5644 round_trippers.go:584] Response Headers:
	I0210 12:25:31.976180    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:25:31.976180    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:25:31.976180    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:25:31.976707    5644 round_trippers.go:587]     Content-Length: 3341
	I0210 12:25:31.976707    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:25:31 GMT
	I0210 12:25:31.976707    5644 round_trippers.go:587]     Audit-Id: ff255298-da3f-474b-8ceb-a26688f92f1a
	I0210 12:25:31.976707    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:25:31.976793    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 f6 19 0a ab 0c 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 31 62 61 64  31 61 35 35 2d 65 61 64  |".*$1bad1a55-ead|
		00000040  30 2d 34 66 35 62 2d 61  36 33 62 2d 66 66 66 30  |0-4f5b-a63b-fff0|
		00000050  61 32 35 34 66 33 32 62  32 04 32 31 34 39 38 00  |a254f32b2.21498.|
		00000060  42 08 08 b6 e0 a7 bd 06  10 00 5a 20 0a 17 62 65  |B.........Z ..be|
		00000070  74 61 2e 6b 75 62 65 72  6e 65 74 65 73 2e 69 6f  |ta.kubernetes.io|
		00000080  2f 61 72 63 68 12 05 61  6d 64 36 34 5a 1e 0a 15  |/arch..amd64Z...|
		00000090  62 65 74 61 2e 6b 75 62  65 72 6e 65 74 65 73 2e  |beta.kubernetes.|
		000000a0  69 6f 2f 6f 73 12 05 6c  69 6e 75 78 5a 1b 0a 12  |io/os..linuxZ...|
		000000b0  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 61 72  |kubernetes.io/ar|
		000000c0  63 68 12 05 61 6d 64 36  34 5a 2e 0a 16 6b 75 62  |ch..amd64Z...ku [truncated 15484 chars]
	 >
	I0210 12:25:31.976910    5644 node_ready.go:53] node "multinode-032400-m02" has status "Ready":"False"
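node_ready.go is re-fetching the m02 node roughly every 500ms and failing the Ready check each time, which is the cadence visible in the timestamps above. A minimal sketch of an equivalent wait loop with client-go (waitNodeReady and the 4-minute timeout are illustrative, not minikube's actual helper; requires context, time, v1 "k8s.io/api/core/v1", metav1 "k8s.io/apimachinery/pkg/apis/meta/v1", "k8s.io/apimachinery/pkg/util/wait", and "k8s.io/client-go/kubernetes"):

    // waitNodeReady polls the node every 500ms until its Ready condition
    // is True or the timeout elapses, mirroring the requests in this log.
    func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
    	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 4*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
    			if err != nil {
    				return false, nil // transient API error: keep polling
    			}
    			for _, c := range node.Status.Conditions {
    				if c.Type == v1.NodeReady {
    					return c.Status == v1.ConditionTrue, nil
    				}
    			}
    			return false, nil
    		})
    }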
	I0210 12:25:32.468260    5644 type.go:168] "Request Body" body=""
	I0210 12:25:32.468260    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:25:32.468260    5644 round_trippers.go:476] Request Headers:
	I0210 12:25:32.468260    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:25:32.468260    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:25:32.473134    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:25:32.473237    5644 round_trippers.go:584] Response Headers:
	I0210 12:25:32.473237    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:25:32 GMT
	I0210 12:25:32.473237    5644 round_trippers.go:587]     Audit-Id: 444e57c4-3937-426d-8509-866e57f75bc1
	I0210 12:25:32.473237    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:25:32.473237    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:25:32.473237    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:25:32.473237    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:25:32.473237    5644 round_trippers.go:587]     Content-Length: 3341
	I0210 12:25:32.473503    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 f6 19 0a ab 0c 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 31 62 61 64  31 61 35 35 2d 65 61 64  |".*$1bad1a55-ead|
		00000040  30 2d 34 66 35 62 2d 61  36 33 62 2d 66 66 66 30  |0-4f5b-a63b-fff0|
		00000050  61 32 35 34 66 33 32 62  32 04 32 31 34 39 38 00  |a254f32b2.21498.|
		00000060  42 08 08 b6 e0 a7 bd 06  10 00 5a 20 0a 17 62 65  |B.........Z ..be|
		00000070  74 61 2e 6b 75 62 65 72  6e 65 74 65 73 2e 69 6f  |ta.kubernetes.io|
		00000080  2f 61 72 63 68 12 05 61  6d 64 36 34 5a 1e 0a 15  |/arch..amd64Z...|
		00000090  62 65 74 61 2e 6b 75 62  65 72 6e 65 74 65 73 2e  |beta.kubernetes.|
		000000a0  69 6f 2f 6f 73 12 05 6c  69 6e 75 78 5a 1b 0a 12  |io/os..linuxZ...|
		000000b0  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 61 72  |kubernetes.io/ar|
		000000c0  63 68 12 05 61 6d 64 36  34 5a 2e 0a 16 6b 75 62  |ch..amd64Z...ku [truncated 15484 chars]
	 >
	I0210 12:25:32.968917    5644 type.go:168] "Request Body" body=""
	I0210 12:25:32.968917    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:25:32.968917    5644 round_trippers.go:476] Request Headers:
	I0210 12:25:32.968917    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:25:32.968917    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:25:32.975300    5644 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0210 12:25:32.975300    5644 round_trippers.go:584] Response Headers:
	I0210 12:25:32.975300    5644 round_trippers.go:587]     Content-Length: 3341
	I0210 12:25:32.975300    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:25:32 GMT
	I0210 12:25:32.975300    5644 round_trippers.go:587]     Audit-Id: 4d3d5a16-c3fe-4bd3-9fff-775b24aa34af
	I0210 12:25:32.975300    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:25:32.975300    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:25:32.975300    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:25:32.975300    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:25:32.975300    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 f6 19 0a ab 0c 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 31 62 61 64  31 61 35 35 2d 65 61 64  |".*$1bad1a55-ead|
		00000040  30 2d 34 66 35 62 2d 61  36 33 62 2d 66 66 66 30  |0-4f5b-a63b-fff0|
		00000050  61 32 35 34 66 33 32 62  32 04 32 31 34 39 38 00  |a254f32b2.21498.|
		00000060  42 08 08 b6 e0 a7 bd 06  10 00 5a 20 0a 17 62 65  |B.........Z ..be|
		00000070  74 61 2e 6b 75 62 65 72  6e 65 74 65 73 2e 69 6f  |ta.kubernetes.io|
		00000080  2f 61 72 63 68 12 05 61  6d 64 36 34 5a 1e 0a 15  |/arch..amd64Z...|
		00000090  62 65 74 61 2e 6b 75 62  65 72 6e 65 74 65 73 2e  |beta.kubernetes.|
		000000a0  69 6f 2f 6f 73 12 05 6c  69 6e 75 78 5a 1b 0a 12  |io/os..linuxZ...|
		000000b0  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 61 72  |kubernetes.io/ar|
		000000c0  63 68 12 05 61 6d 64 36  34 5a 2e 0a 16 6b 75 62  |ch..amd64Z...ku [truncated 15484 chars]
	 >
	I0210 12:25:33.469901    5644 type.go:168] "Request Body" body=""
	I0210 12:25:33.469901    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:25:33.469901    5644 round_trippers.go:476] Request Headers:
	I0210 12:25:33.469901    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:25:33.469901    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:25:33.473734    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:25:33.473734    5644 round_trippers.go:584] Response Headers:
	I0210 12:25:33.473734    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:25:33 GMT
	I0210 12:25:33.473734    5644 round_trippers.go:587]     Audit-Id: 99326657-b3e1-4317-bb43-38d0f74eef4a
	I0210 12:25:33.473734    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:25:33.473734    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:25:33.473734    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:25:33.473734    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:25:33.473734    5644 round_trippers.go:587]     Content-Length: 3341
	I0210 12:25:33.473734    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 f6 19 0a ab 0c 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 31 62 61 64  31 61 35 35 2d 65 61 64  |".*$1bad1a55-ead|
		00000040  30 2d 34 66 35 62 2d 61  36 33 62 2d 66 66 66 30  |0-4f5b-a63b-fff0|
		00000050  61 32 35 34 66 33 32 62  32 04 32 31 34 39 38 00  |a254f32b2.21498.|
		00000060  42 08 08 b6 e0 a7 bd 06  10 00 5a 20 0a 17 62 65  |B.........Z ..be|
		00000070  74 61 2e 6b 75 62 65 72  6e 65 74 65 73 2e 69 6f  |ta.kubernetes.io|
		00000080  2f 61 72 63 68 12 05 61  6d 64 36 34 5a 1e 0a 15  |/arch..amd64Z...|
		00000090  62 65 74 61 2e 6b 75 62  65 72 6e 65 74 65 73 2e  |beta.kubernetes.|
		000000a0  69 6f 2f 6f 73 12 05 6c  69 6e 75 78 5a 1b 0a 12  |io/os..linuxZ...|
		000000b0  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 61 72  |kubernetes.io/ar|
		000000c0  63 68 12 05 61 6d 64 36  34 5a 2e 0a 16 6b 75 62  |ch..amd64Z...ku [truncated 15484 chars]
	 >
	I0210 12:25:33.968429    5644 type.go:168] "Request Body" body=""
	I0210 12:25:33.968429    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:25:33.968429    5644 round_trippers.go:476] Request Headers:
	I0210 12:25:33.968429    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:25:33.968429    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:25:33.972810    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:25:33.972810    5644 round_trippers.go:584] Response Headers:
	I0210 12:25:33.972810    5644 round_trippers.go:587]     Audit-Id: 5968d8f6-fee4-48de-bcfb-5a3477685d7e
	I0210 12:25:33.972810    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:25:33.972810    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:25:33.972810    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:25:33.972810    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:25:33.972810    5644 round_trippers.go:587]     Content-Length: 3341
	I0210 12:25:33.972810    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:25:33 GMT
	I0210 12:25:33.972810    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 f6 19 0a ab 0c 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 31 62 61 64  31 61 35 35 2d 65 61 64  |".*$1bad1a55-ead|
		00000040  30 2d 34 66 35 62 2d 61  36 33 62 2d 66 66 66 30  |0-4f5b-a63b-fff0|
		00000050  61 32 35 34 66 33 32 62  32 04 32 31 34 39 38 00  |a254f32b2.21498.|
		00000060  42 08 08 b6 e0 a7 bd 06  10 00 5a 20 0a 17 62 65  |B.........Z ..be|
		00000070  74 61 2e 6b 75 62 65 72  6e 65 74 65 73 2e 69 6f  |ta.kubernetes.io|
		00000080  2f 61 72 63 68 12 05 61  6d 64 36 34 5a 1e 0a 15  |/arch..amd64Z...|
		00000090  62 65 74 61 2e 6b 75 62  65 72 6e 65 74 65 73 2e  |beta.kubernetes.|
		000000a0  69 6f 2f 6f 73 12 05 6c  69 6e 75 78 5a 1b 0a 12  |io/os..linuxZ...|
		000000b0  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 61 72  |kubernetes.io/ar|
		000000c0  63 68 12 05 61 6d 64 36  34 5a 2e 0a 16 6b 75 62  |ch..amd64Z...ku [truncated 15484 chars]
	 >
	I0210 12:25:34.468157    5644 type.go:168] "Request Body" body=""
	I0210 12:25:34.468157    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:25:34.468157    5644 round_trippers.go:476] Request Headers:
	I0210 12:25:34.468157    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:25:34.468157    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:25:34.471700    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:25:34.471700    5644 round_trippers.go:584] Response Headers:
	I0210 12:25:34.472642    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:25:34 GMT
	I0210 12:25:34.472642    5644 round_trippers.go:587]     Audit-Id: e0ea628a-a218-4ea5-a8ac-b3c767955de4
	I0210 12:25:34.472642    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:25:34.472642    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:25:34.472642    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:25:34.472642    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:25:34.472642    5644 round_trippers.go:587]     Content-Length: 3341
	I0210 12:25:34.472900    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 f6 19 0a ab 0c 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 31 62 61 64  31 61 35 35 2d 65 61 64  |".*$1bad1a55-ead|
		00000040  30 2d 34 66 35 62 2d 61  36 33 62 2d 66 66 66 30  |0-4f5b-a63b-fff0|
		00000050  61 32 35 34 66 33 32 62  32 04 32 31 34 39 38 00  |a254f32b2.21498.|
		00000060  42 08 08 b6 e0 a7 bd 06  10 00 5a 20 0a 17 62 65  |B.........Z ..be|
		00000070  74 61 2e 6b 75 62 65 72  6e 65 74 65 73 2e 69 6f  |ta.kubernetes.io|
		00000080  2f 61 72 63 68 12 05 61  6d 64 36 34 5a 1e 0a 15  |/arch..amd64Z...|
		00000090  62 65 74 61 2e 6b 75 62  65 72 6e 65 74 65 73 2e  |beta.kubernetes.|
		000000a0  69 6f 2f 6f 73 12 05 6c  69 6e 75 78 5a 1b 0a 12  |io/os..linuxZ...|
		000000b0  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 61 72  |kubernetes.io/ar|
		000000c0  63 68 12 05 61 6d 64 36  34 5a 2e 0a 16 6b 75 62  |ch..amd64Z...ku [truncated 15484 chars]
	 >
	I0210 12:25:34.473074    5644 node_ready.go:53] node "multinode-032400-m02" has status "Ready":"False"
	I0210 12:25:34.968504    5644 type.go:168] "Request Body" body=""
	I0210 12:25:34.968504    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:25:34.968504    5644 round_trippers.go:476] Request Headers:
	I0210 12:25:34.968504    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:25:34.968504    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:25:34.972797    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:25:34.972797    5644 round_trippers.go:584] Response Headers:
	I0210 12:25:34.972797    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:25:34.972797    5644 round_trippers.go:587]     Content-Length: 3341
	I0210 12:25:34.972797    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:25:34 GMT
	I0210 12:25:34.972797    5644 round_trippers.go:587]     Audit-Id: edd22e2e-a071-49cf-847b-8645164df5ed
	I0210 12:25:34.972797    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:25:34.972797    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:25:34.972797    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:25:34.972797    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 f6 19 0a ab 0c 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 31 62 61 64  31 61 35 35 2d 65 61 64  |".*$1bad1a55-ead|
		00000040  30 2d 34 66 35 62 2d 61  36 33 62 2d 66 66 66 30  |0-4f5b-a63b-fff0|
		00000050  61 32 35 34 66 33 32 62  32 04 32 31 34 39 38 00  |a254f32b2.21498.|
		00000060  42 08 08 b6 e0 a7 bd 06  10 00 5a 20 0a 17 62 65  |B.........Z ..be|
		00000070  74 61 2e 6b 75 62 65 72  6e 65 74 65 73 2e 69 6f  |ta.kubernetes.io|
		00000080  2f 61 72 63 68 12 05 61  6d 64 36 34 5a 1e 0a 15  |/arch..amd64Z...|
		00000090  62 65 74 61 2e 6b 75 62  65 72 6e 65 74 65 73 2e  |beta.kubernetes.|
		000000a0  69 6f 2f 6f 73 12 05 6c  69 6e 75 78 5a 1b 0a 12  |io/os..linuxZ...|
		000000b0  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 61 72  |kubernetes.io/ar|
		000000c0  63 68 12 05 61 6d 64 36  34 5a 2e 0a 16 6b 75 62  |ch..amd64Z...ku [truncated 15484 chars]
	 >
	I0210 12:25:35.468808    5644 type.go:168] "Request Body" body=""
	I0210 12:25:35.468808    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:25:35.468808    5644 round_trippers.go:476] Request Headers:
	I0210 12:25:35.468808    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:25:35.468808    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:25:35.472943    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:25:35.473026    5644 round_trippers.go:584] Response Headers:
	I0210 12:25:35.473026    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:25:35.473026    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:25:35.473026    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:25:35.473026    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:25:35.473026    5644 round_trippers.go:587]     Content-Length: 3341
	I0210 12:25:35.473026    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:25:35 GMT
	I0210 12:25:35.473095    5644 round_trippers.go:587]     Audit-Id: 6eb2bcbc-3dcb-4bf2-8310-9bc6ce4f8e33
	I0210 12:25:35.473189    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 f6 19 0a ab 0c 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 31 62 61 64  31 61 35 35 2d 65 61 64  |".*$1bad1a55-ead|
		00000040  30 2d 34 66 35 62 2d 61  36 33 62 2d 66 66 66 30  |0-4f5b-a63b-fff0|
		00000050  61 32 35 34 66 33 32 62  32 04 32 31 34 39 38 00  |a254f32b2.21498.|
		00000060  42 08 08 b6 e0 a7 bd 06  10 00 5a 20 0a 17 62 65  |B.........Z ..be|
		00000070  74 61 2e 6b 75 62 65 72  6e 65 74 65 73 2e 69 6f  |ta.kubernetes.io|
		00000080  2f 61 72 63 68 12 05 61  6d 64 36 34 5a 1e 0a 15  |/arch..amd64Z...|
		00000090  62 65 74 61 2e 6b 75 62  65 72 6e 65 74 65 73 2e  |beta.kubernetes.|
		000000a0  69 6f 2f 6f 73 12 05 6c  69 6e 75 78 5a 1b 0a 12  |io/os..linuxZ...|
		000000b0  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 61 72  |kubernetes.io/ar|
		000000c0  63 68 12 05 61 6d 64 36  34 5a 2e 0a 16 6b 75 62  |ch..amd64Z...ku [truncated 15484 chars]
	 >
	I0210 12:25:35.969161    5644 type.go:168] "Request Body" body=""
	I0210 12:25:35.969303    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:25:35.969303    5644 round_trippers.go:476] Request Headers:
	I0210 12:25:35.969303    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:25:35.969303    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:25:35.972971    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:25:35.973055    5644 round_trippers.go:584] Response Headers:
	I0210 12:25:35.973055    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:25:35.973055    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:25:35.973055    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:25:35.973055    5644 round_trippers.go:587]     Content-Length: 3341
	I0210 12:25:35.973133    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:25:35 GMT
	I0210 12:25:35.973133    5644 round_trippers.go:587]     Audit-Id: f889ee21-9b3e-4dfe-a5fc-7a1b5e9503f7
	I0210 12:25:35.973133    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:25:35.973324    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 f6 19 0a ab 0c 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 31 62 61 64  31 61 35 35 2d 65 61 64  |".*$1bad1a55-ead|
		00000040  30 2d 34 66 35 62 2d 61  36 33 62 2d 66 66 66 30  |0-4f5b-a63b-fff0|
		00000050  61 32 35 34 66 33 32 62  32 04 32 31 34 39 38 00  |a254f32b2.21498.|
		00000060  42 08 08 b6 e0 a7 bd 06  10 00 5a 20 0a 17 62 65  |B.........Z ..be|
		00000070  74 61 2e 6b 75 62 65 72  6e 65 74 65 73 2e 69 6f  |ta.kubernetes.io|
		00000080  2f 61 72 63 68 12 05 61  6d 64 36 34 5a 1e 0a 15  |/arch..amd64Z...|
		00000090  62 65 74 61 2e 6b 75 62  65 72 6e 65 74 65 73 2e  |beta.kubernetes.|
		000000a0  69 6f 2f 6f 73 12 05 6c  69 6e 75 78 5a 1b 0a 12  |io/os..linuxZ...|
		000000b0  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 61 72  |kubernetes.io/ar|
		000000c0  63 68 12 05 61 6d 64 36  34 5a 2e 0a 16 6b 75 62  |ch..amd64Z...ku [truncated 15484 chars]
	 >
	I0210 12:25:36.468222    5644 type.go:168] "Request Body" body=""
	I0210 12:25:36.469160    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:25:36.469160    5644 round_trippers.go:476] Request Headers:
	I0210 12:25:36.469160    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:25:36.469160    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:25:36.475916    5644 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0210 12:25:36.475916    5644 round_trippers.go:584] Response Headers:
	I0210 12:25:36.475916    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:25:36.475916    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:25:36.475916    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:25:36.475916    5644 round_trippers.go:587]     Content-Length: 3341
	I0210 12:25:36.475916    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:25:36 GMT
	I0210 12:25:36.475916    5644 round_trippers.go:587]     Audit-Id: 092f2b04-33f1-490c-b259-835c77776041
	I0210 12:25:36.475916    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:25:36.475916    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 f6 19 0a ab 0c 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 31 62 61 64  31 61 35 35 2d 65 61 64  |".*$1bad1a55-ead|
		00000040  30 2d 34 66 35 62 2d 61  36 33 62 2d 66 66 66 30  |0-4f5b-a63b-fff0|
		00000050  61 32 35 34 66 33 32 62  32 04 32 31 34 39 38 00  |a254f32b2.21498.|
		00000060  42 08 08 b6 e0 a7 bd 06  10 00 5a 20 0a 17 62 65  |B.........Z ..be|
		00000070  74 61 2e 6b 75 62 65 72  6e 65 74 65 73 2e 69 6f  |ta.kubernetes.io|
		00000080  2f 61 72 63 68 12 05 61  6d 64 36 34 5a 1e 0a 15  |/arch..amd64Z...|
		00000090  62 65 74 61 2e 6b 75 62  65 72 6e 65 74 65 73 2e  |beta.kubernetes.|
		000000a0  69 6f 2f 6f 73 12 05 6c  69 6e 75 78 5a 1b 0a 12  |io/os..linuxZ...|
		000000b0  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 61 72  |kubernetes.io/ar|
		000000c0  63 68 12 05 61 6d 64 36  34 5a 2e 0a 16 6b 75 62  |ch..amd64Z...ku [truncated 15484 chars]
	 >
	I0210 12:25:36.475916    5644 node_ready.go:53] node "multinode-032400-m02" has status "Ready":"False"
	I0210 12:25:36.968598    5644 type.go:168] "Request Body" body=""
	I0210 12:25:36.968598    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:25:36.968598    5644 round_trippers.go:476] Request Headers:
	I0210 12:25:36.968598    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:25:36.968598    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:25:36.971597    5644 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0210 12:25:36.971597    5644 round_trippers.go:584] Response Headers:
	I0210 12:25:36.971597    5644 round_trippers.go:587]     Audit-Id: 5a976e2e-734a-4e84-b513-f95752a0b998
	I0210 12:25:36.971597    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:25:36.971597    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:25:36.971597    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:25:36.971597    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:25:36.971597    5644 round_trippers.go:587]     Content-Length: 3642
	I0210 12:25:36.971597    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:25:36 GMT
	I0210 12:25:36.971597    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 a3 1c 0a fa 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 31 62 61 64  31 61 35 35 2d 65 61 64  |".*$1bad1a55-ead|
		00000040  30 2d 34 66 35 62 2d 61  36 33 62 2d 66 66 66 30  |0-4f5b-a63b-fff0|
		00000050  61 32 35 34 66 33 32 62  32 04 32 31 35 37 38 00  |a254f32b2.21578.|
		00000060  42 08 08 b6 e0 a7 bd 06  10 00 5a 20 0a 17 62 65  |B.........Z ..be|
		00000070  74 61 2e 6b 75 62 65 72  6e 65 74 65 73 2e 69 6f  |ta.kubernetes.io|
		00000080  2f 61 72 63 68 12 05 61  6d 64 36 34 5a 1e 0a 15  |/arch..amd64Z...|
		00000090  62 65 74 61 2e 6b 75 62  65 72 6e 65 74 65 73 2e  |beta.kubernetes.|
		000000a0  69 6f 2f 6f 73 12 05 6c  69 6e 75 78 5a 1b 0a 12  |io/os..linuxZ...|
		000000b0  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 61 72  |kubernetes.io/ar|
		000000c0  63 68 12 05 61 6d 64 36  34 5a 2e 0a 16 6b 75 62  |ch..amd64Z...ku [truncated 16982 chars]
	 >
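Between polls the object keeps changing (Content-Length grows 3271 → 3341 → 3642 as the kubelet posts status updates, and the resourceVersion bytes in the dump advance 21258 → 21498 → 21578) yet Ready stays False. On a freshly joined minikube node that usually means the container runtime or CNI is not up yet, though the truncated bodies here don't show the reason directly. A sketch for surfacing it (explainNotReady is illustrative, and the "KubeletNotReady" expectation is an assumption, not taken from this log):

    // explainNotReady prints the Ready condition's reason and message for
    // a NotReady node; during bring-up this is typically KubeletNotReady
    // with a runtime/network-not-ready message.
    func explainNotReady(node *v1.Node) {
    	for _, c := range node.Status.Conditions {
    		if c.Type == v1.NodeReady && c.Status != v1.ConditionTrue {
    			fmt.Printf("%s=%s: %s (%s)\n", c.Type, c.Status, c.Reason, c.Message)
    		}
    	}
    }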
	I0210 12:25:37.468513    5644 type.go:168] "Request Body" body=""
	I0210 12:25:37.468855    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:25:37.468855    5644 round_trippers.go:476] Request Headers:
	I0210 12:25:37.468855    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:25:37.468943    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:25:37.472136    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:25:37.472219    5644 round_trippers.go:584] Response Headers:
	I0210 12:25:37.472219    5644 round_trippers.go:587]     Audit-Id: fbc98f06-918e-4376-b265-cf422f355dde
	I0210 12:25:37.472219    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:25:37.472298    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:25:37.472298    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:25:37.472298    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:25:37.472298    5644 round_trippers.go:587]     Content-Length: 3642
	I0210 12:25:37.472298    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:25:37 GMT
	I0210 12:25:37.472429    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 a3 1c 0a fa 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 31 62 61 64  31 61 35 35 2d 65 61 64  |".*$1bad1a55-ead|
		00000040  30 2d 34 66 35 62 2d 61  36 33 62 2d 66 66 66 30  |0-4f5b-a63b-fff0|
		00000050  61 32 35 34 66 33 32 62  32 04 32 31 35 37 38 00  |a254f32b2.21578.|
		00000060  42 08 08 b6 e0 a7 bd 06  10 00 5a 20 0a 17 62 65  |B.........Z ..be|
		00000070  74 61 2e 6b 75 62 65 72  6e 65 74 65 73 2e 69 6f  |ta.kubernetes.io|
		00000080  2f 61 72 63 68 12 05 61  6d 64 36 34 5a 1e 0a 15  |/arch..amd64Z...|
		00000090  62 65 74 61 2e 6b 75 62  65 72 6e 65 74 65 73 2e  |beta.kubernetes.|
		000000a0  69 6f 2f 6f 73 12 05 6c  69 6e 75 78 5a 1b 0a 12  |io/os..linuxZ...|
		000000b0  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 61 72  |kubernetes.io/ar|
		000000c0  63 68 12 05 61 6d 64 36  34 5a 2e 0a 16 6b 75 62  |ch..amd64Z...ku [truncated 16982 chars]
	 >
	I0210 12:25:37.968936    5644 type.go:168] "Request Body" body=""
	I0210 12:25:37.969255    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:25:37.969255    5644 round_trippers.go:476] Request Headers:
	I0210 12:25:37.969255    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:25:37.969255    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:25:37.973333    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:25:37.973333    5644 round_trippers.go:584] Response Headers:
	I0210 12:25:37.973333    5644 round_trippers.go:587]     Audit-Id: 3a737390-82cb-49f5-b856-26eb0ce4591f
	I0210 12:25:37.973333    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:25:37.973333    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:25:37.973333    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:25:37.973333    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:25:37.973333    5644 round_trippers.go:587]     Content-Length: 3642
	I0210 12:25:37.973333    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:25:37 GMT
	I0210 12:25:37.973333    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 a3 1c 0a fa 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 31 62 61 64  31 61 35 35 2d 65 61 64  |".*$1bad1a55-ead|
		00000040  30 2d 34 66 35 62 2d 61  36 33 62 2d 66 66 66 30  |0-4f5b-a63b-fff0|
		00000050  61 32 35 34 66 33 32 62  32 04 32 31 35 37 38 00  |a254f32b2.21578.|
		00000060  42 08 08 b6 e0 a7 bd 06  10 00 5a 20 0a 17 62 65  |B.........Z ..be|
		00000070  74 61 2e 6b 75 62 65 72  6e 65 74 65 73 2e 69 6f  |ta.kubernetes.io|
		00000080  2f 61 72 63 68 12 05 61  6d 64 36 34 5a 1e 0a 15  |/arch..amd64Z...|
		00000090  62 65 74 61 2e 6b 75 62  65 72 6e 65 74 65 73 2e  |beta.kubernetes.|
		000000a0  69 6f 2f 6f 73 12 05 6c  69 6e 75 78 5a 1b 0a 12  |io/os..linuxZ...|
		000000b0  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 61 72  |kubernetes.io/ar|
		000000c0  63 68 12 05 61 6d 64 36  34 5a 2e 0a 16 6b 75 62  |ch..amd64Z...ku [truncated 16982 chars]
	 >
	I0210 12:25:38.468113    5644 type.go:168] "Request Body" body=""
	I0210 12:25:38.468113    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:25:38.468113    5644 round_trippers.go:476] Request Headers:
	I0210 12:25:38.468113    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:25:38.468113    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:25:38.472316    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:25:38.472316    5644 round_trippers.go:584] Response Headers:
	I0210 12:25:38.472316    5644 round_trippers.go:587]     Audit-Id: 7150760b-047a-45bd-9768-f8b28cfbb768
	I0210 12:25:38.472316    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:25:38.472316    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:25:38.472316    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:25:38.472316    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:25:38.472316    5644 round_trippers.go:587]     Content-Length: 3642
	I0210 12:25:38.472316    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:25:38 GMT
	I0210 12:25:38.472611    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 a3 1c 0a fa 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 31 62 61 64  31 61 35 35 2d 65 61 64  |".*$1bad1a55-ead|
		00000040  30 2d 34 66 35 62 2d 61  36 33 62 2d 66 66 66 30  |0-4f5b-a63b-fff0|
		00000050  61 32 35 34 66 33 32 62  32 04 32 31 35 37 38 00  |a254f32b2.21578.|
		00000060  42 08 08 b6 e0 a7 bd 06  10 00 5a 20 0a 17 62 65  |B.........Z ..be|
		00000070  74 61 2e 6b 75 62 65 72  6e 65 74 65 73 2e 69 6f  |ta.kubernetes.io|
		00000080  2f 61 72 63 68 12 05 61  6d 64 36 34 5a 1e 0a 15  |/arch..amd64Z...|
		00000090  62 65 74 61 2e 6b 75 62  65 72 6e 65 74 65 73 2e  |beta.kubernetes.|
		000000a0  69 6f 2f 6f 73 12 05 6c  69 6e 75 78 5a 1b 0a 12  |io/os..linuxZ...|
		000000b0  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 61 72  |kubernetes.io/ar|
		000000c0  63 68 12 05 61 6d 64 36  34 5a 2e 0a 16 6b 75 62  |ch..amd64Z...ku [truncated 16982 chars]
	 >
	I0210 12:25:38.968707    5644 type.go:168] "Request Body" body=""
	I0210 12:25:38.969108    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:25:38.969284    5644 round_trippers.go:476] Request Headers:
	I0210 12:25:38.969284    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:25:38.969341    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:25:38.973203    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:25:38.973203    5644 round_trippers.go:584] Response Headers:
	I0210 12:25:38.973203    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:25:38.973203    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:25:38.973203    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:25:38.973203    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:25:38.973203    5644 round_trippers.go:587]     Content-Length: 3642
	I0210 12:25:38.973203    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:25:38 GMT
	I0210 12:25:38.973203    5644 round_trippers.go:587]     Audit-Id: e1503f7d-d24e-437f-950d-20527a16cf58
	I0210 12:25:38.973203    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 a3 1c 0a fa 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 31 62 61 64  31 61 35 35 2d 65 61 64  |".*$1bad1a55-ead|
		00000040  30 2d 34 66 35 62 2d 61  36 33 62 2d 66 66 66 30  |0-4f5b-a63b-fff0|
		00000050  61 32 35 34 66 33 32 62  32 04 32 31 35 37 38 00  |a254f32b2.21578.|
		00000060  42 08 08 b6 e0 a7 bd 06  10 00 5a 20 0a 17 62 65  |B.........Z ..be|
		00000070  74 61 2e 6b 75 62 65 72  6e 65 74 65 73 2e 69 6f  |ta.kubernetes.io|
		00000080  2f 61 72 63 68 12 05 61  6d 64 36 34 5a 1e 0a 15  |/arch..amd64Z...|
		00000090  62 65 74 61 2e 6b 75 62  65 72 6e 65 74 65 73 2e  |beta.kubernetes.|
		000000a0  69 6f 2f 6f 73 12 05 6c  69 6e 75 78 5a 1b 0a 12  |io/os..linuxZ...|
		000000b0  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 61 72  |kubernetes.io/ar|
		000000c0  63 68 12 05 61 6d 64 36  34 5a 2e 0a 16 6b 75 62  |ch..amd64Z...ku [truncated 16982 chars]
	 >
	I0210 12:25:38.973203    5644 node_ready.go:53] node "multinode-032400-m02" has status "Ready":"False"
	I0210 12:25:39.468644    5644 type.go:168] "Request Body" body=""
	I0210 12:25:39.469068    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:25:39.469146    5644 round_trippers.go:476] Request Headers:
	I0210 12:25:39.469146    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:25:39.469146    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:25:39.472435    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:25:39.472435    5644 round_trippers.go:584] Response Headers:
	I0210 12:25:39.472435    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:25:39.472564    5644 round_trippers.go:587]     Content-Length: 3642
	I0210 12:25:39.472564    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:25:39 GMT
	I0210 12:25:39.472564    5644 round_trippers.go:587]     Audit-Id: 72f95c0c-50dc-4102-9b67-ed24d17ec47a
	I0210 12:25:39.472564    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:25:39.472564    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:25:39.472564    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:25:39.472834    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 a3 1c 0a fa 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 31 62 61 64  31 61 35 35 2d 65 61 64  |".*$1bad1a55-ead|
		00000040  30 2d 34 66 35 62 2d 61  36 33 62 2d 66 66 66 30  |0-4f5b-a63b-fff0|
		00000050  61 32 35 34 66 33 32 62  32 04 32 31 35 37 38 00  |a254f32b2.21578.|
		00000060  42 08 08 b6 e0 a7 bd 06  10 00 5a 20 0a 17 62 65  |B.........Z ..be|
		00000070  74 61 2e 6b 75 62 65 72  6e 65 74 65 73 2e 69 6f  |ta.kubernetes.io|
		00000080  2f 61 72 63 68 12 05 61  6d 64 36 34 5a 1e 0a 15  |/arch..amd64Z...|
		00000090  62 65 74 61 2e 6b 75 62  65 72 6e 65 74 65 73 2e  |beta.kubernetes.|
		000000a0  69 6f 2f 6f 73 12 05 6c  69 6e 75 78 5a 1b 0a 12  |io/os..linuxZ...|
		000000b0  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 61 72  |kubernetes.io/ar|
		000000c0  63 68 12 05 61 6d 64 36  34 5a 2e 0a 16 6b 75 62  |ch..amd64Z...ku [truncated 16982 chars]
	 >
	I0210 12:25:39.968210    5644 type.go:168] "Request Body" body=""
	I0210 12:25:39.968210    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:25:39.968210    5644 round_trippers.go:476] Request Headers:
	I0210 12:25:39.968210    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:25:39.968210    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:25:39.972687    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:25:39.972793    5644 round_trippers.go:584] Response Headers:
	I0210 12:25:39.972793    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:25:39.972793    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:25:39.972793    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:25:39.972793    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:25:39.972793    5644 round_trippers.go:587]     Content-Length: 3642
	I0210 12:25:39.972793    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:25:39 GMT
	I0210 12:25:39.972793    5644 round_trippers.go:587]     Audit-Id: 555096bf-1701-4e35-b4c8-18d985aa6672
	I0210 12:25:39.973055    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 a3 1c 0a fa 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 31 62 61 64  31 61 35 35 2d 65 61 64  |".*$1bad1a55-ead|
		00000040  30 2d 34 66 35 62 2d 61  36 33 62 2d 66 66 66 30  |0-4f5b-a63b-fff0|
		00000050  61 32 35 34 66 33 32 62  32 04 32 31 35 37 38 00  |a254f32b2.21578.|
		00000060  42 08 08 b6 e0 a7 bd 06  10 00 5a 20 0a 17 62 65  |B.........Z ..be|
		00000070  74 61 2e 6b 75 62 65 72  6e 65 74 65 73 2e 69 6f  |ta.kubernetes.io|
		00000080  2f 61 72 63 68 12 05 61  6d 64 36 34 5a 1e 0a 15  |/arch..amd64Z...|
		00000090  62 65 74 61 2e 6b 75 62  65 72 6e 65 74 65 73 2e  |beta.kubernetes.|
		000000a0  69 6f 2f 6f 73 12 05 6c  69 6e 75 78 5a 1b 0a 12  |io/os..linuxZ...|
		000000b0  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 61 72  |kubernetes.io/ar|
		000000c0  63 68 12 05 61 6d 64 36  34 5a 2e 0a 16 6b 75 62  |ch..amd64Z...ku [truncated 16982 chars]
	 >
	I0210 12:25:40.469069    5644 type.go:168] "Request Body" body=""
	I0210 12:25:40.469069    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:25:40.469069    5644 round_trippers.go:476] Request Headers:
	I0210 12:25:40.469069    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:25:40.469069    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:25:40.473465    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:25:40.474088    5644 round_trippers.go:584] Response Headers:
	I0210 12:25:40.474136    5644 round_trippers.go:587]     Content-Length: 3642
	I0210 12:25:40.474136    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:25:40 GMT
	I0210 12:25:40.474162    5644 round_trippers.go:587]     Audit-Id: d81e68f2-9015-4087-885a-4081184acbcd
	I0210 12:25:40.474162    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:25:40.474162    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:25:40.474162    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:25:40.474162    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:25:40.474162    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 a3 1c 0a fa 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 31 62 61 64  31 61 35 35 2d 65 61 64  |".*$1bad1a55-ead|
		00000040  30 2d 34 66 35 62 2d 61  36 33 62 2d 66 66 66 30  |0-4f5b-a63b-fff0|
		00000050  61 32 35 34 66 33 32 62  32 04 32 31 35 37 38 00  |a254f32b2.21578.|
		00000060  42 08 08 b6 e0 a7 bd 06  10 00 5a 20 0a 17 62 65  |B.........Z ..be|
		00000070  74 61 2e 6b 75 62 65 72  6e 65 74 65 73 2e 69 6f  |ta.kubernetes.io|
		00000080  2f 61 72 63 68 12 05 61  6d 64 36 34 5a 1e 0a 15  |/arch..amd64Z...|
		00000090  62 65 74 61 2e 6b 75 62  65 72 6e 65 74 65 73 2e  |beta.kubernetes.|
		000000a0  69 6f 2f 6f 73 12 05 6c  69 6e 75 78 5a 1b 0a 12  |io/os..linuxZ...|
		000000b0  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 61 72  |kubernetes.io/ar|
		000000c0  63 68 12 05 61 6d 64 36  34 5a 2e 0a 16 6b 75 62  |ch..amd64Z...ku [truncated 16982 chars]
	 >
	I0210 12:25:40.968916    5644 type.go:168] "Request Body" body=""
	I0210 12:25:40.968916    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:25:40.968916    5644 round_trippers.go:476] Request Headers:
	I0210 12:25:40.968916    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:25:40.968916    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:25:40.974360    5644 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 12:25:40.974360    5644 round_trippers.go:584] Response Headers:
	I0210 12:25:40.974360    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:25:40.974360    5644 round_trippers.go:587]     Content-Length: 3642
	I0210 12:25:40.974360    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:25:40 GMT
	I0210 12:25:40.974360    5644 round_trippers.go:587]     Audit-Id: 91948b9e-e05a-4292-ab26-ae4450c54e2b
	I0210 12:25:40.974360    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:25:40.974360    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:25:40.974360    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:25:40.974360    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 a3 1c 0a fa 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 31 62 61 64  31 61 35 35 2d 65 61 64  |".*$1bad1a55-ead|
		00000040  30 2d 34 66 35 62 2d 61  36 33 62 2d 66 66 66 30  |0-4f5b-a63b-fff0|
		00000050  61 32 35 34 66 33 32 62  32 04 32 31 35 37 38 00  |a254f32b2.21578.|
		00000060  42 08 08 b6 e0 a7 bd 06  10 00 5a 20 0a 17 62 65  |B.........Z ..be|
		00000070  74 61 2e 6b 75 62 65 72  6e 65 74 65 73 2e 69 6f  |ta.kubernetes.io|
		00000080  2f 61 72 63 68 12 05 61  6d 64 36 34 5a 1e 0a 15  |/arch..amd64Z...|
		00000090  62 65 74 61 2e 6b 75 62  65 72 6e 65 74 65 73 2e  |beta.kubernetes.|
		000000a0  69 6f 2f 6f 73 12 05 6c  69 6e 75 78 5a 1b 0a 12  |io/os..linuxZ...|
		000000b0  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 61 72  |kubernetes.io/ar|
		000000c0  63 68 12 05 61 6d 64 36  34 5a 2e 0a 16 6b 75 62  |ch..amd64Z...ku [truncated 16982 chars]
	 >
	I0210 12:25:40.974360    5644 node_ready.go:53] node "multinode-032400-m02" has status "Ready":"False"
	I0210 12:25:41.469327    5644 type.go:168] "Request Body" body=""
	I0210 12:25:41.469462    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:25:41.469462    5644 round_trippers.go:476] Request Headers:
	I0210 12:25:41.469462    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:25:41.469462    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:25:41.473521    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:25:41.473573    5644 round_trippers.go:584] Response Headers:
	I0210 12:25:41.473573    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:25:41.473606    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:25:41.473606    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:25:41.473606    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:25:41.473606    5644 round_trippers.go:587]     Content-Length: 3642
	I0210 12:25:41.473606    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:25:41 GMT
	I0210 12:25:41.473606    5644 round_trippers.go:587]     Audit-Id: 4c090d5a-7ac2-4db4-ae8b-a72c7c769ded
	I0210 12:25:41.473870    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 a3 1c 0a fa 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 31 62 61 64  31 61 35 35 2d 65 61 64  |".*$1bad1a55-ead|
		00000040  30 2d 34 66 35 62 2d 61  36 33 62 2d 66 66 66 30  |0-4f5b-a63b-fff0|
		00000050  61 32 35 34 66 33 32 62  32 04 32 31 35 37 38 00  |a254f32b2.21578.|
		00000060  42 08 08 b6 e0 a7 bd 06  10 00 5a 20 0a 17 62 65  |B.........Z ..be|
		00000070  74 61 2e 6b 75 62 65 72  6e 65 74 65 73 2e 69 6f  |ta.kubernetes.io|
		00000080  2f 61 72 63 68 12 05 61  6d 64 36 34 5a 1e 0a 15  |/arch..amd64Z...|
		00000090  62 65 74 61 2e 6b 75 62  65 72 6e 65 74 65 73 2e  |beta.kubernetes.|
		000000a0  69 6f 2f 6f 73 12 05 6c  69 6e 75 78 5a 1b 0a 12  |io/os..linuxZ...|
		000000b0  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 61 72  |kubernetes.io/ar|
		000000c0  63 68 12 05 61 6d 64 36  34 5a 2e 0a 16 6b 75 62  |ch..amd64Z...ku [truncated 16982 chars]
	 >
	I0210 12:25:41.968582    5644 type.go:168] "Request Body" body=""
	I0210 12:25:41.968582    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:25:41.968582    5644 round_trippers.go:476] Request Headers:
	I0210 12:25:41.968582    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:25:41.968582    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:25:41.973019    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:25:41.973019    5644 round_trippers.go:584] Response Headers:
	I0210 12:25:41.973019    5644 round_trippers.go:587]     Content-Length: 3642
	I0210 12:25:41.973019    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:25:41 GMT
	I0210 12:25:41.973019    5644 round_trippers.go:587]     Audit-Id: 54485fea-1bf0-4461-a108-b431ac8cf56d
	I0210 12:25:41.973019    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:25:41.973019    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:25:41.973019    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:25:41.973019    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:25:41.973019    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 a3 1c 0a fa 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 31 62 61 64  31 61 35 35 2d 65 61 64  |".*$1bad1a55-ead|
		00000040  30 2d 34 66 35 62 2d 61  36 33 62 2d 66 66 66 30  |0-4f5b-a63b-fff0|
		00000050  61 32 35 34 66 33 32 62  32 04 32 31 35 37 38 00  |a254f32b2.21578.|
		00000060  42 08 08 b6 e0 a7 bd 06  10 00 5a 20 0a 17 62 65  |B.........Z ..be|
		00000070  74 61 2e 6b 75 62 65 72  6e 65 74 65 73 2e 69 6f  |ta.kubernetes.io|
		00000080  2f 61 72 63 68 12 05 61  6d 64 36 34 5a 1e 0a 15  |/arch..amd64Z...|
		00000090  62 65 74 61 2e 6b 75 62  65 72 6e 65 74 65 73 2e  |beta.kubernetes.|
		000000a0  69 6f 2f 6f 73 12 05 6c  69 6e 75 78 5a 1b 0a 12  |io/os..linuxZ...|
		000000b0  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 61 72  |kubernetes.io/ar|
		000000c0  63 68 12 05 61 6d 64 36  34 5a 2e 0a 16 6b 75 62  |ch..amd64Z...ku [truncated 16982 chars]
	 >
	I0210 12:25:42.469185    5644 type.go:168] "Request Body" body=""
	I0210 12:25:42.469185    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:25:42.469185    5644 round_trippers.go:476] Request Headers:
	I0210 12:25:42.469185    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:25:42.469185    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:25:42.474290    5644 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 12:25:42.474372    5644 round_trippers.go:584] Response Headers:
	I0210 12:25:42.474372    5644 round_trippers.go:587]     Audit-Id: 6874f952-1555-4448-9fb7-7c8ec6229517
	I0210 12:25:42.474372    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:25:42.474447    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:25:42.474447    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:25:42.474447    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:25:42.474447    5644 round_trippers.go:587]     Content-Length: 3642
	I0210 12:25:42.474447    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:25:42 GMT
	I0210 12:25:42.474697    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 a3 1c 0a fa 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 31 62 61 64  31 61 35 35 2d 65 61 64  |".*$1bad1a55-ead|
		00000040  30 2d 34 66 35 62 2d 61  36 33 62 2d 66 66 66 30  |0-4f5b-a63b-fff0|
		00000050  61 32 35 34 66 33 32 62  32 04 32 31 35 37 38 00  |a254f32b2.21578.|
		00000060  42 08 08 b6 e0 a7 bd 06  10 00 5a 20 0a 17 62 65  |B.........Z ..be|
		00000070  74 61 2e 6b 75 62 65 72  6e 65 74 65 73 2e 69 6f  |ta.kubernetes.io|
		00000080  2f 61 72 63 68 12 05 61  6d 64 36 34 5a 1e 0a 15  |/arch..amd64Z...|
		00000090  62 65 74 61 2e 6b 75 62  65 72 6e 65 74 65 73 2e  |beta.kubernetes.|
		000000a0  69 6f 2f 6f 73 12 05 6c  69 6e 75 78 5a 1b 0a 12  |io/os..linuxZ...|
		000000b0  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 61 72  |kubernetes.io/ar|
		000000c0  63 68 12 05 61 6d 64 36  34 5a 2e 0a 16 6b 75 62  |ch..amd64Z...ku [truncated 16982 chars]
	 >
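	(The repeated "Accept: application/vnd.kubernetes.protobuf,application/json" request header is client-side content negotiation: the client asks for protobuf first and falls back to JSON, which is why the responses above come back with a protobuf Content-Type. A sketch of how that is configured on a client-go rest.Config; the header values are copied from the log, the helper name is an assumption, not minikube's own code:

	package main

	import (
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// newProtobufClient builds a clientset that negotiates protobuf responses.
	func newProtobufClient(kubeconfig string) (*kubernetes.Clientset, error) {
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			return nil, err
		}
		// Ask for protobuf first, fall back to JSON -- this produces the exact
		// Accept header and protobuf Content-Type responses seen in the log.
		cfg.AcceptContentTypes = "application/vnd.kubernetes.protobuf,application/json"
		cfg.ContentType = "application/vnd.kubernetes.protobuf"
		return kubernetes.NewForConfig(cfg)
	}
	)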
	I0210 12:25:42.968281    5644 type.go:168] "Request Body" body=""
	I0210 12:25:42.968645    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:25:42.968645    5644 round_trippers.go:476] Request Headers:
	I0210 12:25:42.968645    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:25:42.968645    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:25:42.972898    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:25:42.972963    5644 round_trippers.go:584] Response Headers:
	I0210 12:25:42.972963    5644 round_trippers.go:587]     Audit-Id: c4748307-b742-4d77-b863-2b8a72431791
	I0210 12:25:42.972963    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:25:42.972963    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:25:42.973020    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:25:42.973020    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:25:42.973020    5644 round_trippers.go:587]     Content-Length: 3520
	I0210 12:25:42.973020    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:25:42 GMT
	I0210 12:25:42.973277    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 a9 1b 0a b0 0f 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 31 62 61 64  31 61 35 35 2d 65 61 64  |".*$1bad1a55-ead|
		00000040  30 2d 34 66 35 62 2d 61  36 33 62 2d 66 66 66 30  |0-4f5b-a63b-fff0|
		00000050  61 32 35 34 66 33 32 62  32 04 32 31 36 37 38 00  |a254f32b2.21678.|
		00000060  42 08 08 b6 e0 a7 bd 06  10 00 5a 20 0a 17 62 65  |B.........Z ..be|
		00000070  74 61 2e 6b 75 62 65 72  6e 65 74 65 73 2e 69 6f  |ta.kubernetes.io|
		00000080  2f 61 72 63 68 12 05 61  6d 64 36 34 5a 1e 0a 15  |/arch..amd64Z...|
		00000090  62 65 74 61 2e 6b 75 62  65 72 6e 65 74 65 73 2e  |beta.kubernetes.|
		000000a0  69 6f 2f 6f 73 12 05 6c  69 6e 75 78 5a 1b 0a 12  |io/os..linuxZ...|
		000000b0  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 61 72  |kubernetes.io/ar|
		000000c0  63 68 12 05 61 6d 64 36  34 5a 2e 0a 16 6b 75 62  |ch..amd64Z...ku [truncated 16356 chars]
	 >
	I0210 12:25:42.973405    5644 node_ready.go:49] node "multinode-032400-m02" has status "Ready":"True"
	I0210 12:25:42.973405    5644 node_ready.go:38] duration metric: took 15.005332s for node "multinode-032400-m02" to be "Ready" ...
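	(The node_ready check above decodes each polled Node and inspects its Ready condition. A minimal sketch of that check against a plain client-go clientset; the function name is hypothetical:

	package main

	import (
		"context"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// nodeReady reports whether a node's Ready condition is True, i.e. the
	// `has status "Ready":"True"` check logged above.
	func nodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}
	)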
	I0210 12:25:42.973405    5644 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0210 12:25:42.973491    5644 type.go:204] "Request Body" body=""
	I0210 12:25:42.973556    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods
	I0210 12:25:42.973556    5644 round_trippers.go:476] Request Headers:
	I0210 12:25:42.973556    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:25:42.973627    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:25:42.979691    5644 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0210 12:25:42.979691    5644 round_trippers.go:584] Response Headers:
	I0210 12:25:42.979691    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:25:42.979691    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:25:42.979691    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:25:42.979691    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:25:43 GMT
	I0210 12:25:42.979691    5644 round_trippers.go:587]     Audit-Id: fa8f912b-2df1-4f6f-92df-58bc5b39c417
	I0210 12:25:42.979691    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:25:42.981851    5644 type.go:204] "Response Body" body=<
		00000000  6b 38 73 00 0a 0d 0a 02  76 31 12 07 50 6f 64 4c  |k8s.....v1..PodL|
		00000010  69 73 74 12 ea e9 03 0a  0a 0a 00 12 04 32 31 36  |ist..........216|
		00000020  38 1a 00 12 c5 28 0a af  19 0a 18 63 6f 72 65 64  |8....(.....cored|
		00000030  6e 73 2d 36 36 38 64 36  62 66 39 62 63 2d 77 38  |ns-668d6bf9bc-w8|
		00000040  72 72 39 12 13 63 6f 72  65 64 6e 73 2d 36 36 38  |rr9..coredns-668|
		00000050  64 36 62 66 39 62 63 2d  1a 0b 6b 75 62 65 2d 73  |d6bf9bc-..kube-s|
		00000060  79 73 74 65 6d 22 00 2a  24 65 34 35 61 33 37 62  |ystem".*$e45a37b|
		00000070  66 2d 65 37 64 61 2d 34  31 32 39 2d 62 62 37 65  |f-e7da-4129-bb7e|
		00000080  2d 38 62 65 37 64 62 65  39 33 65 30 39 32 04 31  |-8be7dbe93e092.1|
		00000090  39 37 32 38 00 42 08 08  92 d4 a7 bd 06 10 00 5a  |9728.B.........Z|
		000000a0  13 0a 07 6b 38 73 2d 61  70 70 12 08 6b 75 62 65  |...k8s-app..kube|
		000000b0  2d 64 6e 73 5a 1f 0a 11  70 6f 64 2d 74 65 6d 70  |-dnsZ...pod-temp|
		000000c0  6c 61 74 65 2d 68 61 73  68 12 0a 36 36 38 64 36  |late-hash..668d [truncated 308724 chars]
	 >
	I0210 12:25:42.982555    5644 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-w8rr9" in "kube-system" namespace to be "Ready" ...
	I0210 12:25:42.982555    5644 type.go:168] "Request Body" body=""
	I0210 12:25:42.982555    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-w8rr9
	I0210 12:25:42.982555    5644 round_trippers.go:476] Request Headers:
	I0210 12:25:42.982555    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:25:42.982555    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:25:42.985730    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:25:42.985730    5644 round_trippers.go:584] Response Headers:
	I0210 12:25:42.985730    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:25:43 GMT
	I0210 12:25:42.985730    5644 round_trippers.go:587]     Audit-Id: 37517305-d14b-40b1-a6f6-9e4d707a3892
	I0210 12:25:42.985730    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:25:42.985730    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:25:42.985730    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:25:42.985730    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:25:42.985730    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  c5 28 0a af 19 0a 18 63  6f 72 65 64 6e 73 2d 36  |.(.....coredns-6|
		00000020  36 38 64 36 62 66 39 62  63 2d 77 38 72 72 39 12  |68d6bf9bc-w8rr9.|
		00000030  13 63 6f 72 65 64 6e 73  2d 36 36 38 64 36 62 66  |.coredns-668d6bf|
		00000040  39 62 63 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |9bc-..kube-syste|
		00000050  6d 22 00 2a 24 65 34 35  61 33 37 62 66 2d 65 37  |m".*$e45a37bf-e7|
		00000060  64 61 2d 34 31 32 39 2d  62 62 37 65 2d 38 62 65  |da-4129-bb7e-8be|
		00000070  37 64 62 65 39 33 65 30  39 32 04 31 39 37 32 38  |7dbe93e092.19728|
		00000080  00 42 08 08 92 d4 a7 bd  06 10 00 5a 13 0a 07 6b  |.B.........Z...k|
		00000090  38 73 2d 61 70 70 12 08  6b 75 62 65 2d 64 6e 73  |8s-app..kube-dns|
		000000a0  5a 1f 0a 11 70 6f 64 2d  74 65 6d 70 6c 61 74 65  |Z...pod-template|
		000000b0  2d 68 61 73 68 12 0a 36  36 38 64 36 62 66 39 62  |-hash..668d6bf9b|
		000000c0  63 6a 53 0a 0a 52 65 70  6c 69 63 61 53 65 74 1a  |cjS..ReplicaSet [truncated 24725 chars]
	 >
	I0210 12:25:42.986731    5644 type.go:168] "Request Body" body=""
	I0210 12:25:42.986731    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:25:42.986731    5644 round_trippers.go:476] Request Headers:
	I0210 12:25:42.986731    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:25:42.986731    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:25:42.989755    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:25:42.989800    5644 round_trippers.go:584] Response Headers:
	I0210 12:25:42.989800    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:25:42.989800    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:25:43 GMT
	I0210 12:25:42.989800    5644 round_trippers.go:587]     Audit-Id: 28e681d0-4883-4b07-8113-92d25c9082de
	I0210 12:25:42.989800    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:25:42.989800    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:25:42.989800    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:25:42.990208    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d4 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 39  33 35 38 00 42 08 08 86  |1b262.19358.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22276 chars]
	 >
	I0210 12:25:42.990384    5644 pod_ready.go:93] pod "coredns-668d6bf9bc-w8rr9" in "kube-system" namespace has status "Ready":"True"
	I0210 12:25:42.990384    5644 pod_ready.go:82] duration metric: took 7.8281ms for pod "coredns-668d6bf9bc-w8rr9" in "kube-system" namespace to be "Ready" ...
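	(Each per-pod wait in this stretch of the log, coredns here and etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler below, is a poll loop: GET the pod, inspect its PodReady condition, retry until the 6m0s budget is spent. A minimal sketch under those assumptions; the helper and the 500ms interval are illustrative, not minikube's actual code:

	package main

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitPodReady polls a pod until its PodReady condition is True or the
	// timeout (6m0s in the log) elapses.
	func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, timeout, true,
			func(ctx context.Context) (bool, error) {
				pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, err
				}
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
	}
	)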
	I0210 12:25:42.990425    5644 pod_ready.go:79] waiting up to 6m0s for pod "etcd-multinode-032400" in "kube-system" namespace to be "Ready" ...
	I0210 12:25:42.990499    5644 type.go:168] "Request Body" body=""
	I0210 12:25:42.990571    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-032400
	I0210 12:25:42.990603    5644 round_trippers.go:476] Request Headers:
	I0210 12:25:42.990603    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:25:42.990603    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:25:42.992352    5644 round_trippers.go:581] Response Status: 200 OK in 1 milliseconds
	I0210 12:25:42.992352    5644 round_trippers.go:584] Response Headers:
	I0210 12:25:42.992352    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:25:42.992352    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:25:43 GMT
	I0210 12:25:42.992352    5644 round_trippers.go:587]     Audit-Id: 5d3e1470-0d15-4d06-a8af-615f4c71ea0b
	I0210 12:25:42.992352    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:25:42.992352    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:25:42.992352    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:25:42.992352    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  81 2c 0a 9f 1a 0a 15 65  74 63 64 2d 6d 75 6c 74  |.,.....etcd-mult|
		00000020  69 6e 6f 64 65 2d 30 33  32 34 30 30 12 00 1a 0b  |inode-032400....|
		00000030  6b 75 62 65 2d 73 79 73  74 65 6d 22 00 2a 24 32  |kube-system".*$2|
		00000040  36 64 34 31 31 30 66 2d  39 61 33 39 2d 34 38 64  |6d4110f-9a39-48d|
		00000050  65 2d 61 34 33 33 2d 35  36 37 61 37 35 37 38 39  |e-a433-567a75789|
		00000060  62 65 30 32 04 31 38 37  30 38 00 42 08 08 e6 de  |be02.18708.B....|
		00000070  a7 bd 06 10 00 5a 11 0a  09 63 6f 6d 70 6f 6e 65  |.....Z...compone|
		00000080  6e 74 12 04 65 74 63 64  5a 15 0a 04 74 69 65 72  |nt..etcdZ...tier|
		00000090  12 0d 63 6f 6e 74 72 6f  6c 2d 70 6c 61 6e 65 62  |..control-planeb|
		000000a0  4f 0a 30 6b 75 62 65 61  64 6d 2e 6b 75 62 65 72  |O.0kubeadm.kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 65 74 63 64 2e 61 64  |netes.io/etcd.ad|
		000000c0  76 65 72 74 69 73 65 2d  63 6c 69 65 6e 74 2d 75  |vertise-client- [truncated 26933 chars]
	 >
	I0210 12:25:42.992352    5644 type.go:168] "Request Body" body=""
	I0210 12:25:42.992352    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:25:42.992352    5644 round_trippers.go:476] Request Headers:
	I0210 12:25:42.992352    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:25:42.992352    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:25:42.996129    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:25:42.996129    5644 round_trippers.go:584] Response Headers:
	I0210 12:25:42.996129    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:25:43 GMT
	I0210 12:25:42.996129    5644 round_trippers.go:587]     Audit-Id: 9840de94-cd18-4dff-bb82-8a149fd3cfe0
	I0210 12:25:42.996129    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:25:42.996211    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:25:42.996211    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:25:42.996211    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:25:42.996463    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d4 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 39  33 35 38 00 42 08 08 86  |1b262.19358.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22276 chars]
	 >
	I0210 12:25:42.996608    5644 pod_ready.go:93] pod "etcd-multinode-032400" in "kube-system" namespace has status "Ready":"True"
	I0210 12:25:42.996632    5644 pod_ready.go:82] duration metric: took 6.1741ms for pod "etcd-multinode-032400" in "kube-system" namespace to be "Ready" ...
	I0210 12:25:42.996632    5644 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-multinode-032400" in "kube-system" namespace to be "Ready" ...
	I0210 12:25:42.996744    5644 type.go:168] "Request Body" body=""
	I0210 12:25:42.996744    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-032400
	I0210 12:25:42.996817    5644 round_trippers.go:476] Request Headers:
	I0210 12:25:42.996817    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:25:42.996817    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:25:42.999005    5644 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0210 12:25:42.999005    5644 round_trippers.go:584] Response Headers:
	I0210 12:25:42.999005    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:25:42.999464    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:25:43 GMT
	I0210 12:25:42.999464    5644 round_trippers.go:587]     Audit-Id: 8d9132ed-ef43-49eb-ade8-43ae3c18157f
	I0210 12:25:42.999464    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:25:42.999464    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:25:42.999464    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:25:42.999824    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  af 35 0a af 1c 0a 1f 6b  75 62 65 2d 61 70 69 73  |.5.....kube-apis|
		00000020  65 72 76 65 72 2d 6d 75  6c 74 69 6e 6f 64 65 2d  |erver-multinode-|
		00000030  30 33 32 34 30 30 12 00  1a 0b 6b 75 62 65 2d 73  |032400....kube-s|
		00000040  79 73 74 65 6d 22 00 2a  24 39 65 36 38 38 61 61  |ystem".*$9e688aa|
		00000050  65 2d 30 39 64 61 2d 34  62 35 63 2d 62 61 34 64  |e-09da-4b5c-ba4d|
		00000060  2d 64 65 36 61 61 36 34  63 62 33 34 65 32 04 31  |-de6aa64cb34e2.1|
		00000070  38 36 36 38 00 42 08 08  e6 de a7 bd 06 10 00 5a  |8668.B.........Z|
		00000080  1b 0a 09 63 6f 6d 70 6f  6e 65 6e 74 12 0e 6b 75  |...component..ku|
		00000090  62 65 2d 61 70 69 73 65  72 76 65 72 5a 15 0a 04  |be-apiserverZ...|
		000000a0  74 69 65 72 12 0d 63 6f  6e 74 72 6f 6c 2d 70 6c  |tier..control-pl|
		000000b0  61 6e 65 62 56 0a 3f 6b  75 62 65 61 64 6d 2e 6b  |anebV.?kubeadm.k|
		000000c0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 6b 75 62  |ubernetes.io/ku [truncated 32856 chars]
	 >
	I0210 12:25:42.999959    5644 type.go:168] "Request Body" body=""
	I0210 12:25:43.000034    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:25:43.000034    5644 round_trippers.go:476] Request Headers:
	I0210 12:25:43.000052    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:25:43.000088    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:25:43.002027    5644 round_trippers.go:581] Response Status: 200 OK in 1 milliseconds
	I0210 12:25:43.002027    5644 round_trippers.go:584] Response Headers:
	I0210 12:25:43.002027    5644 round_trippers.go:587]     Audit-Id: 764e1440-a72c-442a-9b52-b86169ecb8ef
	I0210 12:25:43.002027    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:25:43.002027    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:25:43.002027    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:25:43.002027    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:25:43.002027    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:25:43 GMT
	I0210 12:25:43.002027    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d4 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 39  33 35 38 00 42 08 08 86  |1b262.19358.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22276 chars]
	 >
	I0210 12:25:43.002027    5644 pod_ready.go:93] pod "kube-apiserver-multinode-032400" in "kube-system" namespace has status "Ready":"True"
	I0210 12:25:43.002027    5644 pod_ready.go:82] duration metric: took 5.3953ms for pod "kube-apiserver-multinode-032400" in "kube-system" namespace to be "Ready" ...
	I0210 12:25:43.002027    5644 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-multinode-032400" in "kube-system" namespace to be "Ready" ...
	I0210 12:25:43.002027    5644 type.go:168] "Request Body" body=""
	I0210 12:25:43.002027    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-032400
	I0210 12:25:43.002027    5644 round_trippers.go:476] Request Headers:
	I0210 12:25:43.002027    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:25:43.002027    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:25:43.005214    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:25:43.005214    5644 round_trippers.go:584] Response Headers:
	I0210 12:25:43.005214    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:25:43.005214    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:25:43.005214    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:25:43.005214    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:25:43.005214    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:25:43 GMT
	I0210 12:25:43.005538    5644 round_trippers.go:587]     Audit-Id: 7ee4de1b-71cf-4355-8593-45069b93f763
	I0210 12:25:43.005810    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  df 31 0a 9b 1d 0a 28 6b  75 62 65 2d 63 6f 6e 74  |.1....(kube-cont|
		00000020  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 2d 6d  |roller-manager-m|
		00000030  75 6c 74 69 6e 6f 64 65  2d 30 33 32 34 30 30 12  |ultinode-032400.|
		00000040  00 1a 0b 6b 75 62 65 2d  73 79 73 74 65 6d 22 00  |...kube-system".|
		00000050  2a 24 63 33 65 35 30 35  34 30 2d 64 64 36 36 2d  |*$c3e50540-dd66-|
		00000060  34 37 61 30 2d 61 34 33  33 2d 36 32 32 64 66 35  |47a0-a433-622df5|
		00000070  39 66 62 34 34 31 32 04  31 38 38 32 38 00 42 08  |9fb4412.18828.B.|
		00000080  08 8b d4 a7 bd 06 10 00  5a 24 0a 09 63 6f 6d 70  |........Z$..comp|
		00000090  6f 6e 65 6e 74 12 17 6b  75 62 65 2d 63 6f 6e 74  |onent..kube-cont|
		000000a0  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 5a 15  |roller-managerZ.|
		000000b0  0a 04 74 69 65 72 12 0d  63 6f 6e 74 72 6f 6c 2d  |..tier..control-|
		000000c0  70 6c 61 6e 65 62 3d 0a  19 6b 75 62 65 72 6e 65  |planeb=..kubern [truncated 30565 chars]
	 >
	I0210 12:25:43.005998    5644 type.go:168] "Request Body" body=""
	I0210 12:25:43.006060    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:25:43.006060    5644 round_trippers.go:476] Request Headers:
	I0210 12:25:43.006060    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:25:43.006124    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:25:43.008437    5644 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0210 12:25:43.008526    5644 round_trippers.go:584] Response Headers:
	I0210 12:25:43.008526    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:25:43.008526    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:25:43.008526    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:25:43.008526    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:25:43 GMT
	I0210 12:25:43.008526    5644 round_trippers.go:587]     Audit-Id: d394fb82-5d3b-4969-abc1-f95d81c3f240
	I0210 12:25:43.008526    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:25:43.008720    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d4 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 39  33 35 38 00 42 08 08 86  |1b262.19358.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22276 chars]
	 >
	I0210 12:25:43.008720    5644 pod_ready.go:93] pod "kube-controller-manager-multinode-032400" in "kube-system" namespace has status "Ready":"True"
	I0210 12:25:43.008720    5644 pod_ready.go:82] duration metric: took 6.693ms for pod "kube-controller-manager-multinode-032400" in "kube-system" namespace to be "Ready" ...
	I0210 12:25:43.008720    5644 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-rrh82" in "kube-system" namespace to be "Ready" ...
	I0210 12:25:43.008720    5644 type.go:168] "Request Body" body=""
	I0210 12:25:43.168958    5644 request.go:661] Waited for 160.2359ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rrh82
	I0210 12:25:43.168958    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rrh82
	I0210 12:25:43.168958    5644 round_trippers.go:476] Request Headers:
	I0210 12:25:43.168958    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:25:43.168958    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:25:43.173132    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:25:43.173132    5644 round_trippers.go:584] Response Headers:
	I0210 12:25:43.173132    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:25:43.173132    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:25:43.173132    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:25:43.173132    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:25:43.173132    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:25:43 GMT
	I0210 12:25:43.173132    5644 round_trippers.go:587]     Audit-Id: 712c075c-8954-41d5-9aa1-918e0bd9775e
	I0210 12:25:43.173132    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  91 26 0a c1 14 0a 10 6b  75 62 65 2d 70 72 6f 78  |.&.....kube-prox|
		00000020  79 2d 72 72 68 38 32 12  0b 6b 75 62 65 2d 70 72  |y-rrh82..kube-pr|
		00000030  6f 78 79 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |oxy-..kube-syste|
		00000040  6d 22 00 2a 24 39 61 64  37 66 32 38 31 2d 66 30  |m".*$9ad7f281-f0|
		00000050  32 32 2d 34 66 33 62 2d  62 32 30 36 2d 33 39 63  |22-4f3b-b206-39c|
		00000060  65 34 32 37 31 33 63 66  39 32 04 31 38 34 34 38  |e42713cf92.18448|
		00000070  00 42 08 08 92 d4 a7 bd  06 10 00 5a 26 0a 18 63  |.B.........Z&..c|
		00000080  6f 6e 74 72 6f 6c 6c 65  72 2d 72 65 76 69 73 69  |ontroller-revisi|
		00000090  6f 6e 2d 68 61 73 68 12  0a 35 36 36 64 37 62 39  |on-hash..566d7b9|
		000000a0  66 38 35 5a 15 0a 07 6b  38 73 2d 61 70 70 12 0a  |f85Z...k8s-app..|
		000000b0  6b 75 62 65 2d 70 72 6f  78 79 5a 1c 0a 17 70 6f  |kube-proxyZ...po|
		000000c0  64 2d 74 65 6d 70 6c 61  74 65 2d 67 65 6e 65 72  |d-template-gene [truncated 23220 chars]
	 >
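	(The "Waited for ... due to client-side throttling, not priority and fairness" lines in this stretch come from client-go's local token-bucket rate limiter, which delays a request before it ever leaves the process once the QPS/burst budget is exhausted. A sketch of where that limiter is configured; the QPS and Burst values are client-go's documented defaults, used here as illustrative assumptions rather than minikube's settings:

	package main

	import (
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// newClient builds a clientset whose requests are throttled locally; when
	// the token bucket empties, client-go sleeps and logs the wait, producing
	// the "Waited for ..." lines seen above.
	func newClient(kubeconfig string) (*kubernetes.Clientset, error) {
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			return nil, err
		}
		cfg.QPS = 5    // steady-state requests per second (client-go default)
		cfg.Burst = 10 // transient burst allowance (client-go default)
		return kubernetes.NewForConfig(cfg)
	}
	)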
	I0210 12:25:43.173900    5644 type.go:168] "Request Body" body=""
	I0210 12:25:43.369401    5644 request.go:661] Waited for 195.4287ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:25:43.369401    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:25:43.369401    5644 round_trippers.go:476] Request Headers:
	I0210 12:25:43.369401    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:25:43.369401    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:25:43.373555    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:25:43.373555    5644 round_trippers.go:584] Response Headers:
	I0210 12:25:43.373555    5644 round_trippers.go:587]     Audit-Id: e0f074fb-e763-4414-9b1f-3cd7688c9edc
	I0210 12:25:43.373555    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:25:43.373555    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:25:43.373555    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:25:43.373555    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:25:43.373555    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:25:43 GMT
	I0210 12:25:43.373555    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d4 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 39  33 35 38 00 42 08 08 86  |1b262.19358.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22276 chars]
	 >
	I0210 12:25:43.373555    5644 pod_ready.go:93] pod "kube-proxy-rrh82" in "kube-system" namespace has status "Ready":"True"
	I0210 12:25:43.373555    5644 pod_ready.go:82] duration metric: took 364.8304ms for pod "kube-proxy-rrh82" in "kube-system" namespace to be "Ready" ...
	I0210 12:25:43.373555    5644 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-tbtqd" in "kube-system" namespace to be "Ready" ...
	I0210 12:25:43.373555    5644 type.go:168] "Request Body" body=""
	I0210 12:25:43.568877    5644 request.go:661] Waited for 195.3203ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tbtqd
	I0210 12:25:43.569084    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tbtqd
	I0210 12:25:43.569084    5644 round_trippers.go:476] Request Headers:
	I0210 12:25:43.569084    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:25:43.569084    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:25:43.573818    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:25:43.573818    5644 round_trippers.go:584] Response Headers:
	I0210 12:25:43.573818    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:25:43.573818    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:25:43 GMT
	I0210 12:25:43.573818    5644 round_trippers.go:587]     Audit-Id: 12e60692-567a-4bb9-b87e-fc9f5e88f78f
	I0210 12:25:43.573818    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:25:43.573818    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:25:43.573818    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:25:43.573818    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  aa 26 0a c3 15 0a 10 6b  75 62 65 2d 70 72 6f 78  |.&.....kube-prox|
		00000020  79 2d 74 62 74 71 64 12  0b 6b 75 62 65 2d 70 72  |y-tbtqd..kube-pr|
		00000030  6f 78 79 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |oxy-..kube-syste|
		00000040  6d 22 00 2a 24 62 64 66  38 63 62 31 30 2d 30 35  |m".*$bdf8cb10-05|
		00000050  62 65 2d 34 36 30 62 2d  61 39 63 36 2d 62 63 35  |be-460b-a9c6-bc5|
		00000060  31 65 61 38 38 34 32 36  38 32 04 31 37 34 32 38  |1ea8842682.17428|
		00000070  00 42 08 08 e9 d7 a7 bd  06 10 00 5a 26 0a 18 63  |.B.........Z&..c|
		00000080  6f 6e 74 72 6f 6c 6c 65  72 2d 72 65 76 69 73 69  |ontroller-revisi|
		00000090  6f 6e 2d 68 61 73 68 12  0a 35 36 36 64 37 62 39  |on-hash..566d7b9|
		000000a0  66 38 35 5a 15 0a 07 6b  38 73 2d 61 70 70 12 0a  |f85Z...k8s-app..|
		000000b0  6b 75 62 65 2d 70 72 6f  78 79 5a 1c 0a 17 70 6f  |kube-proxyZ...po|
		000000c0  64 2d 74 65 6d 70 6c 61  74 65 2d 67 65 6e 65 72  |d-template-gene [truncated 23308 chars]
	 >
	I0210 12:25:43.574540    5644 type.go:168] "Request Body" body=""
	I0210 12:25:43.768931    5644 request.go:661] Waited for 194.3887ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.129.181:8443/api/v1/nodes/multinode-032400-m03
	I0210 12:25:43.768931    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400-m03
	I0210 12:25:43.768931    5644 round_trippers.go:476] Request Headers:
	I0210 12:25:43.768931    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:25:43.768931    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:25:43.774161    5644 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 12:25:43.774253    5644 round_trippers.go:584] Response Headers:
	I0210 12:25:43.774253    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:25:43.774335    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:25:43.774355    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:25:43.774355    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:25:43.774355    5644 round_trippers.go:587]     Content-Length: 3883
	I0210 12:25:43.774355    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:25:43 GMT
	I0210 12:25:43.774355    5644 round_trippers.go:587]     Audit-Id: 7b52aa9a-de2b-43f8-93a1-e7960612a5dc
	I0210 12:25:43.774617    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 94 1e 0a eb 12 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 33 12 00 1a 00  |e-032400-m03....|
		00000030  22 00 2a 24 65 33 35 38  36 61 30 65 2d 35 36 63  |".*$e3586a0e-56c|
		00000040  30 2d 34 65 34 39 2d 39  64 64 33 2d 38 33 65 35  |0-4e49-9dd3-83e5|
		00000050  32 39 63 66 65 35 63 34  32 04 31 38 35 34 38 00  |29cfe5c42.18548.|
		00000060  42 08 08 db dc a7 bd 06  10 00 5a 20 0a 17 62 65  |B.........Z ..be|
		00000070  74 61 2e 6b 75 62 65 72  6e 65 74 65 73 2e 69 6f  |ta.kubernetes.io|
		00000080  2f 61 72 63 68 12 05 61  6d 64 36 34 5a 1e 0a 15  |/arch..amd64Z...|
		00000090  62 65 74 61 2e 6b 75 62  65 72 6e 65 74 65 73 2e  |beta.kubernetes.|
		000000a0  69 6f 2f 6f 73 12 05 6c  69 6e 75 78 5a 1b 0a 12  |io/os..linuxZ...|
		000000b0  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 61 72  |kubernetes.io/ar|
		000000c0  63 68 12 05 61 6d 64 36  34 5a 2e 0a 16 6b 75 62  |ch..amd64Z...ku [truncated 18168 chars]
	 >
	I0210 12:25:43.774784    5644 pod_ready.go:98] node "multinode-032400-m03" hosting pod "kube-proxy-tbtqd" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-032400-m03" has status "Ready":"Unknown"
	I0210 12:25:43.774852    5644 pod_ready.go:82] duration metric: took 401.2929ms for pod "kube-proxy-tbtqd" in "kube-system" namespace to be "Ready" ...
	E0210 12:25:43.774852    5644 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-032400-m03" hosting pod "kube-proxy-tbtqd" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-032400-m03" has status "Ready":"Unknown"
	I0210 12:25:43.774921    5644 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-xltxj" in "kube-system" namespace to be "Ready" ...
	I0210 12:25:43.775058    5644 type.go:168] "Request Body" body=""
	I0210 12:25:43.968879    5644 request.go:661] Waited for 193.7779ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xltxj
	I0210 12:25:43.968879    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xltxj
	I0210 12:25:43.968879    5644 round_trippers.go:476] Request Headers:
	I0210 12:25:43.968879    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:25:43.968879    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:25:43.973860    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:25:43.974005    5644 round_trippers.go:584] Response Headers:
	I0210 12:25:43.974077    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:25:43.974077    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:25:43.974077    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:25:43.974077    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:25:43 GMT
	I0210 12:25:43.974077    5644 round_trippers.go:587]     Audit-Id: 4b5d66c8-083d-4e8c-8f15-62926090b727
	I0210 12:25:43.974077    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:25:43.974077    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  ab 25 0a c1 14 0a 10 6b  75 62 65 2d 70 72 6f 78  |.%.....kube-prox|
		00000020  79 2d 78 6c 74 78 6a 12  0b 6b 75 62 65 2d 70 72  |y-xltxj..kube-pr|
		00000030  6f 78 79 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |oxy-..kube-syste|
		00000040  6d 22 00 2a 24 39 61 35  65 35 38 62 63 2d 35 34  |m".*$9a5e58bc-54|
		00000050  62 31 2d 34 33 62 39 2d  61 38 38 39 2d 30 64 35  |b1-43b9-a889-0d5|
		00000060  30 64 34 33 35 61 66 38  33 32 04 32 31 33 38 38  |0d435af832.21388|
		00000070  00 42 08 08 d0 d5 a7 bd  06 10 00 5a 26 0a 18 63  |.B.........Z&..c|
		00000080  6f 6e 74 72 6f 6c 6c 65  72 2d 72 65 76 69 73 69  |ontroller-revisi|
		00000090  6f 6e 2d 68 61 73 68 12  0a 35 36 36 64 37 62 39  |on-hash..566d7b9|
		000000a0  66 38 35 5a 15 0a 07 6b  38 73 2d 61 70 70 12 0a  |f85Z...k8s-app..|
		000000b0  6b 75 62 65 2d 70 72 6f  78 79 5a 1c 0a 17 70 6f  |kube-proxyZ...po|
		000000c0  64 2d 74 65 6d 70 6c 61  74 65 2d 67 65 6e 65 72  |d-template-gene [truncated 22740 chars]
	 >
	I0210 12:25:43.974782    5644 type.go:168] "Request Body" body=""
	I0210 12:25:44.169599    5644 request.go:661] Waited for 194.814ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.129.181:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:25:44.169599    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400-m02
	I0210 12:25:44.169599    5644 round_trippers.go:476] Request Headers:
	I0210 12:25:44.169599    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:25:44.169599    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:25:44.174050    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:25:44.174125    5644 round_trippers.go:584] Response Headers:
	I0210 12:25:44.174125    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:25:44.174158    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:25:44.174158    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:25:44.174158    5644 round_trippers.go:587]     Content-Length: 3520
	I0210 12:25:44.174158    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:25:44 GMT
	I0210 12:25:44.174158    5644 round_trippers.go:587]     Audit-Id: ec0dd215-6fe9-45b0-8feb-6cbb3e83bd31
	I0210 12:25:44.174158    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:25:44.174424    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 a9 1b 0a b0 0f 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 30 33 32 34 30 30  2d 6d 30 32 12 00 1a 00  |e-032400-m02....|
		00000030  22 00 2a 24 31 62 61 64  31 61 35 35 2d 65 61 64  |".*$1bad1a55-ead|
		00000040  30 2d 34 66 35 62 2d 61  36 33 62 2d 66 66 66 30  |0-4f5b-a63b-fff0|
		00000050  61 32 35 34 66 33 32 62  32 04 32 31 36 37 38 00  |a254f32b2.21678.|
		00000060  42 08 08 b6 e0 a7 bd 06  10 00 5a 20 0a 17 62 65  |B.........Z ..be|
		00000070  74 61 2e 6b 75 62 65 72  6e 65 74 65 73 2e 69 6f  |ta.kubernetes.io|
		00000080  2f 61 72 63 68 12 05 61  6d 64 36 34 5a 1e 0a 15  |/arch..amd64Z...|
		00000090  62 65 74 61 2e 6b 75 62  65 72 6e 65 74 65 73 2e  |beta.kubernetes.|
		000000a0  69 6f 2f 6f 73 12 05 6c  69 6e 75 78 5a 1b 0a 12  |io/os..linuxZ...|
		000000b0  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 61 72  |kubernetes.io/ar|
		000000c0  63 68 12 05 61 6d 64 36  34 5a 2e 0a 16 6b 75 62  |ch..amd64Z...ku [truncated 16356 chars]
	 >
	I0210 12:25:44.174632    5644 pod_ready.go:93] pod "kube-proxy-xltxj" in "kube-system" namespace has status "Ready":"True"
	I0210 12:25:44.174632    5644 pod_ready.go:82] duration metric: took 399.7062ms for pod "kube-proxy-xltxj" in "kube-system" namespace to be "Ready" ...
	I0210 12:25:44.174632    5644 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-multinode-032400" in "kube-system" namespace to be "Ready" ...
	I0210 12:25:44.174801    5644 type.go:168] "Request Body" body=""
	I0210 12:25:44.368937    5644 request.go:661] Waited for 194.1346ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-032400
	I0210 12:25:44.368937    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-032400
	I0210 12:25:44.368937    5644 round_trippers.go:476] Request Headers:
	I0210 12:25:44.368937    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:25:44.368937    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:25:44.374302    5644 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0210 12:25:44.374457    5644 round_trippers.go:584] Response Headers:
	I0210 12:25:44.374457    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:25:44.374457    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:25:44.374457    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:25:44 GMT
	I0210 12:25:44.374457    5644 round_trippers.go:587]     Audit-Id: b0045d61-8652-4a0d-9d67-7a5b83b426d6
	I0210 12:25:44.374457    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:25:44.374457    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:25:44.374725    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  ea 23 0a 83 18 0a 1f 6b  75 62 65 2d 73 63 68 65  |.#.....kube-sche|
		00000020  64 75 6c 65 72 2d 6d 75  6c 74 69 6e 6f 64 65 2d  |duler-multinode-|
		00000030  30 33 32 34 30 30 12 00  1a 0b 6b 75 62 65 2d 73  |032400....kube-s|
		00000040  79 73 74 65 6d 22 00 2a  24 63 66 62 63 62 66 33  |ystem".*$cfbcbf3|
		00000050  37 2d 36 35 63 65 2d 34  62 32 31 2d 39 66 32 36  |7-65ce-4b21-9f26|
		00000060  2d 31 38 64 61 66 63 36  65 34 34 38 30 32 04 31  |-18dafc6e44802.1|
		00000070  38 37 38 38 00 42 08 08  88 d4 a7 bd 06 10 00 5a  |8788.B.........Z|
		00000080  1b 0a 09 63 6f 6d 70 6f  6e 65 6e 74 12 0e 6b 75  |...component..ku|
		00000090  62 65 2d 73 63 68 65 64  75 6c 65 72 5a 15 0a 04  |be-schedulerZ...|
		000000a0  74 69 65 72 12 0d 63 6f  6e 74 72 6f 6c 2d 70 6c  |tier..control-pl|
		000000b0  61 6e 65 62 3d 0a 19 6b  75 62 65 72 6e 65 74 65  |aneb=..kubernete|
		000000c0  73 2e 69 6f 2f 63 6f 6e  66 69 67 2e 68 61 73 68  |s.io/config.has [truncated 21728 chars]
	 >
	I0210 12:25:44.374994    5644 type.go:168] "Request Body" body=""
	I0210 12:25:44.569521    5644 request.go:661] Waited for 194.5248ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:25:44.569521    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes/multinode-032400
	I0210 12:25:44.569521    5644 round_trippers.go:476] Request Headers:
	I0210 12:25:44.569521    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:25:44.569521    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:25:44.573300    5644 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0210 12:25:44.574228    5644 round_trippers.go:584] Response Headers:
	I0210 12:25:44.574228    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:25:44.574228    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:25:44.574228    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:25:44.574228    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:25:44 GMT
	I0210 12:25:44.574228    5644 round_trippers.go:587]     Audit-Id: 37a316d4-24c6-4f51-8f41-8096cf64635e
	I0210 12:25:44.574228    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:25:44.574495    5644 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d4 24 0a f8 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..$.....multinod|
		00000020  65 2d 30 33 32 34 30 30  12 00 1a 00 22 00 2a 24  |e-032400....".*$|
		00000030  61 30 38 30 31 35 65 66  2d 65 35 32 30 2d 34 31  |a08015ef-e520-41|
		00000040  63 62 2d 61 65 61 30 2d  31 64 39 63 38 31 65 30  |cb-aea0-1d9c81e0|
		00000050  31 62 32 36 32 04 31 39  33 35 38 00 42 08 08 86  |1b262.19358.B...|
		00000060  d4 a7 bd 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22276 chars]
	 >
	I0210 12:25:44.574734    5644 pod_ready.go:93] pod "kube-scheduler-multinode-032400" in "kube-system" namespace has status "Ready":"True"
	I0210 12:25:44.574734    5644 pod_ready.go:82] duration metric: took 400.0976ms for pod "kube-scheduler-multinode-032400" in "kube-system" namespace to be "Ready" ...
	I0210 12:25:44.574734    5644 pod_ready.go:39] duration metric: took 1.6013107s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0210 12:25:44.574842    5644 system_svc.go:44] waiting for kubelet service to be running ....
	I0210 12:25:44.583512    5644 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0210 12:25:44.609128    5644 system_svc.go:56] duration metric: took 34.2859ms WaitForService to wait for kubelet
	I0210 12:25:44.609218    5644 kubeadm.go:582] duration metric: took 16.877264s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0210 12:25:44.609218    5644 node_conditions.go:102] verifying NodePressure condition ...
	I0210 12:25:44.609400    5644 type.go:204] "Request Body" body=""
	I0210 12:25:44.769309    5644 request.go:661] Waited for 159.9073ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.129.181:8443/api/v1/nodes
	I0210 12:25:44.769309    5644 round_trippers.go:470] GET https://172.29.129.181:8443/api/v1/nodes
	I0210 12:25:44.769309    5644 round_trippers.go:476] Request Headers:
	I0210 12:25:44.769309    5644 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0210 12:25:44.769309    5644 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0210 12:25:44.773865    5644 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0210 12:25:44.774474    5644 round_trippers.go:584] Response Headers:
	I0210 12:25:44.774474    5644 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: 647446bf-0ef9-4398-a7cd-5f4590b0ea01
	I0210 12:25:44.774474    5644 round_trippers.go:587]     Date: Mon, 10 Feb 2025 12:25:44 GMT
	I0210 12:25:44.774474    5644 round_trippers.go:587]     Audit-Id: b58c6e28-e6ee-4252-ac75-5be0122d32fb
	I0210 12:25:44.774474    5644 round_trippers.go:587]     Cache-Control: no-cache, private
	I0210 12:25:44.774474    5644 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0210 12:25:44.774474    5644 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 69758a35-7a9b-40c5-be26-bfb3f8bce2df
	I0210 12:25:44.774888    5644 type.go:204] "Response Body" body=<
		00000000  6b 38 73 00 0a 0e 0a 02  76 31 12 08 4e 6f 64 65  |k8s.....v1..Node|
		00000010  4c 69 73 74 12 a6 5e 0a  0a 0a 00 12 04 32 31 37  |List..^......217|
		00000020  31 1a 00 12 d4 24 0a f8  11 0a 10 6d 75 6c 74 69  |1....$.....multi|
		00000030  6e 6f 64 65 2d 30 33 32  34 30 30 12 00 1a 00 22  |node-032400...."|
		00000040  00 2a 24 61 30 38 30 31  35 65 66 2d 65 35 32 30  |.*$a08015ef-e520|
		00000050  2d 34 31 63 62 2d 61 65  61 30 2d 31 64 39 63 38  |-41cb-aea0-1d9c8|
		00000060  31 65 30 31 62 32 36 32  04 31 39 33 35 38 00 42  |1e01b262.19358.B|
		00000070  08 08 86 d4 a7 bd 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000080  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000090  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		000000a0  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000b0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000c0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/ar [truncated 58764 chars]
	 >
	I0210 12:25:44.775560    5644 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0210 12:25:44.775560    5644 node_conditions.go:123] node cpu capacity is 2
	I0210 12:25:44.775560    5644 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0210 12:25:44.775560    5644 node_conditions.go:123] node cpu capacity is 2
	I0210 12:25:44.775661    5644 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0210 12:25:44.775661    5644 node_conditions.go:123] node cpu capacity is 2
	I0210 12:25:44.775661    5644 node_conditions.go:105] duration metric: took 166.4411ms to run NodePressure ...
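	
	The client-go behaviours recorded above can be reproduced with a minimal sketch (illustration only, not minikube's code): the "Waited for 159.9073ms due to client-side throttling" delay comes from the client-side QPS/Burst rate limiter, the Accept header is protobuf-first content negotiation, and the capacity figures are read from each Node's status. The kubeconfig path and the QPS/Burst values below are assumptions.
	
	// node_capacity.go - sketch of the rate-limited, protobuf-negotiated node
	// listing visible in the log above; not minikube's actual code.
	package main
	
	import (
		"context"
		"fmt"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		// Assumption: a kubeconfig at the default location; minikube writes its own.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		// Client-side throttling: requests beyond QPS are delayed locally,
		// which is what produces the "Waited for ..." line in the log.
		cfg.QPS = 5
		cfg.Burst = 10
		// Protobuf-first content negotiation, matching the Accept header
		// recorded by the round trippers above.
		cfg.AcceptContentTypes = "application/vnd.kubernetes.protobuf,application/json"
		cfg.ContentType = "application/vnd.kubernetes.protobuf"
	
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
		}
	}
	
	Raising QPS/Burst would remove the "Waited for ..." lines at the cost of more apiserver load.
	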
	I0210 12:25:44.775661    5644 start.go:241] waiting for startup goroutines ...
	I0210 12:25:44.775661    5644 start.go:255] writing updated cluster config ...
	I0210 12:25:44.779351    5644 out.go:201] 
	I0210 12:25:44.782404    5644 config.go:182] Loaded profile config "ha-335100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0210 12:25:44.795786    5644 config.go:182] Loaded profile config "multinode-032400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0210 12:25:44.795786    5644 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-032400\config.json ...
	I0210 12:25:44.800976    5644 out.go:177] * Starting "multinode-032400-m03" worker node in "multinode-032400" cluster
	I0210 12:25:44.802546    5644 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime docker
	I0210 12:25:44.802546    5644 cache.go:56] Caching tarball of preloaded images
	I0210 12:25:44.802546    5644 preload.go:172] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0210 12:25:44.803513    5644 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on docker
	I0210 12:25:44.803513    5644 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-032400\config.json ...
	I0210 12:25:44.812171    5644 start.go:360] acquireMachinesLock for multinode-032400-m03: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0210 12:25:44.812171    5644 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-032400-m03"
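	
	The acquireMachinesLock spec above (Delay:500ms Timeout:13m0s) describes a retry-until-timeout acquisition of a named cross-process lock. The stdlib sketch below only illustrates those Delay/Timeout semantics; it is not minikube's implementation, and the in-process channel stands in for the real named mutex.
	
	// timedlock.go - illustration of the Delay/Timeout retry loop implied by
	// the acquireMachinesLock spec logged above.
	package main
	
	import (
		"errors"
		"fmt"
		"time"
	)
	
	// A buffered channel of size 1 models a non-blocking, single-holder lock.
	var slot = make(chan struct{}, 1)
	
	func tryLock() bool {
		select {
		case slot <- struct{}{}:
			return true
		default:
			return false
		}
	}
	
	func unlock() { <-slot }
	
	// acquire retries tryLock every delay until timeout elapses.
	func acquire(delay, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			if tryLock() {
				return nil
			}
			if time.Now().After(deadline) {
				return errors.New("timed out waiting for machines lock")
			}
			time.Sleep(delay)
		}
	}
	
	func main() {
		start := time.Now()
		if err := acquire(500*time.Millisecond, 13*time.Minute); err != nil {
			panic(err)
		}
		defer unlock()
		// With no contention the acquire is immediate, matching the
		// "took 0s to acquireMachinesLock" line above.
		fmt.Printf("acquired in %s\n", time.Since(start))
	}
	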
	I0210 12:25:44.813242    5644 start.go:96] Skipping create...Using existing machine configuration
	I0210 12:25:44.813242    5644 fix.go:54] fixHost starting: m03
	I0210 12:25:44.813346    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400-m03 ).state
	I0210 12:25:46.765529    5644 main.go:141] libmachine: [stdout =====>] : Off
	
	I0210 12:25:46.765529    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:25:46.765529    5644 fix.go:112] recreateIfNeeded on multinode-032400-m03: state=Stopped err=<nil>
	W0210 12:25:46.765529    5644 fix.go:138] unexpected machine state, will restart: <nil>
	I0210 12:25:46.768334    5644 out.go:177] * Restarting existing hyperv VM for "multinode-032400-m03" ...
	I0210 12:25:46.770478    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-032400-m03
	I0210 12:25:49.608342    5644 main.go:141] libmachine: [stdout =====>] : 
	I0210 12:25:49.608342    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:25:49.608342    5644 main.go:141] libmachine: Waiting for host to start...
	I0210 12:25:49.608342    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400-m03 ).state
	I0210 12:25:51.683388    5644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:25:51.683554    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:25:51.683616    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400-m03 ).networkadapters[0]).ipaddresses[0]
	I0210 12:25:54.029881    5644 main.go:141] libmachine: [stdout =====>] : 
	I0210 12:25:54.029881    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:25:55.030941    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400-m03 ).state
	I0210 12:25:57.017922    5644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:25:57.017922    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:25:57.017922    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400-m03 ).networkadapters[0]).ipaddresses[0]
	I0210 12:25:59.312671    5644 main.go:141] libmachine: [stdout =====>] : 
	I0210 12:25:59.313149    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:26:00.313515    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400-m03 ).state
	I0210 12:26:02.325460    5644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:26:02.325460    5644 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:26:02.325460    5644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400-m03 ).networkadapters[0]).ipaddresses[0]
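	
	The libmachine lines above shell out to PowerShell in a loop: poll the VM state until it reports Running, then poll the first network adapter until DHCP hands out an address (the empty stdout lines are polls that returned no IP yet). A hedged os/exec sketch of that loop, with the VM name and the 5s interval as assumptions:
	
	// hyperv_wait.go - sketch of the wait loop visible in the libmachine log
	// above; not the driver's actual code. Windows-only, since it invokes
	// powershell.exe exactly as the log records.
	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)
	
	func ps(command string) (string, error) {
		out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", command).Output()
		return strings.TrimSpace(string(out)), err
	}
	
	func main() {
		const vm = "multinode-032400-m03" // assumption: hard-coded for illustration
		for {
			state, err := ps(fmt.Sprintf("( Hyper-V\\Get-VM %s ).state", vm))
			if err != nil {
				panic(err)
			}
			if state == "Running" {
				// The VM can report Running before DHCP assigns an address,
				// which is why the log shows empty stdout between state checks.
				ip, _ := ps(fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", vm))
				if ip != "" {
					fmt.Println("host up at", ip)
					return
				}
			}
			time.Sleep(5 * time.Second)
		}
	}
	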
	
	
	==> Docker <==
	Feb 10 12:22:30 multinode-032400 dockerd[1108]: time="2025-02-10T12:22:30.687686798Z" level=info msg="shim disconnected" id=e57ea4c7f300b864e801bda292b196e3a270d6bd3eb9cfac6bf5f66a858f9f7c namespace=moby
	Feb 10 12:22:30 multinode-032400 dockerd[1108]: time="2025-02-10T12:22:30.687806803Z" level=warning msg="cleaning up after shim disconnected" id=e57ea4c7f300b864e801bda292b196e3a270d6bd3eb9cfac6bf5f66a858f9f7c namespace=moby
	Feb 10 12:22:30 multinode-032400 dockerd[1108]: time="2025-02-10T12:22:30.687819404Z" level=info msg="cleaning up dead shim" namespace=moby
	Feb 10 12:22:44 multinode-032400 dockerd[1108]: time="2025-02-10T12:22:44.147917374Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 10 12:22:44 multinode-032400 dockerd[1108]: time="2025-02-10T12:22:44.148327693Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 10 12:22:44 multinode-032400 dockerd[1108]: time="2025-02-10T12:22:44.148398496Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 10 12:22:44 multinode-032400 dockerd[1108]: time="2025-02-10T12:22:44.150441890Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 10 12:23:03 multinode-032400 dockerd[1108]: time="2025-02-10T12:23:03.123856000Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 10 12:23:03 multinode-032400 dockerd[1108]: time="2025-02-10T12:23:03.127018550Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 10 12:23:03 multinode-032400 dockerd[1108]: time="2025-02-10T12:23:03.127103354Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 10 12:23:03 multinode-032400 dockerd[1108]: time="2025-02-10T12:23:03.127439770Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 10 12:23:03 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:23:03Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e58006549b603113870cd02660dd94559d10ecb0913a1bdd2c81910c3d60a706/resolv.conf as [nameserver 172.29.128.1]"
	Feb 10 12:23:03 multinode-032400 dockerd[1108]: time="2025-02-10T12:23:03.402609649Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 10 12:23:03 multinode-032400 dockerd[1108]: time="2025-02-10T12:23:03.402766755Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 10 12:23:03 multinode-032400 dockerd[1108]: time="2025-02-10T12:23:03.402783356Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 10 12:23:03 multinode-032400 dockerd[1108]: time="2025-02-10T12:23:03.402879760Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 10 12:23:03 multinode-032400 cri-dockerd[1381]: time="2025-02-10T12:23:03Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0f8bbdd2fed29af0f92968c554c574d411a6dcf8a8d801926012379ff2a258af/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Feb 10 12:23:03 multinode-032400 dockerd[1108]: time="2025-02-10T12:23:03.617663803Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 10 12:23:03 multinode-032400 dockerd[1108]: time="2025-02-10T12:23:03.618733746Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 10 12:23:03 multinode-032400 dockerd[1108]: time="2025-02-10T12:23:03.619050158Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 10 12:23:03 multinode-032400 dockerd[1108]: time="2025-02-10T12:23:03.619308269Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 10 12:23:03 multinode-032400 dockerd[1108]: time="2025-02-10T12:23:03.840758177Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 10 12:23:03 multinode-032400 dockerd[1108]: time="2025-02-10T12:23:03.840943685Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 10 12:23:03 multinode-032400 dockerd[1108]: time="2025-02-10T12:23:03.840973286Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 10 12:23:03 multinode-032400 dockerd[1108]: time="2025-02-10T12:23:03.848920902Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ab1277406daa9       8c811b4aec35f                                                                                         3 minutes ago       Running             busybox                   1                   0f8bbdd2fed29       busybox-58667487b6-8shfg
	9240ce80f94ce       c69fa2e9cbf5f                                                                                         3 minutes ago       Running             coredns                   1                   e58006549b603       coredns-668d6bf9bc-w8rr9
	59ace13383a7f       6e38f40d628db                                                                                         3 minutes ago       Running             storage-provisioner       2                   bd998e6ebeb39       storage-provisioner
	efc2d4164d811       d300845f67aeb                                                                                         4 minutes ago       Running             kindnet-cni               1                   e5b54589cf0f1       kindnet-c2mb8
	e57ea4c7f300b       6e38f40d628db                                                                                         4 minutes ago       Exited              storage-provisioner       1                   bd998e6ebeb39       storage-provisioner
	6640b4e3d696c       e29f9c7391fd9                                                                                         4 minutes ago       Running             kube-proxy                1                   b9afdceca416d       kube-proxy-rrh82
	bd1666238ae65       019ee182b58e2                                                                                         4 minutes ago       Running             kube-controller-manager   1                   9c3e574a33498       kube-controller-manager-multinode-032400
	f368bd8767741       95c0bda56fc4d                                                                                         4 minutes ago       Running             kube-apiserver            0                   ae5696c38864a       kube-apiserver-multinode-032400
	2c0b973818252       a9e7e6b294baf                                                                                         4 minutes ago       Running             etcd                      0                   016ad4d720680       etcd-multinode-032400
	440b6adf4512a       2b0d6572d062c                                                                                         4 minutes ago       Running             kube-scheduler            1                   8059b20f65945       kube-scheduler-multinode-032400
	8d0c6584f2b12       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   23 minutes ago      Exited              busybox                   0                   ed034f1578c96       busybox-58667487b6-8shfg
	c5b854dbb9121       c69fa2e9cbf5f                                                                                         26 minutes ago      Exited              coredns                   0                   794995bca6b5b       coredns-668d6bf9bc-w8rr9
	4439940fa5f42       kindest/kindnetd@sha256:56ea59f77258052c4506076525318ffa66817500f68e94a50fdf7d600a280d26              27 minutes ago      Exited              kindnet-cni               0                   26d9e119a02c5       kindnet-c2mb8
	148309413de8d       e29f9c7391fd9                                                                                         27 minutes ago      Exited              kube-proxy                0                   a70f430921ec2       kube-proxy-rrh82
	adf520f9b9d78       2b0d6572d062c                                                                                         27 minutes ago      Exited              kube-scheduler            0                   d33433fbce480       kube-scheduler-multinode-032400
	9408ce83d7d38       019ee182b58e2                                                                                         27 minutes ago      Exited              kube-controller-manager   0                   ee16b295f58db       kube-controller-manager-multinode-032400
	
	
	==> coredns [9240ce80f94c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fef4eccd4948b7d76fb3b866fead119cfbf3f792b566bc1c23dd6fceadf676c5da93bdd173179090b2c74bc875a74fc76ef5e51dcb6a910d6a8b189470a4fe6b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:39941 - 15725 "HINFO IN 3724374125237206977.4573805755432620257. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.053165792s
	
	
	==> coredns [c5b854dbb912] <==
	[INFO] 10.244.0.3:55342 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000074701s
	[INFO] 10.244.0.3:52814 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000156002s
	[INFO] 10.244.0.3:36559 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000108801s
	[INFO] 10.244.0.3:35829 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0000551s
	[INFO] 10.244.0.3:38348 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000180702s
	[INFO] 10.244.0.3:39722 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000109501s
	[INFO] 10.244.0.3:40924 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000201202s
	[INFO] 10.244.1.2:34735 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000125801s
	[INFO] 10.244.1.2:36088 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000233303s
	[INFO] 10.244.1.2:55464 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000190403s
	[INFO] 10.244.1.2:57911 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000176202s
	[INFO] 10.244.0.3:34977 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000276903s
	[INFO] 10.244.0.3:60203 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000181802s
	[INFO] 10.244.0.3:40189 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000167902s
	[INFO] 10.244.0.3:59008 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000133001s
	[INFO] 10.244.1.2:56936 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000191602s
	[INFO] 10.244.1.2:36536 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000153201s
	[INFO] 10.244.1.2:38856 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000169502s
	[INFO] 10.244.1.2:53005 - 5 "PTR IN 1.128.29.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000186102s
	[INFO] 10.244.0.3:43643 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00092191s
	[INFO] 10.244.0.3:44109 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000341503s
	[INFO] 10.244.0.3:37196 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000095701s
	[INFO] 10.244.0.3:33917 - 5 "PTR IN 1.128.29.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000152102s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
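	
	The resolv.conf rewritten for the busybox pod earlier (search default.svc.cluster.local svc.cluster.local cluster.local, ndots:5) is what drives the kubernetes.default.* query fan-out above: a name with fewer than 5 dots is tried with each search suffix as well as verbatim (exact ordering varies by resolver; the busybox client here tried the literal name first). A small sketch of the glibc-style ordering rule, assuming nothing beyond the suffix list shown:
	
	// search_order.go - reproduces the resolver query fan-out seen in the
	// coredns log above for a name like "kubernetes.default" under ndots:5.
	package main
	
	import (
		"fmt"
		"strings"
	)
	
	func queryOrder(name string, ndots int, search []string) []string {
		var out []string
		// Enough dots: the name is treated as absolute and tried first.
		absolute := strings.Count(name, ".") >= ndots
		if absolute {
			out = append(out, name+".")
		}
		for _, s := range search {
			out = append(out, name+"."+s+".")
		}
		// Too few dots: the literal name is only tried after the suffixes.
		if !absolute {
			out = append(out, name+".")
		}
		return out
	}
	
	func main() {
		search := []string{"default.svc.cluster.local", "svc.cluster.local", "cluster.local"}
		for _, q := range queryOrder("kubernetes.default", 5, search) {
			fmt.Println(q)
		}
	}
	
	This prints kubernetes.default.default.svc.cluster.local. (NXDOMAIN in the log), kubernetes.default.svc.cluster.local. (NOERROR), kubernetes.default.cluster.local., and finally kubernetes.default.
	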
	
	
	==> describe nodes <==
	Name:               multinode-032400
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-032400
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a597502568cd649748018b4cfeb698a4b8b36160
	                    minikube.k8s.io/name=multinode-032400
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_02_10T11_59_08_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 10 Feb 2025 11:59:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-032400
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 10 Feb 2025 12:26:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 10 Feb 2025 12:22:48 +0000   Mon, 10 Feb 2025 11:59:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 10 Feb 2025 12:22:48 +0000   Mon, 10 Feb 2025 11:59:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 10 Feb 2025 12:22:48 +0000   Mon, 10 Feb 2025 11:59:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 10 Feb 2025 12:22:48 +0000   Mon, 10 Feb 2025 12:22:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.29.129.181
	  Hostname:    multinode-032400
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 bd9d6a2450e6476eba748aa8fab044bb
	  System UUID:                43aa2284-9342-094e-a67f-da5d9d45fabd
	  Boot ID:                    f18b3f97-a44b-4ae9-ad96-af2bf29d6df2
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.4.0
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-58667487b6-8shfg                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 coredns-668d6bf9bc-w8rr9                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     27m
	  kube-system                 etcd-multinode-032400                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m38s
	  kube-system                 kindnet-c2mb8                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      27m
	  kube-system                 kube-apiserver-multinode-032400             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m38s
	  kube-system                 kube-controller-manager-multinode-032400    200m (10%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-proxy-rrh82                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-scheduler-multinode-032400             100m (5%)     0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 27m                    kube-proxy       
	  Normal   Starting                 4m34s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  27m (x8 over 27m)      kubelet          Node multinode-032400 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    27m (x8 over 27m)      kubelet          Node multinode-032400 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     27m (x7 over 27m)      kubelet          Node multinode-032400 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientPID     27m (x2 over 27m)      kubelet          Node multinode-032400 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  27m (x2 over 27m)      kubelet          Node multinode-032400 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    27m (x2 over 27m)      kubelet          Node multinode-032400 status is now: NodeHasNoDiskPressure
	  Normal   NodeAllocatableEnforced  27m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 27m                    kubelet          Starting kubelet.
	  Normal   RegisteredNode           27m                    node-controller  Node multinode-032400 event: Registered Node multinode-032400 in Controller
	  Normal   NodeReady                26m                    kubelet          Node multinode-032400 status is now: NodeReady
	  Normal   Starting                 4m44s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  4m44s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  4m43s (x8 over 4m44s)  kubelet          Node multinode-032400 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m43s (x8 over 4m44s)  kubelet          Node multinode-032400 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m43s (x7 over 4m44s)  kubelet          Node multinode-032400 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 4m38s                  kubelet          Node multinode-032400 has been rebooted, boot id: f18b3f97-a44b-4ae9-ad96-af2bf29d6df2
	  Normal   RegisteredNode           4m35s                  node-controller  Node multinode-032400 event: Registered Node multinode-032400 in Controller
	
	
	Name:               multinode-032400-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-032400-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a597502568cd649748018b4cfeb698a4b8b36160
	                    minikube.k8s.io/name=multinode-032400
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_02_10T12_25_27_0700
	                    minikube.k8s.io/version=v1.35.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 10 Feb 2025 12:25:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-032400-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 10 Feb 2025 12:26:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 10 Feb 2025 12:25:42 +0000   Mon, 10 Feb 2025 12:25:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 10 Feb 2025 12:25:42 +0000   Mon, 10 Feb 2025 12:25:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 10 Feb 2025 12:25:42 +0000   Mon, 10 Feb 2025 12:25:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 10 Feb 2025 12:25:42 +0000   Mon, 10 Feb 2025 12:25:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.29.131.248
	  Hostname:    multinode-032400-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 6505006587e1468599b41c56e008a417
	  System UUID:                9f935ab2-d180-914c-86d6-dc00ab51b7e9
	  Boot ID:                    9d6581c7-7c7f-484f-97d7-dd6702bddbb0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.4.0
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-58667487b6-wxrnn    0 (0%)        0 (0%)      0 (0%)           0 (0%)         84s
	  kube-system                 kindnet-tv6gk               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      24m
	  kube-system                 kube-proxy-xltxj            0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 65s                kube-proxy       
	  Normal  Starting                 23m                kube-proxy       
	  Normal  NodeHasSufficientMemory  24m (x2 over 24m)  kubelet          Node multinode-032400-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24m (x2 over 24m)  kubelet          Node multinode-032400-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     24m (x2 over 24m)  kubelet          Node multinode-032400-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  24m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                23m                kubelet          Node multinode-032400-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  70s (x2 over 70s)  kubelet          Node multinode-032400-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    70s (x2 over 70s)  kubelet          Node multinode-032400-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     70s (x2 over 70s)  kubelet          Node multinode-032400-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  70s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           65s                node-controller  Node multinode-032400-m02 event: Registered Node multinode-032400-m02 in Controller
	  Normal  NodeReady                54s                kubelet          Node multinode-032400-m02 status is now: NodeReady
	
	
	Name:               multinode-032400-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-032400-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a597502568cd649748018b4cfeb698a4b8b36160
	                    minikube.k8s.io/name=multinode-032400
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_02_10T12_17_31_0700
	                    minikube.k8s.io/version=v1.35.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 10 Feb 2025 12:17:31 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-032400-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 10 Feb 2025 12:18:32 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 10 Feb 2025 12:17:46 +0000   Mon, 10 Feb 2025 12:19:27 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 10 Feb 2025 12:17:46 +0000   Mon, 10 Feb 2025 12:19:27 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 10 Feb 2025 12:17:46 +0000   Mon, 10 Feb 2025 12:19:27 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 10 Feb 2025 12:17:46 +0000   Mon, 10 Feb 2025 12:19:27 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  172.29.129.10
	  Hostname:    multinode-032400-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 96be2fd73058434db5f7c29ebdc5358c
	  System UUID:                d4861806-0b5d-3746-b974-5781fc0e801c
	  Boot ID:                    e5a245dc-eb44-45ba-a280-4e410deb107b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.4.0
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.4.0/24
	PodCIDRs:                     10.244.4.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-jcmlf       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      19m
	  kube-system                 kube-proxy-tbtqd    0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 19m                  kube-proxy       
	  Normal  Starting                 9m1s                 kube-proxy       
	  Normal  NodeAllocatableEnforced  19m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  19m (x2 over 19m)    kubelet          Node multinode-032400-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x2 over 19m)    kubelet          Node multinode-032400-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x2 over 19m)    kubelet          Node multinode-032400-m03 status is now: NodeHasSufficientPID
	  Normal  NodeReady                19m                  kubelet          Node multinode-032400-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  9m5s (x2 over 9m6s)  kubelet          Node multinode-032400-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m5s (x2 over 9m6s)  kubelet          Node multinode-032400-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m5s (x2 over 9m6s)  kubelet          Node multinode-032400-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m5s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           9m4s                 node-controller  Node multinode-032400-m03 event: Registered Node multinode-032400-m03 in Controller
	  Normal  NodeReady                8m50s                kubelet          Node multinode-032400-m03 status is now: NodeReady
	  Normal  NodeNotReady             7m9s                 node-controller  Node multinode-032400-m03 status is now: NodeNotReady
	  Normal  RegisteredNode           4m35s                node-controller  Node multinode-032400-m03 event: Registered Node multinode-032400-m03 in Controller
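	
	multinode-032400-m03's Unknown conditions and node.kubernetes.io/unreachable taints above are what the node-lifecycle controller applies once the kubelet stops renewing its lease (RenewTime 12:18:32, well before the 12:26 snapshot). The same signal can be read with client-go; this sketch (kubeconfig path assumed) just prints each node's Ready condition and taints:
	
	// node_health.go - sketch: list each node's Ready condition and taints,
	// the same fields "kubectl describe node" renders above. Illustration
	// only, not part of the test suite.
	package main
	
	import (
		"context"
		"fmt"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			for _, c := range n.Status.Conditions {
				if c.Type == corev1.NodeReady {
					// Status "Unknown" means the kubelet stopped posting
					// status, as with multinode-032400-m03 above.
					fmt.Printf("%s Ready=%s (%s)\n", n.Name, c.Status, c.Reason)
				}
			}
			for _, t := range n.Spec.Taints {
				fmt.Printf("  taint %s:%s\n", t.Key, t.Effect)
			}
		}
	}
	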
	
	
	==> dmesg <==
	              * this clock source is slow. Consider trying other clock sources
	[  +6.593821] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.312003] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	[  +1.333341] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +7.447705] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Feb10 12:21] systemd-fstab-generator[632]: Ignoring "noauto" option for root device
	[  +0.179509] systemd-fstab-generator[644]: Ignoring "noauto" option for root device
	[ +24.780658] systemd-fstab-generator[1027]: Ignoring "noauto" option for root device
	[  +0.095136] kauditd_printk_skb: 75 callbacks suppressed
	[  +0.491986] systemd-fstab-generator[1067]: Ignoring "noauto" option for root device
	[  +0.201276] systemd-fstab-generator[1079]: Ignoring "noauto" option for root device
	[  +0.245181] systemd-fstab-generator[1093]: Ignoring "noauto" option for root device
	[  +2.938213] systemd-fstab-generator[1334]: Ignoring "noauto" option for root device
	[  +0.187151] systemd-fstab-generator[1346]: Ignoring "noauto" option for root device
	[  +0.177538] systemd-fstab-generator[1358]: Ignoring "noauto" option for root device
	[  +0.257474] systemd-fstab-generator[1373]: Ignoring "noauto" option for root device
	[  +0.873551] systemd-fstab-generator[1498]: Ignoring "noauto" option for root device
	[  +0.096158] kauditd_printk_skb: 206 callbacks suppressed
	[  +3.180590] systemd-fstab-generator[1641]: Ignoring "noauto" option for root device
	[  +1.822530] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.251804] kauditd_printk_skb: 5 callbacks suppressed
	[Feb10 12:22] systemd-fstab-generator[2523]: Ignoring "noauto" option for root device
	[ +27.335254] kauditd_printk_skb: 72 callbacks suppressed
	
	
	==> etcd [2c0b97381825] <==
	{"level":"info","ts":"2025-02-10T12:21:54.848857Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-02-10T12:21:54.848872Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-02-10T12:21:54.849451Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-02-10T12:21:54.848623Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9ecc865dcee1fe8f switched to configuration voters=(11442668490702585487)"}
	{"level":"info","ts":"2025-02-10T12:21:54.849568Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"939f48ba2d0ec869","local-member-id":"9ecc865dcee1fe8f","added-peer-id":"9ecc865dcee1fe8f","added-peer-peer-urls":["https://172.29.136.201:2380"]}
	{"level":"info","ts":"2025-02-10T12:21:54.849668Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"939f48ba2d0ec869","local-member-id":"9ecc865dcee1fe8f","cluster-version":"3.5"}
	{"level":"info","ts":"2025-02-10T12:21:54.849697Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-02-10T12:21:54.848659Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"172.29.129.181:2380"}
	{"level":"info","ts":"2025-02-10T12:21:54.850838Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"172.29.129.181:2380"}
	{"level":"info","ts":"2025-02-10T12:21:56.088224Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9ecc865dcee1fe8f is starting a new election at term 2"}
	{"level":"info","ts":"2025-02-10T12:21:56.088502Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9ecc865dcee1fe8f became pre-candidate at term 2"}
	{"level":"info","ts":"2025-02-10T12:21:56.088614Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9ecc865dcee1fe8f received MsgPreVoteResp from 9ecc865dcee1fe8f at term 2"}
	{"level":"info","ts":"2025-02-10T12:21:56.088706Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9ecc865dcee1fe8f became candidate at term 3"}
	{"level":"info","ts":"2025-02-10T12:21:56.088790Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9ecc865dcee1fe8f received MsgVoteResp from 9ecc865dcee1fe8f at term 3"}
	{"level":"info","ts":"2025-02-10T12:21:56.088924Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9ecc865dcee1fe8f became leader at term 3"}
	{"level":"info","ts":"2025-02-10T12:21:56.089002Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9ecc865dcee1fe8f elected leader 9ecc865dcee1fe8f at term 3"}
	{"level":"info","ts":"2025-02-10T12:21:56.095452Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"9ecc865dcee1fe8f","local-member-attributes":"{Name:multinode-032400 ClientURLs:[https://172.29.129.181:2379]}","request-path":"/0/members/9ecc865dcee1fe8f/attributes","cluster-id":"939f48ba2d0ec869","publish-timeout":"7s"}
	{"level":"info","ts":"2025-02-10T12:21:56.095630Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-02-10T12:21:56.098215Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-02-10T12:21:56.098453Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-02-10T12:21:56.095689Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-02-10T12:21:56.109624Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-02-10T12:21:56.115942Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-02-10T12:21:56.120127Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-02-10T12:21:56.126374Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.29.129.181:2379"}
	
	
	==> kernel <==
	 12:26:36 up 6 min,  0 users,  load average: 0.06, 0.19, 0.10
	Linux multinode-032400 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [4439940fa5f4] <==
	I0210 12:18:50.447008       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:19:00.448621       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:19:00.448734       1 main.go:301] handling current node
	I0210 12:19:00.448755       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:19:00.448763       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:19:00.449245       1 main.go:297] Handling node with IPs: map[172.29.129.10:{}]
	I0210 12:19:00.449348       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.4.0/24] 
	I0210 12:19:10.454264       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:19:10.454382       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:19:10.454633       1 main.go:297] Handling node with IPs: map[172.29.129.10:{}]
	I0210 12:19:10.454659       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.4.0/24] 
	I0210 12:19:10.454747       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:19:10.454768       1 main.go:301] handling current node
	I0210 12:19:20.454254       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:19:20.454380       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:19:20.454728       1 main.go:297] Handling node with IPs: map[172.29.129.10:{}]
	I0210 12:19:20.454888       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.4.0/24] 
	I0210 12:19:20.455123       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:19:20.455136       1 main.go:301] handling current node
	I0210 12:19:30.445655       1 main.go:297] Handling node with IPs: map[172.29.136.201:{}]
	I0210 12:19:30.445708       1 main.go:301] handling current node
	I0210 12:19:30.445727       1 main.go:297] Handling node with IPs: map[172.29.143.51:{}]
	I0210 12:19:30.445734       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:19:30.446565       1 main.go:297] Handling node with IPs: map[172.29.129.10:{}]
	I0210 12:19:30.446658       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.4.0/24] 
	
	
	==> kindnet [efc2d4164d81] <==
	I0210 12:25:51.776344       1 main.go:301] handling current node
	I0210 12:26:01.774384       1 main.go:297] Handling node with IPs: map[172.29.129.181:{}]
	I0210 12:26:01.774422       1 main.go:301] handling current node
	I0210 12:26:01.774442       1 main.go:297] Handling node with IPs: map[172.29.131.248:{}]
	I0210 12:26:01.774448       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:26:01.774769       1 main.go:297] Handling node with IPs: map[172.29.129.10:{}]
	I0210 12:26:01.774893       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.4.0/24] 
	I0210 12:26:11.783631       1 main.go:297] Handling node with IPs: map[172.29.129.181:{}]
	I0210 12:26:11.783671       1 main.go:301] handling current node
	I0210 12:26:11.783690       1 main.go:297] Handling node with IPs: map[172.29.131.248:{}]
	I0210 12:26:11.783696       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:26:11.784708       1 main.go:297] Handling node with IPs: map[172.29.129.10:{}]
	I0210 12:26:11.784909       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.4.0/24] 
	I0210 12:26:21.778470       1 main.go:297] Handling node with IPs: map[172.29.129.181:{}]
	I0210 12:26:21.778735       1 main.go:301] handling current node
	I0210 12:26:21.778758       1 main.go:297] Handling node with IPs: map[172.29.131.248:{}]
	I0210 12:26:21.779390       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:26:21.780146       1 main.go:297] Handling node with IPs: map[172.29.129.10:{}]
	I0210 12:26:21.780305       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.4.0/24] 
	I0210 12:26:31.783387       1 main.go:297] Handling node with IPs: map[172.29.129.181:{}]
	I0210 12:26:31.783664       1 main.go:301] handling current node
	I0210 12:26:31.783852       1 main.go:297] Handling node with IPs: map[172.29.131.248:{}]
	I0210 12:26:31.783956       1 main.go:324] Node multinode-032400-m02 has CIDR [10.244.1.0/24] 
	I0210 12:26:31.784468       1 main.go:297] Handling node with IPs: map[172.29.129.10:{}]
	I0210 12:26:31.784538       1 main.go:324] Node multinode-032400-m03 has CIDR [10.244.4.0/24] 
	
	
	==> kube-apiserver [f368bd876774] <==
	I0210 12:21:58.208845       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0210 12:21:58.215564       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0210 12:21:58.251691       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0210 12:21:58.252469       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0210 12:21:58.253733       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0210 12:21:58.254978       1 shared_informer.go:320] Caches are synced for configmaps
	I0210 12:21:58.255116       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0210 12:21:58.255193       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0210 12:21:58.258943       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0210 12:21:58.265934       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0210 12:21:58.282090       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0210 12:21:58.282373       1 aggregator.go:171] initial CRD sync complete...
	I0210 12:21:58.282566       1 autoregister_controller.go:144] Starting autoregister controller
	I0210 12:21:58.282687       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0210 12:21:58.282810       1 cache.go:39] Caches are synced for autoregister controller
	I0210 12:21:59.040333       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0210 12:21:59.110768       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0210 12:21:59.681925       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.29.129.181]
	I0210 12:21:59.683607       1 controller.go:615] quota admission added evaluator for: endpoints
	I0210 12:21:59.696506       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0210 12:22:01.024466       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0210 12:22:01.275329       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0210 12:22:01.581954       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0210 12:22:01.624349       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0210 12:22:01.647929       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [9408ce83d7d3] <==
	I0210 12:15:17.429973       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-032400-m02"
	I0210 12:15:17.456736       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:15:22.504479       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:17:20.800246       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:17:20.822743       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:17:25.699359       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-032400-m02"
	I0210 12:17:31.139874       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-032400-m03\" does not exist"
	I0210 12:17:31.140321       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-032400-m02"
	I0210 12:17:31.172716       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-032400-m03" podCIDRs=["10.244.4.0/24"]
	I0210 12:17:31.172758       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:17:31.172780       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:17:31.210555       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:17:31.721125       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:17:32.574669       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:17:41.246655       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:17:46.096472       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-032400-m02"
	I0210 12:17:46.097093       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:17:46.112921       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:17:47.511367       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:18:23.730002       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:19:22.675608       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400"
	I0210 12:19:27.542457       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-032400-m02"
	I0210 12:19:27.543090       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:19:27.562043       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	I0210 12:19:32.704197       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m03"
	
	
	==> kube-controller-manager [bd1666238ae6] <==
	I0210 12:25:12.628742       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="175.506µs"
	E0210 12:25:21.466676       1 gc_controller.go:151] "Failed to get node" err="node \"multinode-032400-m02\" not found" logger="pod-garbage-collector-controller" node="multinode-032400-m02"
	I0210 12:25:26.744519       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-032400-m02\" does not exist"
	I0210 12:25:26.771500       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-032400-m02" podCIDRs=["10.244.1.0/24"]
	I0210 12:25:26.771545       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:25:26.772140       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:25:26.781605       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="23.001µs"
	I0210 12:25:26.804266       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:25:27.193015       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:25:27.752024       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:25:28.556111       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="49.001µs"
	I0210 12:25:31.724054       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:25:36.824847       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:25:42.721636       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-032400-m02"
	I0210 12:25:42.721869       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:25:42.740014       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:25:42.750053       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="152.905µs"
	I0210 12:25:46.673132       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-032400-m02"
	I0210 12:25:53.638058       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="52.602µs"
	I0210 12:25:53.826870       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="55.202µs"
	I0210 12:25:53.832238       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="116.004µs"
	I0210 12:26:03.559449       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="76.703µs"
	I0210 12:26:03.584286       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="185.306µs"
	I0210 12:26:04.926562       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="9.028213ms"
	I0210 12:26:04.927403       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="54.002µs"
	
	
	==> kube-proxy [148309413de8] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0210 11:59:18.694686       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0210 11:59:19.251769       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["172.29.136.201"]
	E0210 11:59:19.252111       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0210 11:59:19.312427       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0210 11:59:19.312556       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0210 11:59:19.312585       1 server_linux.go:170] "Using iptables Proxier"
	I0210 11:59:19.317423       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0210 11:59:19.318586       1 server.go:497] "Version info" version="v1.32.1"
	I0210 11:59:19.318681       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0210 11:59:19.321084       1 config.go:199] "Starting service config controller"
	I0210 11:59:19.321134       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0210 11:59:19.321326       1 config.go:105] "Starting endpoint slice config controller"
	I0210 11:59:19.321339       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0210 11:59:19.322061       1 config.go:329] "Starting node config controller"
	I0210 11:59:19.322091       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0210 11:59:19.421972       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0210 11:59:19.422030       1 shared_informer.go:320] Caches are synced for service config
	I0210 11:59:19.423298       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [6640b4e3d696] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0210 12:22:01.068390       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0210 12:22:01.115566       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["172.29.129.181"]
	E0210 12:22:01.115738       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0210 12:22:01.217636       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0210 12:22:01.217732       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0210 12:22:01.217759       1 server_linux.go:170] "Using iptables Proxier"
	I0210 12:22:01.222523       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0210 12:22:01.224100       1 server.go:497] "Version info" version="v1.32.1"
	I0210 12:22:01.224411       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0210 12:22:01.230640       1 config.go:199] "Starting service config controller"
	I0210 12:22:01.232948       1 config.go:105] "Starting endpoint slice config controller"
	I0210 12:22:01.233483       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0210 12:22:01.233538       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0210 12:22:01.243389       1 config.go:329] "Starting node config controller"
	I0210 12:22:01.243415       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0210 12:22:01.335845       1 shared_informer.go:320] Caches are synced for service config
	I0210 12:22:01.335895       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0210 12:22:01.345506       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [440b6adf4512] <==
	I0210 12:21:56.035084       1 serving.go:386] Generated self-signed cert in-memory
	W0210 12:21:58.137751       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0210 12:21:58.138031       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0210 12:21:58.138229       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0210 12:21:58.138350       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0210 12:21:58.239766       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.1"
	I0210 12:21:58.239885       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0210 12:21:58.246917       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0210 12:21:58.246962       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0210 12:21:58.248143       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0210 12:21:58.248624       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0210 12:21:58.347443       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [adf520f9b9d7] <==
	W0210 11:59:04.210028       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0210 11:59:04.210075       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0210 11:59:04.217612       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0210 11:59:04.217750       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0210 11:59:04.256700       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0210 11:59:04.256904       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0210 11:59:04.322264       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0210 11:59:04.322526       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0210 11:59:04.330202       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0210 11:59:04.330584       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0210 11:59:04.340076       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0210 11:59:04.340159       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0210 11:59:05.486363       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0210 11:59:05.486509       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0210 11:59:05.818248       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0210 11:59:05.818617       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0210 11:59:06.039223       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0210 11:59:06.039458       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0210 11:59:06.087664       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0210 11:59:06.087837       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0210 11:59:07.187201       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0210 12:19:35.481531       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0210 12:19:35.493085       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0210 12:19:35.493424       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	E0210 12:19:35.644426       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Feb 10 12:22:46 multinode-032400 kubelet[1648]: E0210 12:22:46.982545    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-8shfg" podUID="a3e86dc5-0523-4852-af77-3145d44eaa15"
	Feb 10 12:22:46 multinode-032400 kubelet[1648]: E0210 12:22:46.982810    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-w8rr9" podUID="e45a37bf-e7da-4129-bb7e-8be7dbe93e09"
	Feb 10 12:22:52 multinode-032400 kubelet[1648]: I0210 12:22:52.997714    1648 scope.go:117] "RemoveContainer" containerID="3ae31c3c37c9f6044b62cd6f5c446e15340e314275faee0b1de4d66baa716012"
	Feb 10 12:22:53 multinode-032400 kubelet[1648]: E0210 12:22:53.021472    1648 iptables.go:577] "Could not set up iptables canary" err=<
	Feb 10 12:22:53 multinode-032400 kubelet[1648]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Feb 10 12:22:53 multinode-032400 kubelet[1648]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 10 12:22:53 multinode-032400 kubelet[1648]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 10 12:22:53 multinode-032400 kubelet[1648]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 10 12:22:53 multinode-032400 kubelet[1648]: I0210 12:22:53.042255    1648 scope.go:117] "RemoveContainer" containerID="9f1c4e9b3353b37f1f4c40563f1e312a49acd43d398e81cab61930d654fbb0d9"
	Feb 10 12:23:03 multinode-032400 kubelet[1648]: I0210 12:23:03.312056    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e58006549b603113870cd02660dd94559d10ecb0913a1bdd2c81910c3d60a706"
	Feb 10 12:23:53 multinode-032400 kubelet[1648]: E0210 12:23:53.019427    1648 iptables.go:577] "Could not set up iptables canary" err=<
	Feb 10 12:23:53 multinode-032400 kubelet[1648]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Feb 10 12:23:53 multinode-032400 kubelet[1648]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 10 12:23:53 multinode-032400 kubelet[1648]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 10 12:23:53 multinode-032400 kubelet[1648]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 10 12:24:53 multinode-032400 kubelet[1648]: E0210 12:24:53.019896    1648 iptables.go:577] "Could not set up iptables canary" err=<
	Feb 10 12:24:53 multinode-032400 kubelet[1648]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Feb 10 12:24:53 multinode-032400 kubelet[1648]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 10 12:24:53 multinode-032400 kubelet[1648]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 10 12:24:53 multinode-032400 kubelet[1648]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Feb 10 12:25:53 multinode-032400 kubelet[1648]: E0210 12:25:53.019518    1648 iptables.go:577] "Could not set up iptables canary" err=<
	Feb 10 12:25:53 multinode-032400 kubelet[1648]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Feb 10 12:25:53 multinode-032400 kubelet[1648]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Feb 10 12:25:53 multinode-032400 kubelet[1648]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Feb 10 12:25:53 multinode-032400 kubelet[1648]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-032400 -n multinode-032400
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-032400 -n multinode-032400: (10.950041s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-032400 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (516.30s)
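
The post-mortem's status checks rely on minikube rendering the --format value as a Go text/template against the profile's status, which is why {{.APIServer}} prints that single field. A minimal sketch of that rendering; the Status struct here is a stand-in carrying only fields this report queries, not minikube's actual type:

	package main

	import (
		"os"
		"text/template"
	)

	// Status stands in for the struct minikube renders; the real one
	// carries more fields than the ones the post-mortem asks for.
	type Status struct{ Host, Kubelet, APIServer string }

	func main() {
		t := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
		_ = t.Execute(os.Stdout, Status{Host: "Running", Kubelet: "Running", APIServer: "Running"})
		// prints: Running
	}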

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (300.07s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-189000 --driver=hyperv
E0210 12:43:55.700826   11764 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-550800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0210 12:44:42.461330   11764 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-970000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-189000 --driver=hyperv: exit status 1 (4m59.7897266s)

                                                
                                                
-- stdout --
	* [NoKubernetes-189000] minikube v1.35.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5440 Build 19045.5440
	  - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=20385
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on user configuration
	* Starting "NoKubernetes-189000" primary control-plane node in "NoKubernetes-189000" cluster

                                                
                                                
-- /stdout --
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p NoKubernetes-189000 --driver=hyperv" : exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-189000 -n NoKubernetes-189000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-189000 -n NoKubernetes-189000: exit status 7 (282.4091ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-189000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (300.07s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10800.418s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-299200 create -f testdata\busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [75b0434f-87be-438c-8218-af8ed9f648a6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
panic: test timed out after 3h0m0s
	running tests:
		TestNetworkPlugins (38m33s)
		TestStartStop (38m33s)
		TestStartStop/group/default-k8s-diff-port (6m13s)
		TestStartStop/group/default-k8s-diff-port/serial (6m13s)
		TestStartStop/group/default-k8s-diff-port/serial/DeployApp (1s)
		TestStartStop/group/embed-certs (9m48s)
		TestStartStop/group/embed-certs/serial (9m48s)
		TestStartStop/group/embed-certs/serial/SecondStart (3m13s)
		TestStartStop/group/no-preload (8m16s)
		TestStartStop/group/no-preload/serial (8m16s)
		TestStartStop/group/no-preload/serial/SecondStart (29s)
		TestStartStop/group/old-k8s-version (12m10s)
		TestStartStop/group/old-k8s-version/serial (12m10s)
		TestStartStop/group/old-k8s-version/serial/SecondStart (4m46s)

                                                
                                                
goroutine 2534 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2373 +0x385
created by time.goFunc
	/usr/local/go/src/time/sleep.go:215 +0x2d
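
Goroutine 2534 is the testing package's timeout alarm firing: (*M).startAlarm arms a timer for the -timeout value (3h here) and panics when it expires, which is what produced the "test timed out after 3h0m0s" banner and the full goroutine dump that follows. A minimal sketch of that mechanism with a shortened deadline:

	package main

	import (
		"fmt"
		"runtime/debug"
		"time"
	)

	func main() {
		timeout := 100 * time.Millisecond // stands in for the suite's 3h limit
		time.AfterFunc(timeout, func() {
			debug.SetTraceback("all") // dump every goroutine, as in this report
			panic(fmt.Sprintf("test timed out after %v", timeout))
		})
		select {} // stands in for tests that never finish
	}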

                                                
                                                
goroutine 1 [chan receive, 7 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1651 +0x489
testing.tRunner(0xc000160340, 0xc0007cdbc8)
	/usr/local/go/src/testing/testing.go:1696 +0x104
testing.runTests(0xc000c0c000, {0x5172ac0, 0x2b, 0x2b}, {0xffffffffffffffff?, 0x24d9f9?, 0x5199200?})
	/usr/local/go/src/testing/testing.go:2166 +0x43d
testing.(*M).Run(0xc000574460)
	/usr/local/go/src/testing/testing.go:2034 +0x64a
k8s.io/minikube/test/integration.TestMain(0xc000574460)
	/home/jenkins/workspace/Build_Cross/test/integration/main_test.go:62 +0x8b
main.main()
	_testmain.go:131 +0xa8

                                                
                                                
goroutine 2363 [chan receive, 7 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000b90a40, 0xc000078460)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2370
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.1/transport/cache.go:122 +0x569

                                                
                                                
goroutine 2462 [select, 3 minutes]:
os/exec.(*Cmd).watchCtx(0xc000c6a480, 0xc0006da770)
	/usr/local/go/src/os/exec/exec.go:773 +0xb5
created by os/exec.(*Cmd).Start in goroutine 2459
	/usr/local/go/src/os/exec/exec.go:759 +0x9e9

                                                
                                                
goroutine 2530 [select]:
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext({0x379e8d8, 0xc00089b3b0}, {0x378d8f0, 0xc0006852e0}, 0x1, 0x0, 0xc00008ba20)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.1/pkg/util/wait/loop.go:66 +0x1d0
k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout({0x379e8d8?, 0xc0008563f0?}, 0x3b9aca00, 0xc00008bc18?, 0x1, 0xc00008ba20)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.1/pkg/util/wait/poll.go:48 +0xa5
k8s.io/minikube/test/integration.PodWait({0x379e8d8, 0xc0008563f0}, 0xc000535380, {0xc001b100e0, 0x1c}, {0x2a71d8f, 0x7}, {0x2a9e8ab, 0x18}, 0x6fc23ac000)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:371 +0x385
k8s.io/minikube/test/integration.testPodScheduling({0x379e8d8, 0xc0008563f0}, 0xc000535380, {0xc001b100e0, 0x1c})
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:325 +0x345
k8s.io/minikube/test/integration.validateDeploying({0x379e8d8, 0xc0008563f0}, 0xc000535380, {0xc001b100e0, 0x1c}, {0x2a96120?, 0xc000c79f60?}, {0x2df5b3?, 0x1c6159b5e5f?}, {0xc0001c0600, ...})
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:194 +0xc5
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc000535380)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:154 +0x66
testing.tRunner(0xc000535380, 0xc00068e680)
	/usr/local/go/src/testing/testing.go:1690 +0xcb
created by testing.(*T).Run in goroutine 2357
	/usr/local/go/src/testing/testing.go:1743 +0x377
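
Goroutine 2530 is the DeployApp wait itself: PodWait drives wait.PollUntilContextTimeout with a 1s interval (the 0x3b9aca00 argument is 1e9 ns) and the test's 8m0s budget (0x6fc23ac000 ns is exactly 480 s). A minimal sketch of that polling shape; the condition below is a stand-in, not minikube's actual pod check:

	package main

	import (
		"context"
		"fmt"
		"time"

		"k8s.io/apimachinery/pkg/util/wait"
	)

	func main() {
		err := wait.PollUntilContextTimeout(context.Background(), time.Second, 8*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				// Stand-in condition: the real PodWait lists pods by label and
				// checks phase/readiness; (false, nil) means keep polling.
				return false, nil
			})
		fmt.Println(err) // after 8m: context deadline exceeded, i.e. the pod never came up
	}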

                                                
                                                
goroutine 119 [sync.Cond.Wait, 3 minutes]:
sync.runtime_notifyListWait(0xc000b1b990, 0x3c)
	/usr/local/go/src/runtime/sema.go:587 +0x15d
sync.(*Cond).Wait(0xc001539d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x37b26e0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.1/util/workqueue/queue.go:277 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000b1b9c0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000cd8000, {0x3760080, 0xc000ce2030}, 0x1, 0xc000078460)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000cd8000, 0x3b9aca00, 0x0, 0x1, 0xc000078460)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 135
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.1/transport/cert_rotation.go:143 +0x1cf

                                                
                                                
goroutine 2425 [syscall]:
syscall.Syscall6(0x1f0c45?, 0x199a68f0a28?, 0xc0014c9b4d?, 0x1915c5?, 0x58?, 0x29d1b00?, 0x1010177ab01?, 0x199ec2e9e18?)
	/usr/local/go/src/runtime/syscall_windows.go:463 +0x38
syscall.readFile(0x478, {0xc0014c02b0?, 0x550, 0x249bdf?}, 0x2?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1019 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:443
syscall.Read(0xc001eb0fc8?, {0xc0014c02b0?, 0x800?, 0x0?})
	/usr/local/go/src/syscall/syscall_windows.go:422 +0x2d
internal/poll.(*FD).Read(0xc001eb0fc8, {0xc0014c02b0, 0x550, 0x550})
	/usr/local/go/src/internal/poll/fd_windows.go:424 +0x1b9
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000126810, {0xc0014c02b0?, 0xc0014c9d01?, 0x20c?})
	/usr/local/go/src/os/file.go:124 +0x52
bytes.(*Buffer).ReadFrom(0xc000565dd0, {0x375e660, 0xc00077e3a0})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x375e7e0, 0xc000565dd0}, {0x375e660, 0xc00077e3a0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc000812b60?, {0x375e7e0, 0xc000565dd0})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0x511b2d0?, {0x375e7e0?, 0xc000565dd0?})
	/usr/local/go/src/os/file.go:253 +0x49
io.copyBuffer({0x375e7e0, 0xc000565dd0}, {0x375e740, 0xc000126810}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:580 +0x34
os/exec.(*Cmd).Start.func2(0xc001865490?)
	/usr/local/go/src/os/exec/exec.go:733 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2424
	/usr/local/go/src/os/exec/exec.go:732 +0xa25

                                                
                                                
goroutine 2333 [chan receive, 3 minutes]:
testing.(*T).Run(0xc0017c64e0, {0x2a7a71a?, 0x1?}, 0xc000d04d00)
	/usr/local/go/src/testing/testing.go:1751 +0x392
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc0017c64e0)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:153 +0x2af
testing.tRunner(0xc0017c64e0, 0xc00082e180)
	/usr/local/go/src/testing/testing.go:1690 +0xcb
created by testing.(*T).Run in goroutine 2032
	/usr/local/go/src/testing/testing.go:1743 +0x377

                                                
                                                
goroutine 120 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x379ebf0, 0xc000078460}, 0xc000737f50, 0xc000737f98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x379ebf0, 0xc000078460}, 0xf8?, 0xc000737f50, 0xc000737f98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x379ebf0?, 0xc000078460?}, 0xc0005356c0?, 0x2dfe40?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x2e0d05?, 0xc0005356c0?, 0xc000b908c0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 135
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.1/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 2413 [chan receive]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000b90900, 0xc000078460)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2465
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.1/transport/cache.go:122 +0x569

                                                
                                                
goroutine 2027 [chan receive, 13 minutes]:
testing.(*T).Run(0xc000b141a0, {0x2a6f652?, 0x0?}, 0xc00082e980)
	/usr/local/go/src/testing/testing.go:1751 +0x392
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc000b141a0)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:128 +0xad9
testing.tRunner(0xc000b141a0, 0xc0004c4040)
	/usr/local/go/src/testing/testing.go:1690 +0xcb
created by testing.(*T).Run in goroutine 2026
	/usr/local/go/src/testing/testing.go:1743 +0x377

                                                
                                                
goroutine 121 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 120
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 134 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x37af8a0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.1/util/workqueue/delaying_queue.go:311 +0x334
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 133
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.1/util/workqueue/delaying_queue.go:148 +0x245

                                                
                                                
goroutine 135 [chan receive, 173 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000b1b9c0, 0xc000078460)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 133
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.1/transport/cache.go:122 +0x569

                                                
                                                
goroutine 2519 [syscall]:
syscall.Syscall6(0x1f0c45?, 0x0?, 0x0?, 0xc000000000?, 0xc001ed5c20?, 0xc001ed5ba8?, 0x101001882c6?, 0x199ec2e9f58?)
	/usr/local/go/src/runtime/syscall_windows.go:463 +0x38
syscall.readFile(0x55c, {0xc0015d61f6?, 0x20a, 0x249bdf?}, 0x0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1019 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:443
syscall.Read(0xc001eb1208?, {0xc0015d61f6?, 0x400?, 0x0?})
	/usr/local/go/src/syscall/syscall_windows.go:422 +0x2d
internal/poll.(*FD).Read(0xc001eb1208, {0xc0015d61f6, 0x20a, 0x20a})
	/usr/local/go/src/internal/poll/fd_windows.go:424 +0x1b9
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0005a0eb8, {0xc0015d61f6?, 0xc001f1ab30?, 0x6d?})
	/usr/local/go/src/os/file.go:124 +0x52
bytes.(*Buffer).ReadFrom(0xc001c08a20, {0x375e660, 0xc000b88278})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x375e7e0, 0xc001c08a20}, {0x375e660, 0xc000b88278}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x375e7e0, 0xc001c08a20})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0x511b2d0?, {0x375e7e0?, 0xc001c08a20?})
	/usr/local/go/src/os/file.go:253 +0x49
io.copyBuffer({0x375e7e0, 0xc001c08a20}, {0x375e740, 0xc0005a0eb8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:580 +0x34
os/exec.(*Cmd).Start.func2(0xc000cd9b40?)
	/usr/local/go/src/os/exec/exec.go:733 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2518
	/usr/local/go/src/os/exec/exec.go:732 +0xa25

                                                
                                                
goroutine 2357 [chan receive]:
testing.(*T).Run(0xc0017c7ba0, {0x2a767c1?, 0xc0015daa80?}, 0xc00068e680)
	/usr/local/go/src/testing/testing.go:1751 +0x392
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc0017c7ba0)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:153 +0x2af
testing.tRunner(0xc0017c7ba0, 0xc00079a980)
	/usr/local/go/src/testing/testing.go:1690 +0xcb
created by testing.(*T).Run in goroutine 2029
	/usr/local/go/src/testing/testing.go:1743 +0x377

                                                
                                                
goroutine 2412 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x37af8a0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.1/util/workqueue/delaying_queue.go:311 +0x334
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 2465
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.1/util/workqueue/delaying_queue.go:148 +0x245

                                                
                                                
goroutine 819 [chan receive, 152 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc00053d180, 0xc000078460)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 777
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.1/transport/cache.go:122 +0x569

                                                
                                                
goroutine 2492 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x37af8a0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.1/util/workqueue/delaying_queue.go:311 +0x334
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 2530
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.1/util/workqueue/delaying_queue.go:148 +0x245

                                                
                                                
goroutine 2351 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2350
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 662 [IO wait, 162 minutes]:
internal/poll.runtime_pollWait(0x199ec2e1dd0, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0x248635?, 0x1f09dd?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.execIO(0xc00048a2a0, 0xc001eb3b88)
	/usr/local/go/src/internal/poll/fd_windows.go:177 +0x105
internal/poll.(*FD).acceptOne(0xc00048a288, 0x4dc, {0xc0008023c0?, 0x2000?, 0x0?}, 0x1eb3c1c?)
	/usr/local/go/src/internal/poll/fd_windows.go:946 +0x65
internal/poll.(*FD).Accept(0xc00048a288, 0xc001eb3d68)
	/usr/local/go/src/internal/poll/fd_windows.go:980 +0x1b6
net.(*netFD).accept(0xc00048a288)
	/usr/local/go/src/net/fd_windows.go:182 +0x4b
net.(*TCPListener).accept(0xc000cc0640)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x1e
net.(*TCPListener).Accept(0xc000cc0640)
	/usr/local/go/src/net/tcpsock.go:372 +0x30
net/http.(*Server).Serve(0xc00037a870, {0x378d2c0, 0xc000cc0640})
	/usr/local/go/src/net/http/server.go:3330 +0x30c
net/http.(*Server).ListenAndServe(0xc00037a870)
	/usr/local/go/src/net/http/server.go:3259 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(0xd?, 0xc000b14b60)
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2230 +0x18
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 659
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2229 +0x129
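
Goroutine 662 is the functional suite's local HTTP proxy, parked in Accept since TestFunctional ran 162 minutes earlier: startHTTPProxy launches an http.Server in a background goroutine and leaves it serving for the rest of the run. A minimal sketch of that pattern; the address and empty handler are stand-ins:

	package main

	import (
		"log"
		"net/http"
	)

	func main() {
		// Serve in a background goroutine, as startHTTPProxy does; the
		// goroutine then blocks in Accept (the "IO wait" state above).
		go func() {
			log.Println(http.ListenAndServe("127.0.0.1:8080", http.NewServeMux()))
		}()
		select {} // stands in for the remainder of the test run
	}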

                                                
                                                
goroutine 2083 [chan receive, 39 minutes]:
testing.(*testContext).waitParallel(0xc0006a00a0)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.(*T).Parallel(0xc0017c7380)
	/usr/local/go/src/testing/testing.go:1485 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc0017c7380)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0017c7380)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x317
testing.tRunner(0xc0017c7380, 0xc00082e700)
	/usr/local/go/src/testing/testing.go:1690 +0xcb
created by testing.(*T).Run in goroutine 2059
	/usr/local/go/src/testing/testing.go:1743 +0x377

                                                
                                                
goroutine 818 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x37af8a0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.1/util/workqueue/delaying_queue.go:311 +0x334
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 777
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.1/util/workqueue/delaying_queue.go:148 +0x245

                                                
                                                
goroutine 2341 [chan receive]:
testing.(*T).Run(0xc0017c7860, {0x2a7a71a?, 0xc0015456c0?}, 0xc00079af80)
	/usr/local/go/src/testing/testing.go:1751 +0x392
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc0017c7860)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:153 +0x2af
testing.tRunner(0xc0017c7860, 0xc00082eb00)
	/usr/local/go/src/testing/testing.go:1690 +0xcb
created by testing.(*T).Run in goroutine 2030
	/usr/local/go/src/testing/testing.go:1743 +0x377

                                                
                                                
goroutine 2030 [chan receive, 9 minutes]:
testing.(*T).Run(0xc000b15380, {0x2a6f652?, 0x0?}, 0xc00082eb00)
	/usr/local/go/src/testing/testing.go:1751 +0x392
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc000b15380)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:128 +0xad9
testing.tRunner(0xc000b15380, 0xc0004c4100)
	/usr/local/go/src/testing/testing.go:1690 +0xcb
created by testing.(*T).Run in goroutine 2026
	/usr/local/go/src/testing/testing.go:1743 +0x377

                                                
                                                
goroutine 2461 [syscall, 3 minutes]:
syscall.Syscall6(0x1f0c45?, 0x199a68f0108?, 0x20077?, 0xc001883f80?, 0x10?, 0x10?, 0x101001882c6?, 0x199ec185838?)
	/usr/local/go/src/runtime/syscall_windows.go:463 +0x38
syscall.readFile(0x3c4, {0xc0015b82a6?, 0x1d5a, 0x249bdf?}, 0x19131e?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1019 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:443
syscall.Read(0xc001774488?, {0xc0015b82a6?, 0x4000?, 0x0?})
	/usr/local/go/src/syscall/syscall_windows.go:422 +0x2d
internal/poll.(*FD).Read(0xc001774488, {0xc0015b82a6, 0x1d5a, 0x1d5a})
	/usr/local/go/src/internal/poll/fd_windows.go:424 +0x1b9
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0005a0df0, {0xc0015b82a6?, 0xc001783330?, 0x1e62?})
	/usr/local/go/src/os/file.go:124 +0x52
bytes.(*Buffer).ReadFrom(0xc00163a3c0, {0x375e660, 0xc00077e0d0})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x375e7e0, 0xc00163a3c0}, {0x375e660, 0xc00077e0d0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x375e7e0, 0xc00163a3c0})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0x511b2d0?, {0x375e7e0?, 0xc00163a3c0?})
	/usr/local/go/src/os/file.go:253 +0x49
io.copyBuffer({0x375e7e0, 0xc00163a3c0}, {0x375e740, 0xc0005a0df0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:580 +0x34
os/exec.(*Cmd).Start.func2(0xc0018668d0?)
	/usr/local/go/src/os/exec/exec.go:733 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2459
	/usr/local/go/src/os/exec/exec.go:732 +0xa25

                                                
                                                
goroutine 1046 [chan send, 147 minutes]:
os/exec.(*Cmd).watchCtx(0xc001d6c180, 0xc001ca6620)
	/usr/local/go/src/os/exec/exec.go:798 +0x3e5
created by os/exec.(*Cmd).Start in goroutine 768
	/usr/local/go/src/os/exec/exec.go:759 +0x9e9

                                                
                                                
goroutine 805 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x379ebf0, 0xc000078460}, 0xc0014f9f50, 0xc0014f9f98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x379ebf0, 0xc000078460}, 0x90?, 0xc0014f9f50, 0xc0014f9f98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x379ebf0?, 0xc000078460?}, 0x0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0014f9fd0?, 0x31f724?, 0x3220202020353539?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 819
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.1/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 804 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc00053d150, 0x36)
	/usr/local/go/src/runtime/sema.go:587 +0x15d
sync.(*Cond).Wait(0xc001eb7d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x37b26e0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.1/util/workqueue/queue.go:277 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc00053d180)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001867430, {0x3760080, 0xc00186f890}, 0x1, 0xc000078460)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001867430, 0x3b9aca00, 0x0, 0x1, 0xc000078460)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 819
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.1/transport/cert_rotation.go:143 +0x1cf
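
Goroutines 119, 804, and 2524 all show the same client-go worker shape: a typed workqueue consumer blocking in Get (the sync.Cond.Wait frames) while the wait helpers rerun it until the stop channel closes. A minimal sketch of that shape; the item type and processing are stand-ins for the cert-rotation work:

	package main

	import (
		"fmt"
		"time"

		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/util/workqueue"
	)

	func main() {
		queue := workqueue.NewTyped[string]()
		stop := make(chan struct{})

		worker := func() {
			for {
				item, shutdown := queue.Get() // blocks in sync.Cond.Wait, as above
				if shutdown {
					return
				}
				fmt.Println("processing", item)
				queue.Done(item)
			}
		}
		go wait.Until(worker, time.Second, stop) // rerun worker until stop closes

		queue.Add("rotate-client-cert")
		time.Sleep(100 * time.Millisecond)
		queue.ShutDown()
		close(stop)
	}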

                                                
                                                
goroutine 806 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 805
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.1/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 2546 [IO wait]:
internal/poll.runtime_pollWait(0x199ec2e1cb8, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0xaa7d31e31cb8a6ba?, 0x699a74d9e910d862?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.execIO(0xc00048a7a0, 0x341cd18)
	/usr/local/go/src/internal/poll/fd_windows.go:177 +0x105
internal/poll.(*FD).Read(0xc00048a788, {0xc0014a0c00, 0xc00, 0xc00})
	/usr/local/go/src/internal/poll/fd_windows.go:438 +0x2a7
net.(*netFD).Read(0xc00048a788, {0xc0014a0c00?, 0x10?, 0xc0018018a0?})
	/usr/local/go/src/net/fd_posix.go:55 +0x25
net.(*conn).Read(0xc000b88408, {0xc0014a0c00?, 0xc0014a0c05?, 0x9bf?})
	/usr/local/go/src/net/net.go:189 +0x45
crypto/tls.(*atLeastReader).Read(0xc001b5d4a0, {0xc0014a0c00?, 0x0?, 0xc001b5d4a0?})
	/usr/local/go/src/crypto/tls/conn.go:809 +0x3b
bytes.(*Buffer).ReadFrom(0xc00015f438, {0x3760660, 0xc001b5d4a0})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
crypto/tls.(*Conn).readFromUntil(0xc00015f188, {0x199ec52f400, 0xc001b5d1d0}, 0xc001801a10?)
	/usr/local/go/src/crypto/tls/conn.go:831 +0xde
crypto/tls.(*Conn).readRecordOrCCS(0xc00015f188, 0x0)
	/usr/local/go/src/crypto/tls/conn.go:629 +0x3cf
crypto/tls.(*Conn).readRecord(...)
	/usr/local/go/src/crypto/tls/conn.go:591
crypto/tls.(*Conn).Read(0xc00015f188, {0xc001559000, 0x1000, 0xc0015daa80?})
	/usr/local/go/src/crypto/tls/conn.go:1385 +0x150
bufio.(*Reader).Read(0xc0009bd980, {0xc00023ac80, 0x9, 0x5116c10?})
	/usr/local/go/src/bufio/bufio.go:241 +0x197
io.ReadAtLeast({0x375e880, 0xc0009bd980}, {0xc00023ac80, 0x9, 0x9}, 0x9)
	/usr/local/go/src/io/io.go:335 +0x90
io.ReadFull(...)
	/usr/local/go/src/io/io.go:354
golang.org/x/net/http2.readFrameHeader({0xc00023ac80, 0x9, 0x1fa745?}, {0x375e880?, 0xc0009bd980?})
	/home/jenkins/go/pkg/mod/golang.org/x/net@v0.34.0/http2/frame.go:237 +0x65
golang.org/x/net/http2.(*Framer).ReadFrame(0xc00023ac40)
	/home/jenkins/go/pkg/mod/golang.org/x/net@v0.34.0/http2/frame.go:501 +0x85
golang.org/x/net/http2.(*clientConnReadLoop).run(0xc001801fa8)
	/home/jenkins/go/pkg/mod/golang.org/x/net@v0.34.0/http2/transport.go:2505 +0xda
golang.org/x/net/http2.(*ClientConn).readLoop(0xc001a57880)
	/home/jenkins/go/pkg/mod/golang.org/x/net@v0.34.0/http2/transport.go:2381 +0x7c
created by golang.org/x/net/http2.(*Transport).newClientConn in goroutine 2497
	/home/jenkins/go/pkg/mod/golang.org/x/net@v0.34.0/http2/transport.go:912 +0xdfb

                                                
                                                
goroutine 2459 [syscall, 3 minutes]:
syscall.Syscall(0x10?, 0xc001823b88?, 0x1000000195ac5?, 0x1e?, 0x3?)
	/usr/local/go/src/runtime/syscall_windows.go:457 +0x29
syscall.WaitForSingleObject(0x41c, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1140 +0x5d
os.(*Process).wait(0xc000c6a480?)
	/usr/local/go/src/os/exec_windows.go:28 +0xe6
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:358
os/exec.(*Cmd).Wait(0xc000c6a480)
	/usr/local/go/src/os/exec/exec.go:906 +0x45
os/exec.(*Cmd).Run(0xc000c6a480)
	/usr/local/go/src/os/exec/exec.go:610 +0x2d
k8s.io/minikube/test/integration.Run(0xc000831040, 0xc000c6a480)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.validateSecondStart({0x379e8d8, 0xc0000f61c0}, 0xc000831040, {0xc0004fa120, 0x12}, {0x7ffc53db5f50?, 0xc001823f60?}, {0x2df5b3?, 0x1c5a3263fd4?}, {0xc0006eea00, ...})
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:254 +0xce
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc000831040)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:154 +0x66
testing.tRunner(0xc000831040, 0xc000d04d00)
	/usr/local/go/src/testing/testing.go:1690 +0xcb
created by testing.(*T).Run in goroutine 2333
	/usr/local/go/src/testing/testing.go:1743 +0x377

                                                
                                                
goroutine 2059 [chan receive, 39 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1651 +0x489
testing.tRunner(0xc0017c6680, 0xc001abe1c8)
	/usr/local/go/src/testing/testing.go:1696 +0x104
created by testing.(*T).Run in goroutine 1984
	/usr/local/go/src/testing/testing.go:1743 +0x377

                                                
                                                
goroutine 2493 [chan receive]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0008ebc00, 0xc000078460)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.1/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2530
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.1/transport/cache.go:122 +0x569

                                                
                                                
goroutine 2061 [chan receive, 39 minutes]:
testing.(*testContext).waitParallel(0xc0006a00a0)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.(*T).Parallel(0xc0017c69c0)
	/usr/local/go/src/testing/testing.go:1485 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc0017c69c0)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0017c69c0)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x317
testing.tRunner(0xc0017c69c0, 0xc00082e300)
	/usr/local/go/src/testing/testing.go:1690 +0xcb
created by testing.(*T).Run in goroutine 2059
	/usr/local/go/src/testing/testing.go:1743 +0x377

                                                
                                                
goroutine 2062 [chan receive, 39 minutes]:
testing.(*testContext).waitParallel(0xc0006a00a0)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.(*T).Parallel(0xc0017c6b60)
	/usr/local/go/src/testing/testing.go:1485 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc0017c6b60)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0017c6b60)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x317
testing.tRunner(0xc0017c6b60, 0xc00082e380)
	/usr/local/go/src/testing/testing.go:1690 +0xcb
created by testing.(*T).Run in goroutine 2059
	/usr/local/go/src/testing/testing.go:1743 +0x377

                                                
                                                
goroutine 2524 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc0008ebbd0, 0x0)
	/usr/local/go/src/runtime/sema.go:587 +0x15d
sync.(*Cond).Wait(0xc001a2dd80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x37b26e0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.1/util/workqueue/queue.go:277 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0008ebc00)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001866bb0, {0x3760080, 0xc000bffe90}, 0x1, 0xc000078460)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001866bb0, 0x3b9aca00, 0x0, 0x1, 0xc000078460)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2493
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.1/transport/cert_rotation.go:143 +0x1cf
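
The client-go goroutines above (dynamicClientCert workers) idle inside the wait.BackoffUntil / JitterUntil / Until chain: the worker function is re-invoked every period until the stop channel closes, so an idle worker always shows exactly this stack. A hedged sketch of that loop using the same k8s.io/apimachinery helper; the worker body here merely stands in for dynamicClientCert.runWorker:

package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	stopCh := make(chan struct{})
	// wait.Until re-runs the function every second until stopCh closes.
	go wait.Until(func() { fmt.Println("process one work item") }, time.Second, stopCh)

	time.Sleep(3 * time.Second)
	close(stopCh)
}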

goroutine 2426 [syscall]:
syscall.Syscall6(0x1f0c45?, 0x199a68f0a28?, 0x8000?, 0xc001694e01?, 0xb0?, 0x10?, 0x1010177abb0?, 0x199ec2e4c80?)
	/usr/local/go/src/runtime/syscall_windows.go:463 +0x38
syscall.readFile(0x78c, {0xc00160cab1?, 0x154f, 0x249bdf?}, 0x2?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1019 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:443
syscall.Read(0xc001eb1448?, {0xc00160cab1?, 0x8000?, 0x0?})
	/usr/local/go/src/syscall/syscall_windows.go:422 +0x2d
internal/poll.(*FD).Read(0xc001eb1448, {0xc00160cab1, 0x154f, 0x154f})
	/usr/local/go/src/internal/poll/fd_windows.go:424 +0x1b9
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000126848, {0xc00160cab1?, 0xc0018f1d01?, 0x3e34?})
	/usr/local/go/src/os/file.go:124 +0x52
bytes.(*Buffer).ReadFrom(0xc000b801b0, {0x375e660, 0xc0005a0e18})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x375e7e0, 0xc000b801b0}, {0x375e660, 0xc0005a0e18}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc000812b60?, {0x375e7e0, 0xc000b801b0})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0x511b2d0?, {0x375e7e0?, 0xc000b801b0?})
	/usr/local/go/src/os/file.go:253 +0x49
io.copyBuffer({0x375e7e0, 0xc000b801b0}, {0x375e740, 0xc000126848}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:580 +0x34
os/exec.(*Cmd).Start.func2(0xc001c8c150?)
	/usr/local/go/src/os/exec/exec.go:733 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2424
	/usr/local/go/src/os/exec/exec.go:732 +0xa25
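
Goroutine 2426 above is the copier that os/exec starts for each output pipe: because the harness hands Cmd a non-*os.File writer, Start creates an os.Pipe plus a goroutine that io.Copy's the child's output into the writer, and that goroutine blocks in syscall.ReadFile until the child writes or exits. A minimal standard-library sketch of the pattern (not the harness itself):

package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

func main() {
	var out bytes.Buffer
	cmd := exec.Command("go", "version")
	cmd.Stdout = &out // non-*os.File writer: exec spawns the pipe-copying goroutine
	if err := cmd.Run(); err != nil {
		fmt.Println("run failed:", err)
		return
	}
	fmt.Print(out.String())
}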

goroutine 2350 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x379ebf0, 0xc000078460}, 0xc001c7bf50, 0xc001c7bf98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x379ebf0, 0xc000078460}, 0xa0?, 0xc001c7bf50, 0xc001c7bf98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x379ebf0?, 0xc000078460?}, 0x0?, 0xc000109340?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc001c7bfd0?, 0x31f724?, 0xc001a56c40?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2363
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.1/transport/cert_rotation.go:145 +0x27a

goroutine 2084 [chan receive, 39 minutes]:
testing.(*testContext).waitParallel(0xc0006a00a0)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.(*T).Parallel(0xc0017c7520)
	/usr/local/go/src/testing/testing.go:1485 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc0017c7520)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0017c7520)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x317
testing.tRunner(0xc0017c7520, 0xc00082e780)
	/usr/local/go/src/testing/testing.go:1690 +0xcb
created by testing.(*T).Run in goroutine 2059
	/usr/local/go/src/testing/testing.go:1743 +0x377

goroutine 2444 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2443
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.1/pkg/util/wait/poll.go:280 +0xbb

goroutine 2064 [chan receive, 39 minutes]:
testing.(*testContext).waitParallel(0xc0006a00a0)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.(*T).Parallel(0xc0017c6ea0)
	/usr/local/go/src/testing/testing.go:1485 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc0017c6ea0)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0017c6ea0)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x317
testing.tRunner(0xc0017c6ea0, 0xc00082e580)
	/usr/local/go/src/testing/testing.go:1690 +0xcb
created by testing.(*T).Run in goroutine 2059
	/usr/local/go/src/testing/testing.go:1743 +0x377

goroutine 2521 [select]:
os/exec.(*Cmd).watchCtx(0xc000b2d080, 0xc001ca7490)
	/usr/local/go/src/os/exec/exec.go:773 +0xb5
created by os/exec.(*Cmd).Start in goroutine 2518
	/usr/local/go/src/os/exec/exec.go:759 +0x9e9

goroutine 2322 [chan receive, 5 minutes]:
testing.(*T).Run(0xc000b15860, {0x2a7a71a?, 0x18965e?}, 0xc00068e380)
	/usr/local/go/src/testing/testing.go:1751 +0x392
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc000b15860)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:153 +0x2af
testing.tRunner(0xc000b15860, 0xc00082e980)
	/usr/local/go/src/testing/testing.go:1690 +0xcb
created by testing.(*T).Run in goroutine 2027
	/usr/local/go/src/testing/testing.go:1743 +0x377

goroutine 2443 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x379ebf0, 0xc000078460}, 0xc001713f50, 0xc001713f98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x379ebf0, 0xc000078460}, 0x90?, 0xc001713f50, 0xc001713f98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x379ebf0?, 0xc000078460?}, 0x0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc001713fd0?, 0x31f724?, 0xc001713fa8?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2413
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.1/transport/cert_rotation.go:145 +0x27a

goroutine 2065 [chan receive, 39 minutes]:
testing.(*testContext).waitParallel(0xc0006a00a0)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.(*T).Parallel(0xc0017c7040)
	/usr/local/go/src/testing/testing.go:1485 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc0017c7040)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0017c7040)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x317
testing.tRunner(0xc0017c7040, 0xc00082e600)
	/usr/local/go/src/testing/testing.go:1690 +0xcb
created by testing.(*T).Run in goroutine 2059
	/usr/local/go/src/testing/testing.go:1743 +0x377

goroutine 2427 [select, 5 minutes]:
os/exec.(*Cmd).watchCtx(0xc000b2d200, 0xc001695110)
	/usr/local/go/src/os/exec/exec.go:773 +0xb5
created by os/exec.(*Cmd).Start in goroutine 2424
	/usr/local/go/src/os/exec/exec.go:759 +0x9e9

goroutine 2082 [chan receive, 39 minutes]:
testing.(*testContext).waitParallel(0xc0006a00a0)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.(*T).Parallel(0xc0017c71e0)
	/usr/local/go/src/testing/testing.go:1485 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc0017c71e0)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0017c71e0)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x317
testing.tRunner(0xc0017c71e0, 0xc00082e680)
	/usr/local/go/src/testing/testing.go:1690 +0xcb
created by testing.(*T).Run in goroutine 2059
	/usr/local/go/src/testing/testing.go:1743 +0x377

goroutine 2060 [chan receive, 39 minutes]:
testing.(*testContext).waitParallel(0xc0006a00a0)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.(*T).Parallel(0xc0017c6820)
	/usr/local/go/src/testing/testing.go:1485 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc0017c6820)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0017c6820)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x317
testing.tRunner(0xc0017c6820, 0xc00082e100)
	/usr/local/go/src/testing/testing.go:1690 +0xcb
created by testing.(*T).Run in goroutine 2059
	/usr/local/go/src/testing/testing.go:1743 +0x377

goroutine 2063 [chan receive, 39 minutes]:
testing.(*testContext).waitParallel(0xc0006a00a0)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.(*T).Parallel(0xc0017c6d00)
	/usr/local/go/src/testing/testing.go:1485 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc0017c6d00)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0017c6d00)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x317
testing.tRunner(0xc0017c6d00, 0xc00082e500)
	/usr/local/go/src/testing/testing.go:1690 +0xcb
created by testing.(*T).Run in goroutine 2059
	/usr/local/go/src/testing/testing.go:1743 +0x377

goroutine 2526 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.1/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2525
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.1/pkg/util/wait/poll.go:280 +0xbb

goroutine 2028 [chan receive, 39 minutes]:
testing.(*testContext).waitParallel(0xc0006a00a0)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.(*T).Parallel(0xc000b144e0)
	/usr/local/go/src/testing/testing.go:1485 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc000b144e0)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc000b144e0)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:92 +0x45
testing.tRunner(0xc000b144e0, 0xc0004c4080)
	/usr/local/go/src/testing/testing.go:1690 +0xcb
created by testing.(*T).Run in goroutine 2026
	/usr/local/go/src/testing/testing.go:1743 +0x377

goroutine 2029 [chan receive, 7 minutes]:
testing.(*T).Run(0xc000b151e0, {0x2a6f652?, 0x0?}, 0xc00079a980)
	/usr/local/go/src/testing/testing.go:1751 +0x392
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc000b151e0)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:128 +0xad9
testing.tRunner(0xc000b151e0, 0xc0004c40c0)
	/usr/local/go/src/testing/testing.go:1690 +0xcb
created by testing.(*T).Run in goroutine 2026
	/usr/local/go/src/testing/testing.go:1743 +0x377

goroutine 1984 [chan receive, 39 minutes]:
testing.(*T).Run(0xc0005349c0, {0x2a6e270?, 0xc000743f60?}, 0xc001abe1c8)
	/usr/local/go/src/testing/testing.go:1751 +0x392
k8s.io/minikube/test/integration.TestNetworkPlugins(0xc0005349c0)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:52 +0xd3
testing.tRunner(0xc0005349c0, 0x341c108)
	/usr/local/go/src/testing/testing.go:1690 +0xcb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1743 +0x377

goroutine 2525 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x379ebf0, 0xc000078460}, 0xc001717f50, 0xc001717f98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.1/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x379ebf0, 0xc000078460}, 0xa0?, 0xc001717f50, 0xc001717f98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.1/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x379ebf0?, 0xc000078460?}, 0x0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.1/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc001717fd0?, 0x31f724?, 0xc000109ce0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.1/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2493
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.1/transport/cert_rotation.go:145 +0x27a

goroutine 2362 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x37af8a0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.1/util/workqueue/delaying_queue.go:311 +0x334
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 2370
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.1/util/workqueue/delaying_queue.go:148 +0x245

goroutine 2042 [chan receive, 39 minutes]:
testing.(*T).Run(0xc000535860, {0x2a6e270?, 0x2df5b3?}, 0x341c340)
	/usr/local/go/src/testing/testing.go:1751 +0x392
k8s.io/minikube/test/integration.TestStartStop(0xc000535860)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:46 +0x35
testing.tRunner(0xc000535860, 0x341c150)
	/usr/local/go/src/testing/testing.go:1690 +0xcb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1743 +0x377

goroutine 2026 [chan receive, 39 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1651 +0x489
testing.tRunner(0xc000b14000, 0x341c340)
	/usr/local/go/src/testing/testing.go:1696 +0x104
created by testing.(*T).Run in goroutine 2042
	/usr/local/go/src/testing/testing.go:1743 +0x377

goroutine 2424 [syscall, 5 minutes]:
syscall.Syscall(0x10?, 0xc00150fb88?, 0x1000000195ac5?, 0x1e?, 0x3?)
	/usr/local/go/src/runtime/syscall_windows.go:457 +0x29
syscall.WaitForSingleObject(0x4bc, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1140 +0x5d
os.(*Process).wait(0xc000b2d200?)
	/usr/local/go/src/os/exec_windows.go:28 +0xe6
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:358
os/exec.(*Cmd).Wait(0xc000b2d200)
	/usr/local/go/src/os/exec/exec.go:906 +0x45
os/exec.(*Cmd).Run(0xc000b2d200)
	/usr/local/go/src/os/exec/exec.go:610 +0x2d
k8s.io/minikube/test/integration.Run(0xc000161a00, 0xc000b2d200)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.validateSecondStart({0x379e8d8, 0xc0000f7650}, 0xc000161a00, {0xc0004fa6d8, 0x16}, {0x7ffc53db5f50?, 0xc00150ff60?}, {0x2df5b3?, 0x1c56bc4d61e?}, {0xc00070a300, ...})
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:254 +0xce
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc000161a00)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:154 +0x66
testing.tRunner(0xc000161a00, 0xc00068e380)
	/usr/local/go/src/testing/testing.go:1690 +0xcb
created by testing.(*T).Run in goroutine 2322
	/usr/local/go/src/testing/testing.go:1743 +0x377
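
Goroutine 2424 shows where validateSecondStart spends its time: Cmd.Wait parks in syscall.WaitForSingleObject until the spawned minikube process exits, so a hung "minikube start" holds the subtest until the overall test timeout. A hedged sketch of bounding such a wait with a context — the standard os/exec pattern, not necessarily what helpers_test.go does:

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Minute)
	defer cancel()

	// CommandContext kills the child process if ctx expires before it exits.
	cmd := exec.CommandContext(ctx, "minikube", "start", "-p", "demo")
	if err := cmd.Run(); err != nil {
		fmt.Println("start failed or timed out:", err)
	}
}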

goroutine 2032 [chan receive, 9 minutes]:
testing.(*T).Run(0xc000b156c0, {0x2a6f652?, 0x0?}, 0xc00082e180)
	/usr/local/go/src/testing/testing.go:1751 +0x392
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc000b156c0)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:128 +0xad9
testing.tRunner(0xc000b156c0, 0xc0004c41c0)
	/usr/local/go/src/testing/testing.go:1690 +0xcb
created by testing.(*T).Run in goroutine 2026
	/usr/local/go/src/testing/testing.go:1743 +0x377

goroutine 2442 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc000b908d0, 0x0)
	/usr/local/go/src/runtime/sema.go:587 +0x15d
sync.(*Cond).Wait(0xc000d7bd80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x37b26e0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.1/util/workqueue/queue.go:277 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000b90900)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0018660b0, {0x3760080, 0xc000d6a060}, 0x1, 0xc000078460)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0018660b0, 0x3b9aca00, 0x0, 0x1, 0xc000078460)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2413
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.1/transport/cert_rotation.go:143 +0x1cf

goroutine 2518 [syscall]:
syscall.Syscall(0x10?, 0xc001849b88?, 0x1000000195ac5?, 0x1e?, 0x3?)
	/usr/local/go/src/runtime/syscall_windows.go:457 +0x29
syscall.WaitForSingleObject(0x420, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1140 +0x5d
os.(*Process).wait(0xc000b2d080?)
	/usr/local/go/src/os/exec_windows.go:28 +0xe6
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:358
os/exec.(*Cmd).Wait(0xc000b2d080)
	/usr/local/go/src/os/exec/exec.go:906 +0x45
os/exec.(*Cmd).Run(0xc000b2d080)
	/usr/local/go/src/os/exec/exec.go:610 +0x2d
k8s.io/minikube/test/integration.Run(0xc000b15ba0, 0xc000b2d080)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.validateSecondStart({0x379e8d8, 0xc0000f7ce0}, 0xc000b15ba0, {0xc0004fab70, 0x11}, {0x7ffc53db5f50?, 0xc001849f60?}, {0x2df5b3?, 0x1c604d70d94?}, {0xc0006eed00, ...})
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:254 +0xce
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc000b15ba0)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:154 +0x66
testing.tRunner(0xc000b15ba0, 0xc00079af80)
	/usr/local/go/src/testing/testing.go:1690 +0xcb
created by testing.(*T).Run in goroutine 2341
	/usr/local/go/src/testing/testing.go:1743 +0x377

goroutine 2520 [syscall]:
syscall.Syscall6(0x1f0c45?, 0x0?, 0x8000?, 0xc000000001?, 0xb0?, 0x10?, 0x101001882c6?, 0x199ec313a28?)
	/usr/local/go/src/runtime/syscall_windows.go:463 +0x38
syscall.readFile(0x560, {0xc0015e63d5?, 0x3c2b, 0x249bdf?}, 0xc001a8e8c0?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1019 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:443
syscall.Read(0xc001eb18c8?, {0xc0015e63d5?, 0x8000?, 0x0?})
	/usr/local/go/src/syscall/syscall_windows.go:422 +0x2d
internal/poll.(*FD).Read(0xc001eb18c8, {0xc0015e63d5, 0x3c2b, 0x3c2b})
	/usr/local/go/src/internal/poll/fd_windows.go:424 +0x1b9
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0005a0ed0, {0xc0015e63d5?, 0xc000d7dd50?, 0x3ea5?})
	/usr/local/go/src/os/file.go:124 +0x52
bytes.(*Buffer).ReadFrom(0xc001c08a50, {0x375e660, 0xc000b02350})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x375e7e0, 0xc001c08a50}, {0x375e660, 0xc000b02350}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc000d7de78?, {0x375e7e0, 0xc001c08a50})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0x511b2d0?, {0x375e7e0?, 0xc001c08a50?})
	/usr/local/go/src/os/file.go:253 +0x49
io.copyBuffer({0x375e7e0, 0xc001c08a50}, {0x375e740, 0xc0005a0ed0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:580 +0x34
os/exec.(*Cmd).Start.func2(0xc0006db570?)
	/usr/local/go/src/os/exec/exec.go:733 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2518
	/usr/local/go/src/os/exec/exec.go:732 +0xa25

goroutine 2349 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc000b90a10, 0x1)
	/usr/local/go/src/runtime/sema.go:587 +0x15d
sync.(*Cond).Wait(0xc0015d1d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x37b26e0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.1/util/workqueue/queue.go:277 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000b90a40)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.1/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.1/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.1/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000086be0, {0x3760080, 0xc001986000}, 0x1, 0xc000078460)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.1/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000086be0, 0x3b9aca00, 0x0, 0x1, 0xc000078460)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.1/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.1/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2363
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.1/transport/cert_rotation.go:143 +0x1cf

goroutine 2460 [syscall, 3 minutes]:
syscall.Syscall6(0x1f09dd?, 0x199a68f0a28?, 0xc00184dc41?, 0xc001883f80?, 0x10?, 0x10?, 0x100001882c6?, 0x199ebf01cc8?)
	/usr/local/go/src/runtime/syscall_windows.go:463 +0x38
syscall.readFile(0x670, {0xc0015809f9?, 0x207, 0x249bdf?}, 0xc001867480?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1019 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:443
syscall.Read(0xc001774008?, {0xc0015809f9?, 0x400?, 0x0?})
	/usr/local/go/src/syscall/syscall_windows.go:422 +0x2d
internal/poll.(*FD).Read(0xc001774008, {0xc0015809f9, 0x207, 0x207})
	/usr/local/go/src/internal/poll/fd_windows.go:424 +0x1b9
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0005a0dd8, {0xc0015809f9?, 0xc00184dd50?, 0x6e?})
	/usr/local/go/src/os/file.go:124 +0x52
bytes.(*Buffer).ReadFrom(0xc00163a390, {0x375e660, 0xc000c0a048})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x375e7e0, 0xc00163a390}, {0x375e660, 0xc000c0a048}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc00184de78?, {0x375e7e0, 0xc00163a390})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0x511b2d0?, {0x375e7e0?, 0xc00163a390?})
	/usr/local/go/src/os/file.go:253 +0x49
io.copyBuffer({0x375e7e0, 0xc00163a390}, {0x375e740, 0xc0005a0dd8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:580 +0x34
os/exec.(*Cmd).Start.func2(0xc0006da4d0?)
	/usr/local/go/src/os/exec/exec.go:733 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2459
	/usr/local/go/src/os/exec/exec.go:732 +0xa25
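
Dumps like the one above are typically emitted when the test binary hits its -timeout. The same per-goroutine listing can be produced on demand via runtime/pprof, a minimal sketch:

package main

import (
	"os"
	"runtime/pprof"
)

func main() {
	// debug=2 prints the crash-style stack for every goroutine,
	// matching the format of the dump in this report.
	pprof.Lookup("goroutine").WriteTo(os.Stdout, 2)
}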


Test pass (171/214)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 15.27
4 TestDownloadOnly/v1.20.0/preload-exists 0.05
7 TestDownloadOnly/v1.20.0/kubectl 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.25
9 TestDownloadOnly/v1.20.0/DeleteAll 0.57
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.82
12 TestDownloadOnly/v1.32.1/json-events 9.87
13 TestDownloadOnly/v1.32.1/preload-exists 0
16 TestDownloadOnly/v1.32.1/kubectl 0
17 TestDownloadOnly/v1.32.1/LogsDuration 0.21
18 TestDownloadOnly/v1.32.1/DeleteAll 0.8
19 TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds 0.56
21 TestBinaryMirror 6.35
22 TestOffline 489.15
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.24
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.25
27 TestAddons/Setup 410.09
29 TestAddons/serial/Volcano 63
31 TestAddons/serial/GCPAuth/Namespaces 0.29
32 TestAddons/serial/GCPAuth/FakeCredentials 11.38
35 TestAddons/parallel/Registry 31.88
36 TestAddons/parallel/Ingress 59.94
37 TestAddons/parallel/InspektorGadget 25.02
38 TestAddons/parallel/MetricsServer 20.44
40 TestAddons/parallel/CSI 65.32
41 TestAddons/parallel/Headlamp 53.34
42 TestAddons/parallel/CloudSpanner 21.32
43 TestAddons/parallel/LocalPath 82
44 TestAddons/parallel/NvidiaDevicePlugin 20.08
45 TestAddons/parallel/Yakd 25.42
47 TestAddons/StoppedEnableDisable 50.06
48 TestCertOptions 504.58
49 TestCertExpiration 892.92
50 TestDockerFlags 367.41
51 TestForceSystemdFlag 444.95
52 TestForceSystemdEnv 425.89
59 TestErrorSpam/start 16.64
60 TestErrorSpam/status 33.9
61 TestErrorSpam/pause 21.24
62 TestErrorSpam/unpause 21.28
63 TestErrorSpam/stop 58.91
66 TestFunctional/serial/CopySyncFile 0.04
67 TestFunctional/serial/StartWithProxy 212.22
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 117.45
70 TestFunctional/serial/KubeContext 0.11
71 TestFunctional/serial/KubectlGetPods 0.2
74 TestFunctional/serial/CacheCmd/cache/add_remote 24.26
75 TestFunctional/serial/CacheCmd/cache/add_local 9.53
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.24
77 TestFunctional/serial/CacheCmd/cache/list 0.23
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 8.54
79 TestFunctional/serial/CacheCmd/cache/cache_reload 33.27
80 TestFunctional/serial/CacheCmd/cache/delete 0.48
81 TestFunctional/serial/MinikubeKubectlCmd 0.45
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 2.36
83 TestFunctional/serial/ExtraConfig 120.82
84 TestFunctional/serial/ComponentHealth 0.18
85 TestFunctional/serial/LogsCmd 7.86
86 TestFunctional/serial/LogsFileCmd 9.88
87 TestFunctional/serial/InvalidService 19.29
89 TestFunctional/parallel/ConfigCmd 2.04
93 TestFunctional/parallel/StatusCmd 37.78
97 TestFunctional/parallel/ServiceCmdConnect 41.44
98 TestFunctional/parallel/AddonsCmd 0.65
99 TestFunctional/parallel/PersistentVolumeClaim 44.92
101 TestFunctional/parallel/SSHCmd 20.06
102 TestFunctional/parallel/CpCmd 51.9
103 TestFunctional/parallel/MySQL 55.41
104 TestFunctional/parallel/FileSync 9.28
105 TestFunctional/parallel/CertSync 55.35
109 TestFunctional/parallel/NodeLabels 0.17
111 TestFunctional/parallel/NonActiveRuntimeDisabled 9.08
113 TestFunctional/parallel/License 1.24
115 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 8.56
116 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
118 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 15.59
124 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
125 TestFunctional/parallel/ServiceCmd/DeployApp 20.33
126 TestFunctional/parallel/ServiceCmd/List 12.51
127 TestFunctional/parallel/ProfileCmd/profile_not_create 12.9
128 TestFunctional/parallel/ServiceCmd/JSONOutput 13.1
129 TestFunctional/parallel/ProfileCmd/profile_list 13.1
131 TestFunctional/parallel/ProfileCmd/profile_json_output 13.2
133 TestFunctional/parallel/DockerEnv/powershell 39.24
135 TestFunctional/parallel/UpdateContextCmd/no_changes 2.35
136 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 2.27
137 TestFunctional/parallel/UpdateContextCmd/no_clusters 2.33
138 TestFunctional/parallel/Version/short 0.25
139 TestFunctional/parallel/Version/components 7.29
140 TestFunctional/parallel/ImageCommands/ImageListShort 7.28
141 TestFunctional/parallel/ImageCommands/ImageListTable 7.02
142 TestFunctional/parallel/ImageCommands/ImageListJson 7.12
143 TestFunctional/parallel/ImageCommands/ImageListYaml 7.33
144 TestFunctional/parallel/ImageCommands/ImageBuild 26.04
145 TestFunctional/parallel/ImageCommands/Setup 3.44
146 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 16.05
147 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 14
148 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 14.75
149 TestFunctional/parallel/ImageCommands/ImageSaveToFile 6.8
150 TestFunctional/parallel/ImageCommands/ImageRemove 13.2
151 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 13.56
152 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 7.12
153 TestFunctional/delete_echo-server_images 0.18
154 TestFunctional/delete_my-image_image 0.08
155 TestFunctional/delete_minikube_cached_images 0.07
160 TestMultiControlPlane/serial/StartCluster 673.32
161 TestMultiControlPlane/serial/DeployApp 14.2
163 TestMultiControlPlane/serial/AddWorkerNode 242.56
164 TestMultiControlPlane/serial/NodeLabels 0.17
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 44.92
166 TestMultiControlPlane/serial/CopyFile 583
167 TestMultiControlPlane/serial/StopSecondaryNode 71.46
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 35.03
172 TestImageBuild/serial/Setup 183.65
173 TestImageBuild/serial/NormalBuild 10.03
174 TestImageBuild/serial/BuildWithBuildArg 8.46
175 TestImageBuild/serial/BuildWithDockerIgnore 7.73
176 TestImageBuild/serial/BuildWithSpecifiedDockerfile 7.82
180 TestJSONOutput/start/Command 194.07
181 TestJSONOutput/start/Audit 0
183 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
186 TestJSONOutput/pause/Command 7.38
187 TestJSONOutput/pause/Audit 0
189 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/unpause/Command 7.44
193 TestJSONOutput/unpause/Audit 0
195 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/stop/Command 33.42
199 TestJSONOutput/stop/Audit 0
201 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
202 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
203 TestErrorJSONOutput 0.94
208 TestMainNoArgs 0.2
209 TestMinikubeProfile 502.54
212 TestMountStart/serial/StartWithMountFirst 139.33
213 TestMountStart/serial/VerifyMountFirst 8.52
214 TestMountStart/serial/StartWithMountSecond 139.11
215 TestMountStart/serial/VerifyMountSecond 8.37
216 TestMountStart/serial/DeleteFirst 27.98
217 TestMountStart/serial/VerifyMountPostDelete 8.61
218 TestMountStart/serial/Stop 27.69
219 TestMountStart/serial/RestartStopped 106.36
220 TestMountStart/serial/VerifyMountPostStop 8.49
223 TestMultiNode/serial/FreshStart2Nodes 440.16
224 TestMultiNode/serial/DeployApp2Nodes 8.63
226 TestMultiNode/serial/AddNode 222.15
227 TestMultiNode/serial/MultiNodeLabels 0.17
228 TestMultiNode/serial/ProfileList 32.91
229 TestMultiNode/serial/CopyFile 331.17
230 TestMultiNode/serial/StopNode 70.65
231 TestMultiNode/serial/StartAfterStop 176.47
236 TestPreload 497.56
237 TestScheduledStopWindows 310.46
242 TestRunningBinaryUpgrade 900.41
244 TestKubernetesUpgrade 1219.55
263 TestPause/serial/Start 191.76
267 TestNoKubernetes/serial/StartNoK8sWithVersion 0.24
269 TestPause/serial/SecondStartNoReconfiguration 312.59
270 TestPause/serial/Pause 7.25
271 TestPause/serial/VerifyStatus 11.01
272 TestPause/serial/Unpause 7.21
273 TestPause/serial/PauseAgain 7.22
274 TestPause/serial/DeletePaused 45.3
275 TestPause/serial/VerifyDeletedResources 18.77
276 TestStoppedBinaryUpgrade/Setup 0.88
277 TestStoppedBinaryUpgrade/Upgrade 843.02
278 TestStoppedBinaryUpgrade/MinikubeLogs 8.76
TestDownloadOnly/v1.20.0/json-events (15.27s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-052300 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-052300 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperv: (15.2714364s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (15.27s)

TestDownloadOnly/v1.20.0/preload-exists (0.05s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0210 10:21:44.480283   11764 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
I0210 10:21:44.530500   11764 preload.go:146] Found local preload: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.05s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
--- PASS: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.25s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-052300
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-052300: exit status 85 (246.3113ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-052300 | minikube5\jenkins | v1.35.0 | 10 Feb 25 10:21 UTC |          |
	|         | -p download-only-052300        |                      |                   |         |                     |          |
	|         | --force --alsologtostderr      |                      |                   |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |                   |         |                     |          |
	|         | --container-runtime=docker     |                      |                   |         |                     |          |
	|         | --driver=hyperv                |                      |                   |         |                     |          |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/10 10:21:29
	Running on machine: minikube5
	Binary: Built with gc go1.23.4 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0210 10:21:29.300373   13104 out.go:345] Setting OutFile to fd 700 ...
	I0210 10:21:29.350937   13104 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 10:21:29.350937   13104 out.go:358] Setting ErrFile to fd 704...
	I0210 10:21:29.350937   13104 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0210 10:21:29.363744   13104 root.go:314] Error reading config file at C:\Users\jenkins.minikube5\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube5\minikube-integration\.minikube\config\config.json: The system cannot find the path specified.
	I0210 10:21:29.372905   13104 out.go:352] Setting JSON to true
	I0210 10:21:29.376364   13104 start.go:129] hostinfo: {"hostname":"minikube5","uptime":184228,"bootTime":1738998660,"procs":185,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.5440 Build 19045.5440","kernelVersion":"10.0.19045.5440 Build 19045.5440","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0210 10:21:29.376893   13104 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0210 10:21:29.382483   13104 out.go:97] [download-only-052300] minikube v1.35.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5440 Build 19045.5440
	I0210 10:21:29.383287   13104 notify.go:220] Checking for updates...
	W0210 10:21:29.383287   13104 preload.go:293] Failed to list preload files: open C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball: The system cannot find the file specified.
	I0210 10:21:29.386260   13104 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0210 10:21:29.388953   13104 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0210 10:21:29.391912   13104 out.go:169] MINIKUBE_LOCATION=20385
	I0210 10:21:29.394665   13104 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0210 10:21:29.399229   13104 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0210 10:21:29.399931   13104 driver.go:394] Setting default libvirt URI to qemu:///system
	I0210 10:21:34.285817   13104 out.go:97] Using the hyperv driver based on user configuration
	I0210 10:21:34.285817   13104 start.go:297] selected driver: hyperv
	I0210 10:21:34.285817   13104 start.go:901] validating driver "hyperv" against <nil>
	I0210 10:21:34.286397   13104 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0210 10:21:34.330642   13104 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=65534MB, container=0MB
	I0210 10:21:34.331550   13104 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0210 10:21:34.331882   13104 cni.go:84] Creating CNI manager for ""
	I0210 10:21:34.331882   13104 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0210 10:21:34.332125   13104 start.go:340] cluster config:
	{Name:download-only-052300 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-052300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0210 10:21:34.332999   13104 iso.go:125] acquiring lock: {Name:mkae681ee414e9275e9685c6bbf5080b17ead976 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0210 10:21:34.335607   13104 out.go:97] Downloading VM boot image ...
	I0210 10:21:34.336247   13104 download.go:108] Downloading: https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso?checksum=file:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso.sha256 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\iso\amd64\minikube-v1.35.0-amd64.iso
	I0210 10:21:37.953334   13104 out.go:97] Starting "download-only-052300" primary control-plane node in "download-only-052300" cluster
	I0210 10:21:37.953334   13104 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0210 10:21:38.002396   13104 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0210 10:21:38.002479   13104 cache.go:56] Caching tarball of preloaded images
	I0210 10:21:38.002546   13104 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0210 10:21:38.005494   13104 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0210 10:21:38.005589   13104 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0210 10:21:38.069411   13104 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0210 10:21:40.965988   13104 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0210 10:21:40.967025   13104 preload.go:254] verifying checksum of C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0210 10:21:41.861693   13104 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0210 10:21:41.862723   13104 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\download-only-052300\config.json ...
	I0210 10:21:41.863260   13104 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\download-only-052300\config.json: {Name:mk16b77354b092f90f24181766e0c80fc17565c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0210 10:21:41.863935   13104 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0210 10:21:41.864813   13104 download.go:108] Downloading: https://dl.k8s.io/release/v1.20.0/bin/windows/amd64/kubectl.exe?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/windows/amd64/kubectl.exe.sha256 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\windows\amd64\v1.20.0/kubectl.exe
	
	
	* The control-plane node download-only-052300 host does not exist
	  To start a cluster, run: "minikube start -p download-only-052300"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.25s)
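
The download.go lines in the log above fetch artifacts with a "?checksum=..." suffix, which the downloader uses to verify the file once it lands on disk. A self-contained sketch of the same idea for a SHA-256 digest; verifySHA256 is an illustrative helper, not minikube's API, and the path/digest are placeholders:

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

func verifySHA256(path, wantHex string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()

	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != wantHex {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantHex)
	}
	return nil
}

func main() {
	// Placeholder file and digest, standing in for the ISO fetched above.
	if err := verifySHA256("minikube-v1.35.0-amd64.iso", "<expected sha256 hex>"); err != nil {
		fmt.Println(err)
	}
}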

TestDownloadOnly/v1.20.0/DeleteAll (0.57s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.57s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.82s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-052300
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.82s)

TestDownloadOnly/v1.32.1/json-events (9.87s)

=== RUN   TestDownloadOnly/v1.32.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-609200 --force --alsologtostderr --kubernetes-version=v1.32.1 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-609200 --force --alsologtostderr --kubernetes-version=v1.32.1 --container-runtime=docker --driver=hyperv: (9.8735436s)
--- PASS: TestDownloadOnly/v1.32.1/json-events (9.87s)

TestDownloadOnly/v1.32.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.32.1/preload-exists
I0210 10:21:56.044331   11764 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime docker
I0210 10:21:56.045801   11764 preload.go:146] Found local preload: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.32.1/preload-exists (0.00s)

TestDownloadOnly/v1.32.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.32.1/kubectl
--- PASS: TestDownloadOnly/v1.32.1/kubectl (0.00s)

TestDownloadOnly/v1.32.1/LogsDuration (0.21s)

=== RUN   TestDownloadOnly/v1.32.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-609200
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-609200: exit status 85 (214.826ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-052300 | minikube5\jenkins | v1.35.0 | 10 Feb 25 10:21 UTC |                     |
	|         | -p download-only-052300        |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr      |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |                   |         |                     |                     |
	|         | --container-runtime=docker     |                      |                   |         |                     |                     |
	|         | --driver=hyperv                |                      |                   |         |                     |                     |
	| delete  | --all                          | minikube             | minikube5\jenkins | v1.35.0 | 10 Feb 25 10:21 UTC | 10 Feb 25 10:21 UTC |
	| delete  | -p download-only-052300        | download-only-052300 | minikube5\jenkins | v1.35.0 | 10 Feb 25 10:21 UTC | 10 Feb 25 10:21 UTC |
	| start   | -o=json --download-only        | download-only-609200 | minikube5\jenkins | v1.35.0 | 10 Feb 25 10:21 UTC |                     |
	|         | -p download-only-609200        |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr      |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.32.1   |                      |                   |         |                     |                     |
	|         | --container-runtime=docker     |                      |                   |         |                     |                     |
	|         | --driver=hyperv                |                      |                   |         |                     |                     |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/02/10 10:21:46
	Running on machine: minikube5
	Binary: Built with gc go1.23.4 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0210 10:21:46.257028    1464 out.go:345] Setting OutFile to fd 796 ...
	I0210 10:21:46.303707    1464 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 10:21:46.303707    1464 out.go:358] Setting ErrFile to fd 820...
	I0210 10:21:46.303707    1464 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 10:21:46.322699    1464 out.go:352] Setting JSON to true
	I0210 10:21:46.325660    1464 start.go:129] hostinfo: {"hostname":"minikube5","uptime":184245,"bootTime":1738998660,"procs":186,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.5440 Build 19045.5440","kernelVersion":"10.0.19045.5440 Build 19045.5440","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0210 10:21:46.325735    1464 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0210 10:21:46.330022    1464 out.go:97] [download-only-609200] minikube v1.35.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5440 Build 19045.5440
	I0210 10:21:46.330428    1464 notify.go:220] Checking for updates...
	I0210 10:21:46.331737    1464 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0210 10:21:46.334715    1464 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0210 10:21:46.337082    1464 out.go:169] MINIKUBE_LOCATION=20385
	I0210 10:21:46.339121    1464 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0210 10:21:46.343694    1464 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0210 10:21:46.343847    1464 driver.go:394] Setting default libvirt URI to qemu:///system
	
	
	* The control-plane node download-only-609200 host does not exist
	  To start a cluster, run: "minikube start -p download-only-609200"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.32.1/LogsDuration (0.21s)

TestDownloadOnly/v1.32.1/DeleteAll (0.8s)

=== RUN   TestDownloadOnly/v1.32.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
--- PASS: TestDownloadOnly/v1.32.1/DeleteAll (0.80s)

TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds (0.56s)

=== RUN   TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-609200
--- PASS: TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds (0.56s)

TestBinaryMirror (6.35s)

=== RUN   TestBinaryMirror
I0210 10:21:58.912374   11764 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.1/bin/windows/amd64/kubectl.exe?checksum=file:https://dl.k8s.io/release/v1.32.1/bin/windows/amd64/kubectl.exe.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p binary-mirror-045100 --alsologtostderr --binary-mirror http://127.0.0.1:55059 --driver=hyperv
aaa_download_only_test.go:314: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p binary-mirror-045100 --alsologtostderr --binary-mirror http://127.0.0.1:55059 --driver=hyperv: (5.7496741s)
helpers_test.go:175: Cleaning up "binary-mirror-045100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p binary-mirror-045100
--- PASS: TestBinaryMirror (6.35s)
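
The binary-mirror path exercised above can be reproduced by hand. A minimal sketch, assuming a local HTTP server already serving the Kubernetes release binaries (the profile name and port here are illustrative, not the test's):

  out/minikube-windows-amd64.exe start --download-only -p mirror-demo --binary-mirror http://127.0.0.1:9000 --driver=hyperv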

TestOffline (489.15s)

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe start -p offline-docker-202100 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperv
aab_offline_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe start -p offline-docker-202100 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperv: (7m27.6505542s)
helpers_test.go:175: Cleaning up "offline-docker-202100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p offline-docker-202100
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p offline-docker-202100: (41.5014041s)
--- PASS: TestOffline (489.15s)
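
TestOffline verifies a start that cannot pull from remote registries. Outside the test, images can be staged ahead of time with the cache subcommand (exercised later in this report by TestFunctional/serial/CacheCmd); the profile name below is illustrative:

  out/minikube-windows-amd64.exe cache add registry.k8s.io/pause:3.3
  out/minikube-windows-amd64.exe start -p offline-demo --memory=2048 --wait=true --driver=hyperv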

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.24s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-550800
addons_test.go:939: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons enable dashboard -p addons-550800: exit status 85 (236.5339ms)
-- stdout --
	* Profile "addons-550800" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-550800"
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.24s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.25s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-550800
addons_test.go:950: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons disable dashboard -p addons-550800: exit status 85 (253.3044ms)
-- stdout --
	* Profile "addons-550800" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-550800"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.25s)

TestAddons/Setup (410.09s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-windows-amd64.exe start -p addons-550800 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=hyperv --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-windows-amd64.exe start -p addons-550800 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=hyperv --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (6m50.0852089s)
--- PASS: TestAddons/Setup (410.09s)
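
The one-shot start above stacks many --addons flags; the same addons can also be toggled individually on a running profile with the addons subcommand, e.g. (addon name is one of those enabled above):

  out/minikube-windows-amd64.exe -p addons-550800 addons enable registry
  out/minikube-windows-amd64.exe -p addons-550800 addons list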

TestAddons/serial/Volcano (63s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:823: volcano-controller stabilized in 27.6003ms
addons_test.go:807: volcano-scheduler stabilized in 27.6799ms
addons_test.go:815: volcano-admission stabilized in 27.6799ms
addons_test.go:829: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-75fdd99bcf-h6v2g" [d5a675e9-d510-43ae-af79-f1b292d206f8] Running
addons_test.go:829: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.0047188s
addons_test.go:833: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-75d8f6b5c-df7l6" [c1dc36c6-cf6a-4c40-8faa-131740c50d9b] Running
addons_test.go:833: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.0058013s
addons_test.go:837: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-86bdc5c9c-mmwz5" [86e9a2a8-d9e3-46ad-9ddc-f7f3e50f9b27] Running
addons_test.go:837: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 6.0050397s
addons_test.go:842: (dbg) Run:  kubectl --context addons-550800 delete -n volcano-system job volcano-admission-init
addons_test.go:848: (dbg) Run:  kubectl --context addons-550800 create -f testdata\vcjob.yaml
addons_test.go:856: (dbg) Run:  kubectl --context addons-550800 get vcjob -n my-volcano
addons_test.go:874: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [5c2ce3b2-92dd-4fae-b8f4-08504dbede27] Pending
helpers_test.go:344: "test-job-nginx-0" [5c2ce3b2-92dd-4fae-b8f4-08504dbede27] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [5c2ce3b2-92dd-4fae-b8f4-08504dbede27] Running
addons_test.go:874: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 21.0043483s
addons_test.go:992: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-550800 addons disable volcano --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-windows-amd64.exe -p addons-550800 addons disable volcano --alsologtostderr -v=1: (24.2004592s)
--- PASS: TestAddons/serial/Volcano (63.00s)
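
A manual spot-check equivalent to the stabilization waits above is to list the Volcano control-plane pods by the same label selectors the test polls, and the submitted vcjob:

  kubectl --context addons-550800 get pods -n volcano-system -l app=volcano-scheduler
  kubectl --context addons-550800 get vcjob -n my-volcano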

TestAddons/serial/GCPAuth/Namespaces (0.29s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-550800 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-550800 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.29s)

TestAddons/serial/GCPAuth/FakeCredentials (11.38s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-550800 create -f testdata\busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-550800 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [d912b9db-e891-485b-9b9d-cb745fb30782] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [d912b9db-e891-485b-9b9d-cb745fb30782] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.0058683s
addons_test.go:633: (dbg) Run:  kubectl --context addons-550800 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-550800 describe sa gcp-auth-test
addons_test.go:659: (dbg) Run:  kubectl --context addons-550800 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:683: (dbg) Run:  kubectl --context addons-550800 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (11.38s)
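
The gcp-auth checks above boil down to asserting that the webhook injected credentials into the pod; the same check can be made in one shot (pod and context names as in the test):

  kubectl --context addons-550800 exec busybox -- printenv GOOGLE_APPLICATION_CREDENTIALS GOOGLE_CLOUD_PROJECT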

TestAddons/parallel/Registry (31.88s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 8.7816ms
I0210 10:30:37.716977   11764 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0210 10:30:37.716977   11764 kapi.go:107] duration metric: took 8.4891ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6c88467877-h2jgm" [1bbc79e2-2352-4014-b083-90e34955b21a] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.0036126s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-2l49r" [93c96f96-bdc6-47df-8760-968e85b596ff] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.0151189s
addons_test.go:331: (dbg) Run:  kubectl --context addons-550800 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-550800 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-550800 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.4493177s)
addons_test.go:350: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-550800 ip
addons_test.go:350: (dbg) Done: out/minikube-windows-amd64.exe -p addons-550800 ip: (2.4656181s)
2025/02/10 10:30:55 [DEBUG] GET http://172.29.128.211:5000
addons_test.go:992: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-550800 addons disable registry --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-windows-amd64.exe -p addons-550800 addons disable registry --alsologtostderr -v=1: (13.7504556s)
--- PASS: TestAddons/parallel/Registry (31.88s)
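
The DEBUG GET above shows the registry addon answering on port 5000 of the cluster IP. A hedged sketch of pushing a local image through it (image name illustrative; the IP is whatever "minikube ip" returns, and the host's Docker daemon may need that address listed under insecure-registries):

  docker tag alpine 172.29.128.211:5000/alpine
  docker push 172.29.128.211:5000/alpine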

TestAddons/parallel/Ingress (59.94s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-550800 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-550800 replace --force -f testdata\nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-550800 replace --force -f testdata\nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [2477e1a8-1e91-425e-9b16-2135cf597181] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [2477e1a8-1e91-425e-9b16-2135cf597181] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 14.0062469s
I0210 10:31:59.832437   11764 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-550800 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe -p addons-550800 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": (8.5972047s)
addons_test.go:286: (dbg) Run:  kubectl --context addons-550800 replace --force -f testdata\ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-550800 ip
addons_test.go:291: (dbg) Done: out/minikube-windows-amd64.exe -p addons-550800 ip: (2.089092s)
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 172.29.128.211
addons_test.go:992: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-550800 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-windows-amd64.exe -p addons-550800 addons disable ingress-dns --alsologtostderr -v=1: (13.4199185s)
addons_test.go:992: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-550800 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-windows-amd64.exe -p addons-550800 addons disable ingress --alsologtostderr -v=1: (20.0838442s)
--- PASS: TestAddons/parallel/Ingress (59.94s)
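
The in-VM curl above can also be run from the host against the address returned by "minikube ip", with the Host header selecting the ingress rule; this assumes the Hyper-V network makes the VM reachable from the host, as the nslookup step above suggests:

  curl -s http://172.29.128.211/ -H "Host: nginx.example.com"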

TestAddons/parallel/InspektorGadget (25.02s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-nrthf" [5f71eaa1-0cd6-421f-aafa-33b12d737cd4] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.0064519s
addons_test.go:992: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-550800 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-windows-amd64.exe -p addons-550800 addons disable inspektor-gadget --alsologtostderr -v=1: (19.0083474s)
--- PASS: TestAddons/parallel/InspektorGadget (25.02s)

TestAddons/parallel/MetricsServer (20.44s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 8.0094ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7fbb699795-xh2nb" [a2f0eea7-f4db-4ee5-bda1-6772afb96827] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.0051773s
addons_test.go:402: (dbg) Run:  kubectl --context addons-550800 top pods -n kube-system
addons_test.go:992: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-550800 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-windows-amd64.exe -p addons-550800 addons disable metrics-server --alsologtostderr -v=1: (13.7366803s)
--- PASS: TestAddons/parallel/MetricsServer (20.44s)

TestAddons/parallel/CSI (65.32s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:488: csi-hostpath-driver pods stabilized in 8.4891ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-550800 create -f testdata\csi-hostpath-driver\pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-550800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-550800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-550800 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-550800 create -f testdata\csi-hostpath-driver\pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [9e9f2b91-7478-4fc2-bda9-51fd0eabf0f0] Pending
helpers_test.go:344: "task-pv-pod" [9e9f2b91-7478-4fc2-bda9-51fd0eabf0f0] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [9e9f2b91-7478-4fc2-bda9-51fd0eabf0f0] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.0064549s
addons_test.go:511: (dbg) Run:  kubectl --context addons-550800 create -f testdata\csi-hostpath-driver\snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-550800 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-550800 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-550800 delete pod task-pv-pod
addons_test.go:521: (dbg) Done: kubectl --context addons-550800 delete pod task-pv-pod: (1.1097028s)
addons_test.go:527: (dbg) Run:  kubectl --context addons-550800 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-550800 create -f testdata\csi-hostpath-driver\pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-550800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-550800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-550800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-550800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-550800 create -f testdata\csi-hostpath-driver\pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [59e16df0-8927-4b86-9861-ce5726b9bd6d] Pending
helpers_test.go:344: "task-pv-pod-restore" [59e16df0-8927-4b86-9861-ce5726b9bd6d] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [59e16df0-8927-4b86-9861-ce5726b9bd6d] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.0063976s
addons_test.go:553: (dbg) Run:  kubectl --context addons-550800 delete pod task-pv-pod-restore
addons_test.go:553: (dbg) Done: kubectl --context addons-550800 delete pod task-pv-pod-restore: (1.0214533s)
addons_test.go:557: (dbg) Run:  kubectl --context addons-550800 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-550800 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-550800 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-windows-amd64.exe -p addons-550800 addons disable volumesnapshots --alsologtostderr -v=1: (15.3652133s)
addons_test.go:992: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-550800 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-windows-amd64.exe -p addons-550800 addons disable csi-hostpath-driver --alsologtostderr -v=1: (21.895723s)
--- PASS: TestAddons/parallel/CSI (65.32s)
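
The jsonpath polling loops above (helpers_test.go:394/419) can be collapsed into kubectl's built-in wait when checking by hand; a sketch against the same PVC the test creates:

  kubectl --context addons-550800 wait --for=jsonpath="{.status.phase}"=Bound pvc/hpvc --timeout=6m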

TestAddons/parallel/Headlamp (53.34s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-windows-amd64.exe addons enable headlamp -p addons-550800 --alsologtostderr -v=1
addons_test.go:747: (dbg) Done: out/minikube-windows-amd64.exe addons enable headlamp -p addons-550800 --alsologtostderr -v=1: (16.0634258s)
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5d4b5d7bd6-5b892" [f5680b56-61a4-499c-b091-6c189f4bcb49] Pending
helpers_test.go:344: "headlamp-5d4b5d7bd6-5b892" [f5680b56-61a4-499c-b091-6c189f4bcb49] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5d4b5d7bd6-5b892" [f5680b56-61a4-499c-b091-6c189f4bcb49] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 18.00534s
addons_test.go:992: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-550800 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-windows-amd64.exe -p addons-550800 addons disable headlamp --alsologtostderr -v=1: (19.2664804s)
--- PASS: TestAddons/parallel/Headlamp (53.34s)

TestAddons/parallel/CloudSpanner (21.32s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5d76cffbc-2ztcc" [ccb730ac-c89f-45f9-9c95-6075c481cfb6] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.0133581s
addons_test.go:992: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-550800 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-windows-amd64.exe -p addons-550800 addons disable cloud-spanner --alsologtostderr -v=1: (15.2890775s)
--- PASS: TestAddons/parallel/CloudSpanner (21.32s)

TestAddons/parallel/LocalPath (82s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-550800 apply -f testdata\storage-provisioner-rancher\pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-550800 apply -f testdata\storage-provisioner-rancher\pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-550800 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-550800 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-550800 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-550800 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-550800 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-550800 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-550800 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [bff008e3-e6a8-4059-9186-e8316093eda5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [bff008e3-e6a8-4059-9186-e8316093eda5] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [bff008e3-e6a8-4059-9186-e8316093eda5] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.0040926s
addons_test.go:906: (dbg) Run:  kubectl --context addons-550800 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-550800 ssh "cat /opt/local-path-provisioner/pvc-66b653f5-36ec-472f-b558-27dd47fd5830_default_test-pvc/file1"
addons_test.go:915: (dbg) Done: out/minikube-windows-amd64.exe -p addons-550800 ssh "cat /opt/local-path-provisioner/pvc-66b653f5-36ec-472f-b558-27dd47fd5830_default_test-pvc/file1": (9.6714989s)
addons_test.go:927: (dbg) Run:  kubectl --context addons-550800 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-550800 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-550800 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-windows-amd64.exe -p addons-550800 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (59.829452s)
--- PASS: TestAddons/parallel/LocalPath (82.00s)

TestAddons/parallel/NvidiaDevicePlugin (20.08s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-k9wv8" [f35f20e5-98ea-459c-b1cc-f53b65dd088d] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.005202s
addons_test.go:992: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-550800 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-windows-amd64.exe -p addons-550800 addons disable nvidia-device-plugin --alsologtostderr -v=1: (14.0758758s)
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (20.08s)

TestAddons/parallel/Yakd (25.42s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
I0210 10:30:37.708567   11764 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-575dd5996b-59ncm" [e5584fca-c07c-48c8-9030-fcc925218b1d] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.0050305s
addons_test.go:992: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-550800 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-windows-amd64.exe -p addons-550800 addons disable yakd --alsologtostderr -v=1: (19.4065463s)
--- PASS: TestAddons/parallel/Yakd (25.42s)

TestAddons/StoppedEnableDisable (50.06s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-windows-amd64.exe stop -p addons-550800
addons_test.go:170: (dbg) Done: out/minikube-windows-amd64.exe stop -p addons-550800: (38.7399791s)
addons_test.go:174: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-550800
addons_test.go:174: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p addons-550800: (4.50362s)
addons_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-550800
addons_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe addons disable dashboard -p addons-550800: (4.1977312s)
addons_test.go:183: (dbg) Run:  out/minikube-windows-amd64.exe addons disable gvisor -p addons-550800
addons_test.go:183: (dbg) Done: out/minikube-windows-amd64.exe addons disable gvisor -p addons-550800: (2.6137358s)
--- PASS: TestAddons/StoppedEnableDisable (50.06s)

TestCertOptions (504.58s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-options-462500 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperv
E0210 13:03:55.714665   11764 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-550800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0210 13:04:42.474185   11764 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-970000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
cert_options_test.go:49: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-options-462500 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperv: (7m15.7309276s)
cert_options_test.go:60: (dbg) Run:  out/minikube-windows-amd64.exe -p cert-options-462500 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Done: out/minikube-windows-amd64.exe -p cert-options-462500 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": (9.250678s)
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-462500 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p cert-options-462500 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Done: out/minikube-windows-amd64.exe ssh -p cert-options-462500 -- "sudo cat /etc/kubernetes/admin.conf": (10.1492299s)
helpers_test.go:175: Cleaning up "cert-options-462500" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-options-462500
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-options-462500: (49.3080727s)
--- PASS: TestCertOptions (504.58s)
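
To eyeball the extra SANs and port that --apiserver-ips/--apiserver-names/--apiserver-port added, the openssl call from the test can be narrowed down (the grep flags assume the guest VM's grep supports -A):

  out/minikube-windows-amd64.exe -p cert-options-462500 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt | grep -A1 'Subject Alternative Name'"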

TestCertExpiration (892.92s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-504500 --memory=2048 --cert-expiration=3m --driver=hyperv
cert_options_test.go:123: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-504500 --memory=2048 --cert-expiration=3m --driver=hyperv: (6m24.9887473s)
cert_options_test.go:131: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-504500 --memory=2048 --cert-expiration=8760h --driver=hyperv
cert_options_test.go:131: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-504500 --memory=2048 --cert-expiration=8760h --driver=hyperv: (4m47.0132944s)
helpers_test.go:175: Cleaning up "cert-expiration-504500" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-expiration-504500
E0210 13:14:42.480903   11764 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-970000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-expiration-504500: (40.9150584s)
--- PASS: TestCertExpiration (892.92s)
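
The two starts above first issue 3-minute certificates and then re-issue them with an 8760h (one-year) lifetime; the resulting expiry can be read directly off the apiserver certificate:

  out/minikube-windows-amd64.exe -p cert-expiration-504500 ssh "openssl x509 -enddate -noout -in /var/lib/minikube/certs/apiserver.crt"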

TestDockerFlags (367.41s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-flags-773400 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperv
docker_test.go:51: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-flags-773400 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperv: (5m9.6042655s)
docker_test.go:56: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-773400 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-773400 ssh "sudo systemctl show docker --property=Environment --no-pager": (8.9626691s)
docker_test.go:67: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-773400 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-773400 ssh "sudo systemctl show docker --property=ExecStart --no-pager": (8.9551123s)
helpers_test.go:175: Cleaning up "docker-flags-773400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-flags-773400
E0210 12:53:55.707588   11764 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-550800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-flags-773400: (39.8836885s)
--- PASS: TestDockerFlags (367.41s)
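
The two systemctl queries above are where the flags are asserted: --docker-env values should surface under Environment= and --docker-opt values as dockerd arguments in ExecStart. Roughly, the expected shapes (illustrative, elided with "..." here) are:

  Environment=FOO=BAR BAZ=BAT
  ExecStart=... --debug --icc=true ...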

TestForceSystemdFlag (444.95s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-flag-375600 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperv
docker_test.go:91: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-flag-375600 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperv: (6m30.6804509s)
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-flag-375600 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-flag-375600 ssh "docker info --format {{.CgroupDriver}}": (9.198926s)
helpers_test.go:175: Cleaning up "force-systemd-flag-375600" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-flag-375600
E0210 12:59:42.471396   11764 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-970000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-flag-375600: (45.0707337s)
--- PASS: TestForceSystemdFlag (444.95s)
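
The docker info probe above is the actual assertion: with --force-systemd the expected output is the single word "systemd" (without the flag, the cgroup driver may come out as cgroupfs):

  out/minikube-windows-amd64.exe -p force-systemd-flag-375600 ssh "docker info --format {{.CgroupDriver}}"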

TestForceSystemdEnv (425.89s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-env-565300 --memory=2048 --alsologtostderr -v=5 --driver=hyperv
docker_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-env-565300 --memory=2048 --alsologtostderr -v=5 --driver=hyperv: (6m9.5374583s)
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-env-565300 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-env-565300 ssh "docker info --format {{.CgroupDriver}}": (9.1383535s)
helpers_test.go:175: Cleaning up "force-systemd-env-565300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-env-565300
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-env-565300: (47.2153571s)
--- PASS: TestForceSystemdEnv (425.89s)

TestErrorSpam/start (16.64s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-637900 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-637900 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-637900 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-637900 start --dry-run: (5.4871882s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-637900 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-637900 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-637900 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-637900 start --dry-run: (5.5541691s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-637900 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-637900 start --dry-run
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-637900 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-637900 start --dry-run: (5.5953562s)
--- PASS: TestErrorSpam/start (16.64s)

TestErrorSpam/status (33.9s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-637900 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-637900 status
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-637900 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-637900 status: (11.8074516s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-637900 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-637900 status
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-637900 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-637900 status: (11.0800803s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-637900 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-637900 status
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-637900 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-637900 status: (11.0116897s)
--- PASS: TestErrorSpam/status (33.90s)

TestErrorSpam/pause (21.24s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-637900 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-637900 pause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-637900 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-637900 pause: (7.305478s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-637900 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-637900 pause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-637900 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-637900 pause: (7.0074984s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-637900 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-637900 pause
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-637900 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-637900 pause: (6.9272956s)
--- PASS: TestErrorSpam/pause (21.24s)

TestErrorSpam/unpause (21.28s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-637900 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-637900 unpause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-637900 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-637900 unpause: (7.1495587s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-637900 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-637900 unpause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-637900 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-637900 unpause: (6.9885807s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-637900 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-637900 unpause
E0210 10:38:55.618205   11764 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-550800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0210 10:38:55.625660   11764 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-550800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0210 10:38:55.637640   11764 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-550800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0210 10:38:55.659363   11764 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-550800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0210 10:38:55.700796   11764 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-550800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0210 10:38:55.782661   11764 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-550800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0210 10:38:55.944365   11764 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-550800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0210 10:38:56.266222   11764 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-550800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0210 10:38:56.908602   11764 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-550800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0210 10:38:58.190615   11764 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-550800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-637900 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-637900 unpause: (7.1393278s)
--- PASS: TestErrorSpam/unpause (21.28s)

TestErrorSpam/stop (58.91s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-637900 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-637900 stop
E0210 10:39:00.752696   11764 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-550800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0210 10:39:05.875226   11764 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-550800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0210 10:39:16.117580   11764 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-550800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0210 10:39:36.600728   11764 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-550800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-637900 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-637900 stop: (38.3133305s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-637900 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-637900 stop
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-637900 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-637900 stop: (10.5694363s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-637900 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-637900 stop
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-637900 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-637900 stop: (10.0266771s)
--- PASS: TestErrorSpam/stop (58.91s)

TestFunctional/serial/CopySyncFile (0.04s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1872: local sync path: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\test\nested\copy\11764\hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.04s)

TestFunctional/serial/StartWithProxy (212.22s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2251: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-970000 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperv
E0210 10:40:17.563122   11764 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-550800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0210 10:41:39.486561   11764 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-550800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
functional_test.go:2251: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-970000 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperv: (3m32.2061578s)
--- PASS: TestFunctional/serial/StartWithProxy (212.22s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (117.45s)

=== RUN   TestFunctional/serial/SoftStart
I0210 10:43:45.209981   11764 config.go:182] Loaded profile config "functional-970000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
functional_test.go:676: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-970000 --alsologtostderr -v=8
E0210 10:43:55.620188   11764 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-550800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0210 10:44:23.330954   11764 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-550800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
functional_test.go:676: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-970000 --alsologtostderr -v=8: (1m57.4467952s)
functional_test.go:680: soft start took 1m57.4481659s for "functional-970000" cluster.
I0210 10:45:42.659047   11764 config.go:182] Loaded profile config "functional-970000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
--- PASS: TestFunctional/serial/SoftStart (117.45s)
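For reference: a "soft start" is a second start against a profile whose VM is already running, so minikube reuses the existing machine instead of provisioning a new one. The invocation exercised above (the profile name is specific to this run):

    out/minikube-windows-amd64.exe start -p functional-970000 --alsologtostderr -v=8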

TestFunctional/serial/KubeContext (0.11s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:698: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.11s)

TestFunctional/serial/KubectlGetPods (0.2s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:713: (dbg) Run:  kubectl --context functional-970000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.20s)

TestFunctional/serial/CacheCmd/cache/add_remote (24.26s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1066: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-970000 cache add registry.k8s.io/pause:3.1
functional_test.go:1066: (dbg) Done: out/minikube-windows-amd64.exe -p functional-970000 cache add registry.k8s.io/pause:3.1: (8.2631228s)
functional_test.go:1066: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-970000 cache add registry.k8s.io/pause:3.3
functional_test.go:1066: (dbg) Done: out/minikube-windows-amd64.exe -p functional-970000 cache add registry.k8s.io/pause:3.3: (8.0516251s)
functional_test.go:1066: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-970000 cache add registry.k8s.io/pause:latest
functional_test.go:1066: (dbg) Done: out/minikube-windows-amd64.exe -p functional-970000 cache add registry.k8s.io/pause:latest: (7.9405134s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (24.26s)

TestFunctional/serial/CacheCmd/cache/add_local (9.53s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1094: (dbg) Run:  docker build -t minikube-local-cache-test:functional-970000 C:\Users\jenkins.minikube5\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local1765017239\001
functional_test.go:1094: (dbg) Done: docker build -t minikube-local-cache-test:functional-970000 C:\Users\jenkins.minikube5\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local1765017239\001: (1.6649339s)
functional_test.go:1106: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-970000 cache add minikube-local-cache-test:functional-970000
functional_test.go:1106: (dbg) Done: out/minikube-windows-amd64.exe -p functional-970000 cache add minikube-local-cache-test:functional-970000: (7.5252992s)
functional_test.go:1111: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-970000 cache delete minikube-local-cache-test:functional-970000
functional_test.go:1100: (dbg) Run:  docker rmi minikube-local-cache-test:functional-970000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (9.53s)
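The local-cache flow builds an image against the host Docker daemon, copies it into minikube's cache, then cleans up both sides. A sketch using the logged commands (the build-context directory is a placeholder for the temp dir the test generates):

    docker build -t minikube-local-cache-test:functional-970000 <build-context-dir>
    out/minikube-windows-amd64.exe -p functional-970000 cache add minikube-local-cache-test:functional-970000
    out/minikube-windows-amd64.exe -p functional-970000 cache delete minikube-local-cache-test:functional-970000
    docker rmi minikube-local-cache-test:functional-970000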

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.24s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1119: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.24s)

TestFunctional/serial/CacheCmd/cache/list (0.23s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1127: (dbg) Run:  out/minikube-windows-amd64.exe cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.23s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (8.54s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1141: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-970000 ssh sudo crictl images
functional_test.go:1141: (dbg) Done: out/minikube-windows-amd64.exe -p functional-970000 ssh sudo crictl images: (8.5355423s)
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (8.54s)

TestFunctional/serial/CacheCmd/cache/cache_reload (33.27s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1164: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-970000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1164: (dbg) Done: out/minikube-windows-amd64.exe -p functional-970000 ssh sudo docker rmi registry.k8s.io/pause:latest: (8.6421396s)
functional_test.go:1170: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-970000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1170: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-970000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (8.6040301s)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1175: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-970000 cache reload
functional_test.go:1175: (dbg) Done: out/minikube-windows-amd64.exe -p functional-970000 cache reload: (7.4488303s)
functional_test.go:1180: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-970000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1180: (dbg) Done: out/minikube-windows-amd64.exe -p functional-970000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: (8.5726404s)
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (33.27s)
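The reload cycle can be replayed by hand with the same commands the test logs: delete the image inside the VM, confirm crictl no longer finds it, then let cache reload restore everything held in the host-side cache:

    out/minikube-windows-amd64.exe -p functional-970000 ssh sudo docker rmi registry.k8s.io/pause:latest
    out/minikube-windows-amd64.exe -p functional-970000 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails: image gone
    out/minikube-windows-amd64.exe -p functional-970000 cache reload
    out/minikube-windows-amd64.exe -p functional-970000 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again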

TestFunctional/serial/CacheCmd/cache/delete (0.48s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1189: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.1
functional_test.go:1189: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.48s)

TestFunctional/serial/MinikubeKubectlCmd (0.45s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:733: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-970000 kubectl -- --context functional-970000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.45s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (2.36s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:758: (dbg) Run:  out\kubectl.exe --context functional-970000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (2.36s)

TestFunctional/serial/ExtraConfig (120.82s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:774: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-970000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0210 10:48:55.624164   11764 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-550800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
functional_test.go:774: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-970000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (2m0.817432s)
functional_test.go:778: restart took 2m0.8185067s for "functional-970000" cluster.
I0210 10:49:03.131743   11764 config.go:182] Loaded profile config "functional-970000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
--- PASS: TestFunctional/serial/ExtraConfig (120.82s)
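--extra-config takes component.key=value pairs (here targeting the apiserver), and re-running start on the same profile applies them in place, which is what the two-minute restart above covers:

    out/minikube-windows-amd64.exe start -p functional-970000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all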

TestFunctional/serial/ComponentHealth (0.18s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:827: (dbg) Run:  kubectl --context functional-970000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:842: etcd phase: Running
functional_test.go:852: etcd status: Ready
functional_test.go:842: kube-apiserver phase: Running
functional_test.go:852: kube-apiserver status: Ready
functional_test.go:842: kube-controller-manager phase: Running
functional_test.go:852: kube-controller-manager status: Ready
functional_test.go:842: kube-scheduler phase: Running
functional_test.go:852: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.18s)

TestFunctional/serial/LogsCmd (7.86s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1253: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-970000 logs
functional_test.go:1253: (dbg) Done: out/minikube-windows-amd64.exe -p functional-970000 logs: (7.8635009s)
--- PASS: TestFunctional/serial/LogsCmd (7.86s)

TestFunctional/serial/LogsFileCmd (9.88s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1267: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-970000 logs --file C:\Users\jenkins.minikube5\AppData\Local\Temp\TestFunctionalserialLogsFileCmd1137456503\001\logs.txt
functional_test.go:1267: (dbg) Done: out/minikube-windows-amd64.exe -p functional-970000 logs --file C:\Users\jenkins.minikube5\AppData\Local\Temp\TestFunctionalserialLogsFileCmd1137456503\001\logs.txt: (9.8764449s)
--- PASS: TestFunctional/serial/LogsFileCmd (9.88s)

TestFunctional/serial/InvalidService (19.29s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2338: (dbg) Run:  kubectl --context functional-970000 apply -f testdata\invalidsvc.yaml
functional_test.go:2352: (dbg) Run:  out/minikube-windows-amd64.exe service invalid-svc -p functional-970000
functional_test.go:2352: (dbg) Non-zero exit: out/minikube-windows-amd64.exe service invalid-svc -p functional-970000: exit status 115 (15.2974277s)
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://172.29.140.216:32014 |
	|-----------|-------------|-------------|-----------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - C:\Users\jenkins.minikube5\AppData\Local\Temp\minikube_service_d27a1c5599baa2f8050d003f41b0266333639286_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2344: (dbg) Run:  kubectl --context functional-970000 delete -f testdata\invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (19.29s)
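Exit status 115 corresponds to the SVC_UNREACHABLE reason in the stderr block above: the Service object exists (so a URL can still be printed) but no running pod backs it. The same failure can be reproduced with the test's manifests:

    kubectl --context functional-970000 apply -f testdata\invalidsvc.yaml
    out/minikube-windows-amd64.exe service invalid-svc -p functional-970000   # exit 115: SVC_UNREACHABLE
    kubectl --context functional-970000 delete -f testdata\invalidsvc.yaml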

TestFunctional/parallel/ConfigCmd (2.04s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1216: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-970000 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-970000 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-970000 config get cpus: exit status 14 (298.9118ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1216: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-970000 config set cpus 2
functional_test.go:1216: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-970000 config get cpus
functional_test.go:1216: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-970000 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-970000 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-970000 config get cpus: exit status 14 (257.3369ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (2.04s)
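Exit status 14 is the expected result of config get on an unset key, which is what the test asserts on both sides of the set/unset round-trip:

    out/minikube-windows-amd64.exe -p functional-970000 config set cpus 2
    out/minikube-windows-amd64.exe -p functional-970000 config get cpus     # prints 2
    out/minikube-windows-amd64.exe -p functional-970000 config unset cpus
    out/minikube-windows-amd64.exe -p functional-970000 config get cpus     # exit 14: key not found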

TestFunctional/parallel/StatusCmd (37.78s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:871: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-970000 status
functional_test.go:871: (dbg) Done: out/minikube-windows-amd64.exe -p functional-970000 status: (12.3635723s)
functional_test.go:877: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-970000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:877: (dbg) Done: out/minikube-windows-amd64.exe -p functional-970000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: (13.5695645s)
functional_test.go:889: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-970000 status -o json
functional_test.go:889: (dbg) Done: out/minikube-windows-amd64.exe -p functional-970000 status -o json: (11.8415871s)
--- PASS: TestFunctional/parallel/StatusCmd (37.78s)
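status -f accepts a Go template over the status structure ("kublet" in the logged command is only the test's chosen output label; the template field itself is .Kubelet). The three output modes exercised above:

    out/minikube-windows-amd64.exe -p functional-970000 status
    out/minikube-windows-amd64.exe -p functional-970000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
    out/minikube-windows-amd64.exe -p functional-970000 status -o json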

TestFunctional/parallel/ServiceCmdConnect (41.44s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1646: (dbg) Run:  kubectl --context functional-970000 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1652: (dbg) Run:  kubectl --context functional-970000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-58f9cf68d8-444zq" [911552f1-2c94-47e0-9854-716d28886158] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-58f9cf68d8-444zq" [911552f1-2c94-47e0-9854-716d28886158] Running
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 24.0048986s
functional_test.go:1666: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-970000 service hello-node-connect --url
functional_test.go:1666: (dbg) Done: out/minikube-windows-amd64.exe -p functional-970000 service hello-node-connect --url: (17.050329s)
functional_test.go:1672: found endpoint for hello-node-connect: http://172.29.140.216:30852
functional_test.go:1692: http://172.29.140.216:30852: success! body:

Hostname: hello-node-connect-58f9cf68d8-444zq

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://172.29.140.216:8080/

Request Headers:
	accept-encoding=gzip
	host=172.29.140.216:30852
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (41.44s)
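This is the standard NodePort round-trip: create a deployment, expose it, then ask minikube for the reachable URL (names and image as logged above):

    kubectl --context functional-970000 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
    kubectl --context functional-970000 expose deployment hello-node-connect --type=NodePort --port=8080
    out/minikube-windows-amd64.exe -p functional-970000 service hello-node-connect --url   # prints http://<vm-ip>:<nodeport>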

TestFunctional/parallel/AddonsCmd (0.65s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1707: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-970000 addons list
functional_test.go:1719: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-970000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.65s)

TestFunctional/parallel/PersistentVolumeClaim (44.92s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [e4d93170-4153-4b93-8add-41b2c45a27d6] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.0043381s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-970000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-970000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-970000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-970000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [3ba3cb84-dab0-4f75-b8fb-ff8e1f1bfab2] Pending
helpers_test.go:344: "sp-pod" [3ba3cb84-dab0-4f75-b8fb-ff8e1f1bfab2] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [3ba3cb84-dab0-4f75-b8fb-ff8e1f1bfab2] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 26.0052406s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-970000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-970000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-970000 delete -f testdata/storage-provisioner/pod.yaml: (2.7540527s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-970000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [a1e673a8-202d-4851-9050-f62c918e1cd4] Pending
helpers_test.go:344: "sp-pod" [a1e673a8-202d-4851-9050-f62c918e1cd4] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [a1e673a8-202d-4851-9050-f62c918e1cd4] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.0064529s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-970000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (44.92s)
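The persistence check is: write a marker file through the first pod, delete that pod, recreate it against the same claim, and confirm the file survived. The same steps by hand, using the test's manifests:

    kubectl --context functional-970000 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-970000 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-970000 exec sp-pod -- touch /tmp/mount/foo
    kubectl --context functional-970000 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-970000 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-970000 exec sp-pod -- ls /tmp/mount   # foo survives the pod recreation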

TestFunctional/parallel/SSHCmd (20.06s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1742: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-970000 ssh "echo hello"
functional_test.go:1742: (dbg) Done: out/minikube-windows-amd64.exe -p functional-970000 ssh "echo hello": (10.5169333s)
functional_test.go:1759: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-970000 ssh "cat /etc/hostname"
functional_test.go:1759: (dbg) Done: out/minikube-windows-amd64.exe -p functional-970000 ssh "cat /etc/hostname": (9.5415074s)
--- PASS: TestFunctional/parallel/SSHCmd (20.06s)

TestFunctional/parallel/CpCmd (51.9s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-970000 cp testdata\cp-test.txt /home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-970000 cp testdata\cp-test.txt /home/docker/cp-test.txt: (8.2082161s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-970000 ssh -n functional-970000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-970000 ssh -n functional-970000 "sudo cat /home/docker/cp-test.txt": (9.7261348s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-970000 cp functional-970000:/home/docker/cp-test.txt C:\Users\jenkins.minikube5\AppData\Local\Temp\TestFunctionalparallelCpCmd3944798500\001\cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-970000 cp functional-970000:/home/docker/cp-test.txt C:\Users\jenkins.minikube5\AppData\Local\Temp\TestFunctionalparallelCpCmd3944798500\001\cp-test.txt: (8.9119292s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-970000 ssh -n functional-970000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-970000 ssh -n functional-970000 "sudo cat /home/docker/cp-test.txt": (8.7635518s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-970000 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-970000 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt: (6.828877s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-970000 ssh -n functional-970000 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-970000 ssh -n functional-970000 "sudo cat /tmp/does/not/exist/cp-test.txt": (9.4602142s)
--- PASS: TestFunctional/parallel/CpCmd (51.90s)
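minikube cp works in both directions; the node side uses profile:path syntax. A sketch of the two transfers (the local destination below is a placeholder for the temp path the test uses):

    out/minikube-windows-amd64.exe -p functional-970000 cp testdata\cp-test.txt /home/docker/cp-test.txt
    out/minikube-windows-amd64.exe -p functional-970000 cp functional-970000:/home/docker/cp-test.txt <local-dir>\cp-test.txt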

TestFunctional/parallel/MySQL (55.41s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1810: (dbg) Run:  kubectl --context functional-970000 replace --force -f testdata\mysql.yaml
functional_test.go:1816: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-58ccfd96bb-dh7x5" [ce4f68d9-bd15-4e52-9f3a-1c83681a17ea] Pending
helpers_test.go:344: "mysql-58ccfd96bb-dh7x5" [ce4f68d9-bd15-4e52-9f3a-1c83681a17ea] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-58ccfd96bb-dh7x5" [ce4f68d9-bd15-4e52-9f3a-1c83681a17ea] Running
functional_test.go:1816: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 44.0054407s
functional_test.go:1824: (dbg) Run:  kubectl --context functional-970000 exec mysql-58ccfd96bb-dh7x5 -- mysql -ppassword -e "show databases;"
functional_test.go:1824: (dbg) Non-zero exit: kubectl --context functional-970000 exec mysql-58ccfd96bb-dh7x5 -- mysql -ppassword -e "show databases;": exit status 1 (259.0565ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
I0210 10:52:43.223244   11764 retry.go:31] will retry after 826.38362ms: exit status 1
functional_test.go:1824: (dbg) Run:  kubectl --context functional-970000 exec mysql-58ccfd96bb-dh7x5 -- mysql -ppassword -e "show databases;"
functional_test.go:1824: (dbg) Non-zero exit: kubectl --context functional-970000 exec mysql-58ccfd96bb-dh7x5 -- mysql -ppassword -e "show databases;": exit status 1 (251.7291ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
I0210 10:52:44.308455   11764 retry.go:31] will retry after 2.048305782s: exit status 1
functional_test.go:1824: (dbg) Run:  kubectl --context functional-970000 exec mysql-58ccfd96bb-dh7x5 -- mysql -ppassword -e "show databases;"
functional_test.go:1824: (dbg) Non-zero exit: kubectl --context functional-970000 exec mysql-58ccfd96bb-dh7x5 -- mysql -ppassword -e "show databases;": exit status 1 (311.0641ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
I0210 10:52:46.674731   11764 retry.go:31] will retry after 1.733148771s: exit status 1
functional_test.go:1824: (dbg) Run:  kubectl --context functional-970000 exec mysql-58ccfd96bb-dh7x5 -- mysql -ppassword -e "show databases;"
functional_test.go:1824: (dbg) Non-zero exit: kubectl --context functional-970000 exec mysql-58ccfd96bb-dh7x5 -- mysql -ppassword -e "show databases;": exit status 1 (268.306ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
I0210 10:52:48.683235   11764 retry.go:31] will retry after 4.98546725s: exit status 1
functional_test.go:1824: (dbg) Run:  kubectl --context functional-970000 exec mysql-58ccfd96bb-dh7x5 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (55.41s)
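The ERROR 1045 retries above are expected: mysqld typically needs a few more seconds after the pod reports Running before the root password is usable, so the test polls with backoff until this probe succeeds (pod name is specific to this run):

    kubectl --context functional-970000 exec mysql-58ccfd96bb-dh7x5 -- mysql -ppassword -e "show databases;"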

TestFunctional/parallel/FileSync (9.28s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1946: Checking for existence of /etc/test/nested/copy/11764/hosts within VM
functional_test.go:1948: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-970000 ssh "sudo cat /etc/test/nested/copy/11764/hosts"
functional_test.go:1948: (dbg) Done: out/minikube-windows-amd64.exe -p functional-970000 ssh "sudo cat /etc/test/nested/copy/11764/hosts": (9.2778058s)
functional_test.go:1953: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (9.28s)

TestFunctional/parallel/CertSync (55.35s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1989: Checking for existence of /etc/ssl/certs/11764.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-970000 ssh "sudo cat /etc/ssl/certs/11764.pem"
functional_test.go:1990: (dbg) Done: out/minikube-windows-amd64.exe -p functional-970000 ssh "sudo cat /etc/ssl/certs/11764.pem": (9.2802314s)
functional_test.go:1989: Checking for existence of /usr/share/ca-certificates/11764.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-970000 ssh "sudo cat /usr/share/ca-certificates/11764.pem"
functional_test.go:1990: (dbg) Done: out/minikube-windows-amd64.exe -p functional-970000 ssh "sudo cat /usr/share/ca-certificates/11764.pem": (9.4283869s)
functional_test.go:1989: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1990: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-970000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1990: (dbg) Done: out/minikube-windows-amd64.exe -p functional-970000 ssh "sudo cat /etc/ssl/certs/51391683.0": (9.3080347s)
functional_test.go:2016: Checking for existence of /etc/ssl/certs/117642.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-970000 ssh "sudo cat /etc/ssl/certs/117642.pem"
functional_test.go:2017: (dbg) Done: out/minikube-windows-amd64.exe -p functional-970000 ssh "sudo cat /etc/ssl/certs/117642.pem": (9.3088133s)
functional_test.go:2016: Checking for existence of /usr/share/ca-certificates/117642.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-970000 ssh "sudo cat /usr/share/ca-certificates/117642.pem"
functional_test.go:2017: (dbg) Done: out/minikube-windows-amd64.exe -p functional-970000 ssh "sudo cat /usr/share/ca-certificates/117642.pem": (8.9140043s)
functional_test.go:2016: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2017: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-970000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:2017: (dbg) Done: out/minikube-windows-amd64.exe -p functional-970000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": (9.112547s)
--- PASS: TestFunctional/parallel/CertSync (55.35s)
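Each synced certificate is probed in three locations: /etc/ssl/certs, /usr/share/ca-certificates, and a hashed .0 filename (51391683.0 here) of the kind OpenSSL-style CA lookups use. One of the probes, verbatim:

    out/minikube-windows-amd64.exe -p functional-970000 ssh "sudo cat /etc/ssl/certs/51391683.0"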

TestFunctional/parallel/NodeLabels (0.17s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:236: (dbg) Run:  kubectl --context functional-970000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.17s)

TestFunctional/parallel/NonActiveRuntimeDisabled (9.08s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2044: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-970000 ssh "sudo systemctl is-active crio"
functional_test.go:2044: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-970000 ssh "sudo systemctl is-active crio": exit status 1 (9.0771097s)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (9.08s)
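systemctl is-active prints "inactive" and exits non-zero for a stopped unit (the ssh layer surfaces the remote status 3), which is the pass condition here since Docker, not CRI-O, is the active runtime in this run:

    out/minikube-windows-amd64.exe -p functional-970000 ssh "sudo systemctl is-active crio"   # "inactive", non-zero exit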

TestFunctional/parallel/License (1.24s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2305: (dbg) Run:  out/minikube-windows-amd64.exe license
functional_test.go:2305: (dbg) Done: out/minikube-windows-amd64.exe license: (1.2255238s)
--- PASS: TestFunctional/parallel/License (1.24s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (8.56s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-970000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-970000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-970000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-970000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 9520: OpenProcess: The parameter is incorrect.
helpers_test.go:508: unable to kill pid 1840: TerminateProcess: Access is denied.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (8.56s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-970000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (15.59s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-970000 apply -f testdata\testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [44b72f7f-f1c5-474f-88a5-d84238040896] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [44b72f7f-f1c5-474f-88a5-d84238040896] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 15.0052669s
I0210 10:50:04.497822   11764 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (15.59s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-970000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 7300: TerminateProcess: Access is denied.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (20.33s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1456: (dbg) Run:  kubectl --context functional-970000 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1462: (dbg) Run:  kubectl --context functional-970000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1467: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-fcfd88b6f-4vqsl" [2b939036-5d7a-4efa-98a4-b5350341dd97] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-fcfd88b6f-4vqsl" [2b939036-5d7a-4efa-98a4-b5350341dd97] Running
functional_test.go:1467: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 20.0053109s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (20.33s)

TestFunctional/parallel/ServiceCmd/List (12.51s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1476: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-970000 service list
functional_test.go:1476: (dbg) Done: out/minikube-windows-amd64.exe -p functional-970000 service list: (12.5079466s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (12.51s)

TestFunctional/parallel/ProfileCmd/profile_not_create (12.9s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1287: (dbg) Run:  out/minikube-windows-amd64.exe profile lis
functional_test.go:1292: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
functional_test.go:1292: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (12.5912073s)
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (12.90s)

TestFunctional/parallel/ServiceCmd/JSONOutput (13.1s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1506: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-970000 service list -o json
functional_test.go:1506: (dbg) Done: out/minikube-windows-amd64.exe -p functional-970000 service list -o json: (13.1005482s)
functional_test.go:1511: Took "13.1006077s" to run "out/minikube-windows-amd64.exe -p functional-970000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (13.10s)

TestFunctional/parallel/ProfileCmd/profile_list (13.1s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1327: (dbg) Run:  out/minikube-windows-amd64.exe profile list
functional_test.go:1327: (dbg) Done: out/minikube-windows-amd64.exe profile list: (12.8083456s)
functional_test.go:1332: Took "12.808885s" to run "out/minikube-windows-amd64.exe profile list"
functional_test.go:1341: (dbg) Run:  out/minikube-windows-amd64.exe profile list -l
functional_test.go:1346: Took "295.6318ms" to run "out/minikube-windows-amd64.exe profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (13.10s)

TestFunctional/parallel/ProfileCmd/profile_json_output (13.2s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1378: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json
functional_test.go:1378: (dbg) Done: out/minikube-windows-amd64.exe profile list -o json: (12.9468529s)
functional_test.go:1383: Took "12.9477552s" to run "out/minikube-windows-amd64.exe profile list -o json"
functional_test.go:1391: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json --light
functional_test.go:1396: Took "252.0966ms" to run "out/minikube-windows-amd64.exe profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (13.20s)
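The ~13 s versus ~250 ms gap above is the status probe: a plain profile list validates each profile's Hyper-V VM, while --light reads only the local config and skips status validation:

    out/minikube-windows-amd64.exe profile list -o json           # probes each VM's state
    out/minikube-windows-amd64.exe profile list -o json --light   # config only, no status probe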

TestFunctional/parallel/DockerEnv/powershell (39.24s)

=== RUN   TestFunctional/parallel/DockerEnv/powershell
functional_test.go:516: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-970000 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-970000"
functional_test.go:516: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-970000 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-970000": (25.7594391s)
functional_test.go:539: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-970000 docker-env | Invoke-Expression ; docker images"
functional_test.go:539: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-970000 docker-env | Invoke-Expression ; docker images": (13.4674192s)
--- PASS: TestFunctional/parallel/DockerEnv/powershell (39.24s)
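docker-env prints PowerShell $Env: assignments; piping them through Invoke-Expression points the local docker client at the daemon inside the VM for the rest of the session:

    out/minikube-windows-amd64.exe -p functional-970000 docker-env | Invoke-Expression
    docker images   # now lists the images inside the minikube VM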

TestFunctional/parallel/UpdateContextCmd/no_changes (2.35s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2136: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-970000 update-context --alsologtostderr -v=2
functional_test.go:2136: (dbg) Done: out/minikube-windows-amd64.exe -p functional-970000 update-context --alsologtostderr -v=2: (2.3497831s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (2.35s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (2.27s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2136: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-970000 update-context --alsologtostderr -v=2
functional_test.go:2136: (dbg) Done: out/minikube-windows-amd64.exe -p functional-970000 update-context --alsologtostderr -v=2: (2.2719292s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (2.27s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (2.33s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2136: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-970000 update-context --alsologtostderr -v=2
functional_test.go:2136: (dbg) Done: out/minikube-windows-amd64.exe -p functional-970000 update-context --alsologtostderr -v=2: (2.3286878s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (2.33s)

TestFunctional/parallel/Version/short (0.25s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2273: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-970000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.25s)

TestFunctional/parallel/Version/components (7.29s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2287: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-970000 version -o=json --components
functional_test.go:2287: (dbg) Done: out/minikube-windows-amd64.exe -p functional-970000 version -o=json --components: (7.288667s)
--- PASS: TestFunctional/parallel/Version/components (7.29s)

TestFunctional/parallel/ImageCommands/ImageListShort (7.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:278: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-970000 image ls --format short --alsologtostderr
functional_test.go:278: (dbg) Done: out/minikube-windows-amd64.exe -p functional-970000 image ls --format short --alsologtostderr: (7.2843756s)
functional_test.go:283: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-970000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.32.1
registry.k8s.io/kube-proxy:v1.32.1
registry.k8s.io/kube-controller-manager:v1.32.1
registry.k8s.io/kube-apiserver:v1.32.1
registry.k8s.io/etcd:3.5.16-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-970000
docker.io/kicbase/echo-server:functional-970000
functional_test.go:286: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-970000 image ls --format short --alsologtostderr:
I0210 10:53:39.112620    7532 out.go:345] Setting OutFile to fd 1468 ...
I0210 10:53:39.194623    7532 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0210 10:53:39.194623    7532 out.go:358] Setting ErrFile to fd 780...
I0210 10:53:39.194623    7532 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0210 10:53:39.210615    7532 config.go:182] Loaded profile config "functional-970000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
I0210 10:53:39.210615    7532 config.go:182] Loaded profile config "functional-970000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
I0210 10:53:39.211608    7532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-970000 ).state
I0210 10:53:41.473615    7532 main.go:141] libmachine: [stdout =====>] : Running

I0210 10:53:41.473615    7532 main.go:141] libmachine: [stderr =====>] : 
I0210 10:53:41.484627    7532 ssh_runner.go:195] Run: systemctl --version
I0210 10:53:41.484627    7532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-970000 ).state
I0210 10:53:43.623135    7532 main.go:141] libmachine: [stdout =====>] : Running

I0210 10:53:43.623135    7532 main.go:141] libmachine: [stderr =====>] : 
I0210 10:53:43.623135    7532 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-970000 ).networkadapters[0]).ipaddresses[0]
I0210 10:53:46.084935    7532 main.go:141] libmachine: [stdout =====>] : 172.29.140.216

I0210 10:53:46.084935    7532 main.go:141] libmachine: [stderr =====>] : 
I0210 10:53:46.085858    7532 sshutil.go:53] new ssh client: &{IP:172.29.140.216 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\functional-970000\id_rsa Username:docker}
I0210 10:53:46.184380    7532 ssh_runner.go:235] Completed: systemctl --version: (4.6996278s)
I0210 10:53:46.193882    7532 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (7.28s)
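
The stderr above shows the lookup sequence the Hyper-V driver performs before every SSH call: query the VM's state, then the first IP of its first network adapter, both via PowerShell. A minimal standalone sketch of those two queries follows (Windows host with Hyper-V assumed; the hypervQuery helper name is illustrative, while the two PowerShell expressions are copied from the log):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // hypervQuery runs one PowerShell expression the way the log shows
    // libmachine doing it: non-interactive, with no profile loaded.
    func hypervQuery(expr string) (string, error) {
        out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", expr).Output()
        return strings.TrimSpace(string(out)), err
    }

    func main() {
        state, err := hypervQuery(`( Hyper-V\Get-VM functional-970000 ).state`)
        if err != nil {
            fmt.Println("state query failed:", err)
            return
        }
        ip, err := hypervQuery(`(( Hyper-V\Get-VM functional-970000 ).networkadapters[0]).ipaddresses[0]`)
        if err != nil {
            fmt.Println("ip query failed:", err)
            return
        }
        fmt.Printf("VM is %s at %s\n", state, ip) // e.g. "VM is Running at 172.29.140.216"
    }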

TestFunctional/parallel/ImageCommands/ImageListTable (7.02s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:278: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-970000 image ls --format table --alsologtostderr
functional_test.go:278: (dbg) Done: out/minikube-windows-amd64.exe -p functional-970000 image ls --format table --alsologtostderr: (7.02305s)
functional_test.go:283: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-970000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| registry.k8s.io/kube-apiserver              | v1.32.1           | 95c0bda56fc4d | 97MB   |
| registry.k8s.io/etcd                        | 3.5.16-0          | a9e7e6b294baf | 150MB  |
| registry.k8s.io/coredns/coredns             | v1.11.3           | c69fa2e9cbf5f | 61.8MB |
| docker.io/library/mysql                     | 5.7               | 5107333e08a87 | 501MB  |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| docker.io/library/minikube-local-cache-test | functional-970000 | f4a98ef3fd15f | 30B    |
| docker.io/library/nginx                     | alpine            | d41a14a4ecff9 | 47.9MB |
| registry.k8s.io/kube-controller-manager     | v1.32.1           | 019ee182b58e2 | 89.7MB |
| registry.k8s.io/pause                       | 3.10              | 873ed75102791 | 736kB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| docker.io/library/nginx                     | latest            | 97662d24417b3 | 192MB  |
| registry.k8s.io/kube-scheduler              | v1.32.1           | 2b0d6572d062c | 69.6MB |
| registry.k8s.io/kube-proxy                  | v1.32.1           | e29f9c7391fd9 | 94MB   |
| docker.io/kicbase/echo-server               | functional-970000 | 9056ab77afb8e | 4.94MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:286: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-970000 image ls --format table --alsologtostderr:
I0210 10:53:46.431210   13268 out.go:345] Setting OutFile to fd 1156 ...
I0210 10:53:46.527777   13268 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0210 10:53:46.527777   13268 out.go:358] Setting ErrFile to fd 1448...
I0210 10:53:46.527777   13268 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0210 10:53:46.541738   13268 config.go:182] Loaded profile config "functional-970000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
I0210 10:53:46.542591   13268 config.go:182] Loaded profile config "functional-970000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
I0210 10:53:46.543339   13268 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-970000 ).state
I0210 10:53:48.573945   13268 main.go:141] libmachine: [stdout =====>] : Running

I0210 10:53:48.573945   13268 main.go:141] libmachine: [stderr =====>] : 
I0210 10:53:48.582946   13268 ssh_runner.go:195] Run: systemctl --version
I0210 10:53:48.582946   13268 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-970000 ).state
I0210 10:53:50.680522   13268 main.go:141] libmachine: [stdout =====>] : Running

I0210 10:53:50.680522   13268 main.go:141] libmachine: [stderr =====>] : 
I0210 10:53:50.680620   13268 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-970000 ).networkadapters[0]).ipaddresses[0]
I0210 10:53:53.146465   13268 main.go:141] libmachine: [stdout =====>] : 172.29.140.216

I0210 10:53:53.146538   13268 main.go:141] libmachine: [stderr =====>] : 
I0210 10:53:53.146748   13268 sshutil.go:53] new ssh client: &{IP:172.29.140.216 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\functional-970000\id_rsa Username:docker}
I0210 10:53:53.257065   13268 ssh_runner.go:235] Completed: systemctl --version: (4.6740667s)
I0210 10:53:53.265565   13268 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (7.02s)
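
Per the stderr, every image ls variant ends in the same remote command: docker images --no-trunc --format "{{json .}}", which emits one JSON object per line (NDJSON). A minimal sketch of consuming that stream directly against a local docker daemon; only a few well-known template keys are mapped here and any others are ignored:

    package main

    import (
        "bufio"
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // dockerImage maps common keys of `docker images --format "{{json .}}"`.
    type dockerImage struct {
        Repository string `json:"Repository"`
        Tag        string `json:"Tag"`
        ID         string `json:"ID"`
        Size       string `json:"Size"`
    }

    func main() {
        cmd := exec.Command("docker", "images", "--no-trunc", "--format", "{{json .}}")
        stdout, err := cmd.StdoutPipe()
        if err != nil {
            panic(err)
        }
        if err := cmd.Start(); err != nil {
            panic(err)
        }
        sc := bufio.NewScanner(stdout)
        for sc.Scan() { // one JSON object per line
            var img dockerImage
            if err := json.Unmarshal(sc.Bytes(), &img); err != nil {
                continue // skip malformed lines
            }
            fmt.Printf("%s:%s %s %s\n", img.Repository, img.Tag, img.ID, img.Size)
        }
        if err := cmd.Wait(); err != nil {
            panic(err)
        }
    }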

TestFunctional/parallel/ImageCommands/ImageListJson (7.12s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:278: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-970000 image ls --format json --alsologtostderr
functional_test.go:278: (dbg) Done: out/minikube-windows-amd64.exe -p functional-970000 image ls --format json --alsologtostderr: (7.1157013s)
functional_test.go:283: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-970000 image ls --format json --alsologtostderr:
[{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.32.1"],"size":"69600000"},{"id":"e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.32.1"],"size":"94000000"},{"id":"a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.16-0"],"size":"150000000"},{"id":"019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.32.1"],"size":"89700000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b7
8f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"97662d24417b316f60607afbca9f226a2ba58f09d642f27b8e197a89859ddc8e","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"192000000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-970000"],"size":"4940000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"61800000"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"736000"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":[],"repoTags":["docker.io/library/mysql:5.
7"],"size":"501000000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"f4a98ef3fd15f22cac1c39f3ff807bf60ebd215096f8e017f8b54b9a59a55a80","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-970000"],"size":"30"},{"id":"d41a14a4ecff96bdae6253ad2f58d8f258786db438307846081e8d835b984111","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"47900000"},{"id":"95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.32.1"],"size":"97000000"}]
functional_test.go:286: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-970000 image ls --format json --alsologtostderr:
I0210 10:53:46.391204    5716 out.go:345] Setting OutFile to fd 1504 ...
I0210 10:53:46.449208    5716 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0210 10:53:46.449208    5716 out.go:358] Setting ErrFile to fd 1412...
I0210 10:53:46.449208    5716 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0210 10:53:46.463205    5716 config.go:182] Loaded profile config "functional-970000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
I0210 10:53:46.463205    5716 config.go:182] Loaded profile config "functional-970000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
I0210 10:53:46.464203    5716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-970000 ).state
I0210 10:53:48.517012    5716 main.go:141] libmachine: [stdout =====>] : Running

I0210 10:53:48.517012    5716 main.go:141] libmachine: [stderr =====>] : 
I0210 10:53:48.525011    5716 ssh_runner.go:195] Run: systemctl --version
I0210 10:53:48.526010    5716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-970000 ).state
I0210 10:53:50.678177    5716 main.go:141] libmachine: [stdout =====>] : Running

I0210 10:53:50.678177    5716 main.go:141] libmachine: [stderr =====>] : 
I0210 10:53:50.678177    5716 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-970000 ).networkadapters[0]).ipaddresses[0]
I0210 10:53:53.199494    5716 main.go:141] libmachine: [stdout =====>] : 172.29.140.216

I0210 10:53:53.200037    5716 main.go:141] libmachine: [stderr =====>] : 
I0210 10:53:53.200037    5716 sshutil.go:53] new ssh client: &{IP:172.29.140.216 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\functional-970000\id_rsa Username:docker}
I0210 10:53:53.309694    5716 ssh_runner.go:235] Completed: systemctl --version: (4.7846301s)
I0210 10:53:53.315698    5716 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
E0210 10:53:55.626301   11764 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-550800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (7.12s)
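
The JSON stdout above has a stable shape per entry: id, repoDigests, repoTags, and size (a byte count as a string). A minimal sketch of decoding it with encoding/json; the imageEntry type is an illustrative mapping of those fields, and the input is one entry copied from the output:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // imageEntry mirrors the fields visible in the stdout above.
    type imageEntry struct {
        ID          string   `json:"id"`
        RepoDigests []string `json:"repoDigests"`
        RepoTags    []string `json:"repoTags"`
        Size        string   `json:"size"`
    }

    func main() {
        raw := `[{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"736000"}]`
        var imgs []imageEntry
        if err := json.Unmarshal([]byte(raw), &imgs); err != nil {
            panic(err)
        }
        for _, img := range imgs {
            fmt.Println(img.RepoTags[0], img.Size) // registry.k8s.io/pause:3.10 736000
        }
    }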

TestFunctional/parallel/ImageCommands/ImageListYaml (7.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:278: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-970000 image ls --format yaml --alsologtostderr
functional_test.go:278: (dbg) Done: out/minikube-windows-amd64.exe -p functional-970000 image ls --format yaml --alsologtostderr: (7.3281572s)
functional_test.go:283: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-970000 image ls --format yaml --alsologtostderr:
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: f4a98ef3fd15f22cac1c39f3ff807bf60ebd215096f8e017f8b54b9a59a55a80
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-970000
size: "30"
- id: d41a14a4ecff96bdae6253ad2f58d8f258786db438307846081e8d835b984111
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "47900000"
- id: 019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.32.1
size: "89700000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 97662d24417b316f60607afbca9f226a2ba58f09d642f27b8e197a89859ddc8e
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "192000000"
- id: 2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.32.1
size: "69600000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-970000
size: "4940000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.32.1
size: "97000000"
- id: e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.32.1
size: "94000000"
- id: a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.16-0
size: "150000000"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "736000"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "61800000"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"

functional_test.go:286: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-970000 image ls --format yaml --alsologtostderr:
I0210 10:53:39.111614   13020 out.go:345] Setting OutFile to fd 1516 ...
I0210 10:53:39.235075   13020 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0210 10:53:39.235075   13020 out.go:358] Setting ErrFile to fd 1156...
I0210 10:53:39.235075   13020 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0210 10:53:39.249319   13020 config.go:182] Loaded profile config "functional-970000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
I0210 10:53:39.249888   13020 config.go:182] Loaded profile config "functional-970000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
I0210 10:53:39.250883   13020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-970000 ).state
I0210 10:53:41.474625   13020 main.go:141] libmachine: [stdout =====>] : Running

I0210 10:53:41.474625   13020 main.go:141] libmachine: [stderr =====>] : 
I0210 10:53:41.484627   13020 ssh_runner.go:195] Run: systemctl --version
I0210 10:53:41.484627   13020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-970000 ).state
I0210 10:53:43.620649   13020 main.go:141] libmachine: [stdout =====>] : Running

I0210 10:53:43.620689   13020 main.go:141] libmachine: [stderr =====>] : 
I0210 10:53:43.620907   13020 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-970000 ).networkadapters[0]).ipaddresses[0]
I0210 10:53:46.116186   13020 main.go:141] libmachine: [stdout =====>] : 172.29.140.216

I0210 10:53:46.116186   13020 main.go:141] libmachine: [stderr =====>] : 
I0210 10:53:46.116186   13020 sshutil.go:53] new ssh client: &{IP:172.29.140.216 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\functional-970000\id_rsa Username:docker}
I0210 10:53:46.224841   13020 ssh_runner.go:235] Completed: systemctl --version: (4.7401619s)
I0210 10:53:46.235566   13020 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (7.33s)

TestFunctional/parallel/ImageCommands/ImageBuild (26.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:325: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-970000 ssh pgrep buildkitd
functional_test.go:325: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-970000 ssh pgrep buildkitd: exit status 1 (9.3496839s)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:332: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-970000 image build -t localhost/my-image:functional-970000 testdata\build --alsologtostderr
functional_test.go:332: (dbg) Done: out/minikube-windows-amd64.exe -p functional-970000 image build -t localhost/my-image:functional-970000 testdata\build --alsologtostderr: (10.0408727s)
functional_test.go:340: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-970000 image build -t localhost/my-image:functional-970000 testdata\build --alsologtostderr:
I0210 10:53:48.461008    6148 out.go:345] Setting OutFile to fd 780 ...
I0210 10:53:48.549213    6148 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0210 10:53:48.549213    6148 out.go:358] Setting ErrFile to fd 1504...
I0210 10:53:48.549213    6148 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0210 10:53:48.565893    6148 config.go:182] Loaded profile config "functional-970000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
I0210 10:53:48.587950    6148 config.go:182] Loaded profile config "functional-970000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
I0210 10:53:48.588950    6148 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-970000 ).state
I0210 10:53:50.679614    6148 main.go:141] libmachine: [stdout =====>] : Running

I0210 10:53:50.679614    6148 main.go:141] libmachine: [stderr =====>] : 
I0210 10:53:50.690666    6148 ssh_runner.go:195] Run: systemctl --version
I0210 10:53:50.690666    6148 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-970000 ).state
I0210 10:53:52.770724    6148 main.go:141] libmachine: [stdout =====>] : Running

I0210 10:53:52.770724    6148 main.go:141] libmachine: [stderr =====>] : 
I0210 10:53:52.770724    6148 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-970000 ).networkadapters[0]).ipaddresses[0]
I0210 10:53:55.153524    6148 main.go:141] libmachine: [stdout =====>] : 172.29.140.216

I0210 10:53:55.153524    6148 main.go:141] libmachine: [stderr =====>] : 
I0210 10:53:55.153729    6148 sshutil.go:53] new ssh client: &{IP:172.29.140.216 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\functional-970000\id_rsa Username:docker}
I0210 10:53:55.257350    6148 ssh_runner.go:235] Completed: systemctl --version: (4.5666332s)
I0210 10:53:55.257350    6148 build_images.go:161] Building image from path: C:\Users\jenkins.minikube5\AppData\Local\Temp\build.1094453401.tar
I0210 10:53:55.265919    6148 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0210 10:53:55.294821    6148 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1094453401.tar
I0210 10:53:55.302170    6148 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1094453401.tar: stat -c "%s %y" /var/lib/minikube/build/build.1094453401.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1094453401.tar': No such file or directory
I0210 10:53:55.302170    6148 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\AppData\Local\Temp\build.1094453401.tar --> /var/lib/minikube/build/build.1094453401.tar (3072 bytes)
I0210 10:53:55.355038    6148 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1094453401
I0210 10:53:55.383250    6148 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1094453401 -xf /var/lib/minikube/build/build.1094453401.tar
I0210 10:53:55.400171    6148 docker.go:360] Building image: /var/lib/minikube/build/build.1094453401
I0210 10:53:55.407337    6148 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-970000 /var/lib/minikube/build/build.1094453401
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.1s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.8s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.1s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.1s done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.2s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.2s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.2s done
#5 DONE 0.7s

#6 [2/3] RUN true
#6 DONE 0.3s

#7 [3/3] ADD content.txt /
#7 DONE 0.2s

#8 exporting to image
#8 exporting layers 0.1s done
#8 writing image sha256:498bfcb6b2a1b08e1b7917fedf218a222dc7ea32b0da2fbfa1598c9485a6b5c5 done
#8 naming to localhost/my-image:functional-970000 0.0s done
#8 DONE 0.2s
I0210 10:53:58.287090    6148 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-970000 /var/lib/minikube/build/build.1094453401: (2.8797206s)
I0210 10:53:58.296409    6148 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1094453401
I0210 10:53:58.326783    6148 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1094453401.tar
I0210 10:53:58.347341    6148 build_images.go:217] Built localhost/my-image:functional-970000 from C:\Users\jenkins.minikube5\AppData\Local\Temp\build.1094453401.tar
I0210 10:53:58.347520    6148 build_images.go:133] succeeded building to: functional-970000
I0210 10:53:58.347520    6148 build_images.go:134] failed building to: 
functional_test.go:468: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-970000 image ls
functional_test.go:468: (dbg) Done: out/minikube-windows-amd64.exe -p functional-970000 image ls: (6.6474871s)
E0210 10:55:18.700751   11764 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-550800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (26.04s)
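
The stderr traces the whole build path: pack the local context testdata\build into a tar, scp it to /var/lib/minikube/build inside the VM, untar it there, and run docker build; the context itself resolves to three steps (FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /). A minimal sketch of just the packing step with archive/tar; tarDir and the output path are illustrative, not minikube's build_images.go:

    package main

    import (
        "archive/tar"
        "io"
        "os"
        "path/filepath"
    )

    // tarDir writes every regular file under dir into a tar at dest,
    // with slash-separated names relative to dir.
    func tarDir(dir, dest string) error {
        f, err := os.Create(dest)
        if err != nil {
            return err
        }
        defer f.Close()
        tw := tar.NewWriter(f)
        defer tw.Close()
        return filepath.Walk(dir, func(path string, info os.FileInfo, err error) error {
            if err != nil || info.IsDir() {
                return err
            }
            rel, err := filepath.Rel(dir, path)
            if err != nil {
                return err
            }
            hdr, err := tar.FileInfoHeader(info, "")
            if err != nil {
                return err
            }
            hdr.Name = filepath.ToSlash(rel)
            if err := tw.WriteHeader(hdr); err != nil {
                return err
            }
            src, err := os.Open(path)
            if err != nil {
                return err
            }
            defer src.Close()
            _, err = io.Copy(tw, src)
            return err
        })
    }

    func main() {
        if err := tarDir(`testdata\build`, "build.tar"); err != nil {
            panic(err)
        }
    }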

TestFunctional/parallel/ImageCommands/Setup (3.44s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:359: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:359: (dbg) Done: docker pull kicbase/echo-server:1.0: (3.3323204s)
functional_test.go:364: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-970000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (3.44s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (16.05s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:372: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-970000 image load --daemon kicbase/echo-server:functional-970000 --alsologtostderr
functional_test.go:372: (dbg) Done: out/minikube-windows-amd64.exe -p functional-970000 image load --daemon kicbase/echo-server:functional-970000 --alsologtostderr: (9.3373198s)
functional_test.go:468: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-970000 image ls
functional_test.go:468: (dbg) Done: out/minikube-windows-amd64.exe -p functional-970000 image ls: (6.7139265s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (16.05s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (14.00s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:382: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-970000 image load --daemon kicbase/echo-server:functional-970000 --alsologtostderr
functional_test.go:382: (dbg) Done: out/minikube-windows-amd64.exe -p functional-970000 image load --daemon kicbase/echo-server:functional-970000 --alsologtostderr: (7.3354113s)
functional_test.go:468: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-970000 image ls
functional_test.go:468: (dbg) Done: out/minikube-windows-amd64.exe -p functional-970000 image ls: (6.6597194s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (14.00s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (14.75s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:252: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:257: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-970000
functional_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-970000 image load --daemon kicbase/echo-server:functional-970000 --alsologtostderr
functional_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe -p functional-970000 image load --daemon kicbase/echo-server:functional-970000 --alsologtostderr: (7.2807322s)
functional_test.go:468: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-970000 image ls
functional_test.go:468: (dbg) Done: out/minikube-windows-amd64.exe -p functional-970000 image ls: (6.6643246s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (14.75s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (6.80s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:397: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-970000 image save kicbase/echo-server:functional-970000 C:\jenkins\workspace\Hyper-V_Windows_integration\echo-server-save.tar --alsologtostderr
functional_test.go:397: (dbg) Done: out/minikube-windows-amd64.exe -p functional-970000 image save kicbase/echo-server:functional-970000 C:\jenkins\workspace\Hyper-V_Windows_integration\echo-server-save.tar --alsologtostderr: (6.796892s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (6.80s)

TestFunctional/parallel/ImageCommands/ImageRemove (13.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:409: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-970000 image rm kicbase/echo-server:functional-970000 --alsologtostderr
functional_test.go:409: (dbg) Done: out/minikube-windows-amd64.exe -p functional-970000 image rm kicbase/echo-server:functional-970000 --alsologtostderr: (6.5577079s)
functional_test.go:468: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-970000 image ls
functional_test.go:468: (dbg) Done: out/minikube-windows-amd64.exe -p functional-970000 image ls: (6.643869s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (13.20s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (13.56s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:426: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-970000 image load C:\jenkins\workspace\Hyper-V_Windows_integration\echo-server-save.tar --alsologtostderr
functional_test.go:426: (dbg) Done: out/minikube-windows-amd64.exe -p functional-970000 image load C:\jenkins\workspace\Hyper-V_Windows_integration\echo-server-save.tar --alsologtostderr: (6.9597679s)
functional_test.go:468: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-970000 image ls
functional_test.go:468: (dbg) Done: out/minikube-windows-amd64.exe -p functional-970000 image ls: (6.6027195s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (13.56s)
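
ImageSaveToFile, ImageRemove, and ImageLoadFromFile above together form a round trip: export the image to a host-side tar, delete it from the cluster's runtime, then re-import it from the tar. A minimal sketch of that sequence with the same CLI verbs and paths from the log; the run helper is illustrative:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // run invokes the build's minikube binary and reports any failure.
    func run(args ...string) {
        cmd := exec.Command("out/minikube-windows-amd64.exe", args...)
        if out, err := cmd.CombinedOutput(); err != nil {
            fmt.Printf("%v failed: %v\n%s", args, err, out)
        }
    }

    func main() {
        tar := `C:\jenkins\workspace\Hyper-V_Windows_integration\echo-server-save.tar`
        run("-p", "functional-970000", "image", "save", "kicbase/echo-server:functional-970000", tar)
        run("-p", "functional-970000", "image", "rm", "kicbase/echo-server:functional-970000")
        run("-p", "functional-970000", "image", "load", tar)
        run("-p", "functional-970000", "image", "ls") // verify the tag is back
    }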

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (7.12s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:436: (dbg) Run:  docker rmi kicbase/echo-server:functional-970000
functional_test.go:441: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-970000 image save --daemon kicbase/echo-server:functional-970000 --alsologtostderr
functional_test.go:441: (dbg) Done: out/minikube-windows-amd64.exe -p functional-970000 image save --daemon kicbase/echo-server:functional-970000 --alsologtostderr: (6.9510898s)
functional_test.go:449: (dbg) Run:  docker image inspect kicbase/echo-server:functional-970000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (7.12s)

TestFunctional/delete_echo-server_images (0.18s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-970000
--- PASS: TestFunctional/delete_echo-server_images (0.18s)

TestFunctional/delete_my-image_image (0.08s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:215: (dbg) Run:  docker rmi -f localhost/my-image:functional-970000
--- PASS: TestFunctional/delete_my-image_image (0.08s)

TestFunctional/delete_minikube_cached_images (0.07s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:223: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-970000
--- PASS: TestFunctional/delete_minikube_cached_images (0.07s)

TestMultiControlPlane/serial/StartCluster (673.32s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p ha-335100 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperv
E0210 10:58:55.630701   11764 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-550800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0210 10:59:42.390752   11764 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-970000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0210 10:59:42.398753   11764 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-970000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0210 10:59:42.411745   11764 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-970000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0210 10:59:42.434755   11764 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-970000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0210 10:59:42.477805   11764 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-970000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0210 10:59:42.559523   11764 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-970000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0210 10:59:42.721590   11764 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-970000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0210 10:59:43.043965   11764 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-970000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0210 10:59:43.686043   11764 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-970000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0210 10:59:44.968315   11764 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-970000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0210 10:59:47.531301   11764 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-970000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0210 10:59:52.653875   11764 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-970000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0210 11:00:02.896868   11764 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-970000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0210 11:00:23.378587   11764 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-970000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0210 11:01:04.341421   11764 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-970000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0210 11:02:26.265673   11764 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-970000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0210 11:03:55.634027   11764 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-550800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0210 11:04:42.393625   11764 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-970000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0210 11:05:10.109605   11764 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-970000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-windows-amd64.exe start -p ha-335100 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperv: (10m39.3503648s)
ha_test.go:107: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-335100 status -v=7 --alsologtostderr
ha_test.go:107: (dbg) Done: out/minikube-windows-amd64.exe -p ha-335100 status -v=7 --alsologtostderr: (33.9642533s)
--- PASS: TestMultiControlPlane/serial/StartCluster (673.32s)
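
The --ha start above provisions three control-plane nodes (ha-335100, -m02, and -m03, as the CopyFile steps below confirm); AddWorkerNode later appends the worker -m04. A minimal sketch of the start-then-status pair the test runs, with flags copied from the log:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        start := exec.Command("out/minikube-windows-amd64.exe", "start",
            "-p", "ha-335100", "--wait=true", "--memory=2200", "--ha",
            "-v=7", "--alsologtostderr", "--driver=hyperv")
        if out, err := start.CombinedOutput(); err != nil {
            fmt.Printf("start failed: %v\n%s", err, out)
            return
        }
        status := exec.Command("out/minikube-windows-amd64.exe",
            "-p", "ha-335100", "status", "-v=7", "--alsologtostderr")
        out, _ := status.CombinedOutput()
        fmt.Printf("%s", out) // per-node host/kubelet/apiserver state
    }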

TestMultiControlPlane/serial/DeployApp (14.2s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-335100 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-335100 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-335100 -- rollout status deployment/busybox: (5.0273344s)
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-335100 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-335100 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-335100 -- exec busybox-58667487b6-5px7z -- nslookup kubernetes.io
ha_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-335100 -- exec busybox-58667487b6-5px7z -- nslookup kubernetes.io: (1.9599247s)
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-335100 -- exec busybox-58667487b6-r8blr -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-335100 -- exec busybox-58667487b6-vq9s4 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-335100 -- exec busybox-58667487b6-vq9s4 -- nslookup kubernetes.io: (1.6702987s)
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-335100 -- exec busybox-58667487b6-5px7z -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-335100 -- exec busybox-58667487b6-r8blr -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-335100 -- exec busybox-58667487b6-vq9s4 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-335100 -- exec busybox-58667487b6-5px7z -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-335100 -- exec busybox-58667487b6-r8blr -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-335100 -- exec busybox-58667487b6-vq9s4 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (14.20s)
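
DeployApp validates in-cluster DNS by exec-ing nslookup in every busybox replica against three names of increasing specificity. A minimal sketch of that loop; the pod names are the ones from this run (in practice they come from the jsonpath query shown above):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        pods := []string{ // from the log; each run generates new suffixes
            "busybox-58667487b6-5px7z",
            "busybox-58667487b6-r8blr",
            "busybox-58667487b6-vq9s4",
        }
        names := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
        for _, pod := range pods {
            for _, name := range names {
                cmd := exec.Command("out/minikube-windows-amd64.exe",
                    "kubectl", "-p", "ha-335100", "--", "exec", pod, "--", "nslookup", name)
                if out, err := cmd.CombinedOutput(); err != nil {
                    fmt.Printf("%s: nslookup %s failed: %v\n%s", pod, name, err, out)
                }
            }
        }
    }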

TestMultiControlPlane/serial/AddWorkerNode (242.56s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe node add -p ha-335100 -v=7 --alsologtostderr
E0210 11:09:42.397995   11764 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-970000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0210 11:11:58.714684   11764 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-550800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe node add -p ha-335100 -v=7 --alsologtostderr: (3m17.038037s)
ha_test.go:234: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-335100 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-windows-amd64.exe -p ha-335100 status -v=7 --alsologtostderr: (45.5234533s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (242.56s)

TestMultiControlPlane/serial/NodeLabels (0.17s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-335100 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.17s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (44.92s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
E0210 11:13:55.640620   11764 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-550800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
ha_test.go:281: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (44.9214293s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (44.92s)

TestMultiControlPlane/serial/CopyFile (583.00s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-335100 status --output json -v=7 --alsologtostderr
E0210 11:14:42.401844   11764 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-970000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
ha_test.go:328: (dbg) Done: out/minikube-windows-amd64.exe -p ha-335100 status --output json -v=7 --alsologtostderr: (44.2630753s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-335100 cp testdata\cp-test.txt ha-335100:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-335100 cp testdata\cp-test.txt ha-335100:/home/docker/cp-test.txt: (8.9572626s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-335100 ssh -n ha-335100 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-335100 ssh -n ha-335100 "sudo cat /home/docker/cp-test.txt": (8.9001655s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-335100 cp ha-335100:/home/docker/cp-test.txt C:\Users\jenkins.minikube5\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile3993587717\001\cp-test_ha-335100.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-335100 cp ha-335100:/home/docker/cp-test.txt C:\Users\jenkins.minikube5\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile3993587717\001\cp-test_ha-335100.txt: (8.7894893s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-335100 ssh -n ha-335100 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-335100 ssh -n ha-335100 "sudo cat /home/docker/cp-test.txt": (8.8357841s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-335100 cp ha-335100:/home/docker/cp-test.txt ha-335100-m02:/home/docker/cp-test_ha-335100_ha-335100-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-335100 cp ha-335100:/home/docker/cp-test.txt ha-335100-m02:/home/docker/cp-test_ha-335100_ha-335100-m02.txt: (15.669796s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-335100 ssh -n ha-335100 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-335100 ssh -n ha-335100 "sudo cat /home/docker/cp-test.txt": (8.8764466s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-335100 ssh -n ha-335100-m02 "sudo cat /home/docker/cp-test_ha-335100_ha-335100-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-335100 ssh -n ha-335100-m02 "sudo cat /home/docker/cp-test_ha-335100_ha-335100-m02.txt": (8.8043282s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-335100 cp ha-335100:/home/docker/cp-test.txt ha-335100-m03:/home/docker/cp-test_ha-335100_ha-335100-m03.txt
E0210 11:16:05.479271   11764 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-970000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-335100 cp ha-335100:/home/docker/cp-test.txt ha-335100-m03:/home/docker/cp-test_ha-335100_ha-335100-m03.txt: (15.6125386s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-335100 ssh -n ha-335100 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-335100 ssh -n ha-335100 "sudo cat /home/docker/cp-test.txt": (8.9205199s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-335100 ssh -n ha-335100-m03 "sudo cat /home/docker/cp-test_ha-335100_ha-335100-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-335100 ssh -n ha-335100-m03 "sudo cat /home/docker/cp-test_ha-335100_ha-335100-m03.txt": (8.8726174s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-335100 cp ha-335100:/home/docker/cp-test.txt ha-335100-m04:/home/docker/cp-test_ha-335100_ha-335100-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-335100 cp ha-335100:/home/docker/cp-test.txt ha-335100-m04:/home/docker/cp-test_ha-335100_ha-335100-m04.txt: (15.687835s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-335100 ssh -n ha-335100 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-335100 ssh -n ha-335100 "sudo cat /home/docker/cp-test.txt": (8.9321961s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-335100 ssh -n ha-335100-m04 "sudo cat /home/docker/cp-test_ha-335100_ha-335100-m04.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-335100 ssh -n ha-335100-m04 "sudo cat /home/docker/cp-test_ha-335100_ha-335100-m04.txt": (8.8599747s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-335100 cp testdata\cp-test.txt ha-335100-m02:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-335100 cp testdata\cp-test.txt ha-335100-m02:/home/docker/cp-test.txt: (8.9500489s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-335100 ssh -n ha-335100-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-335100 ssh -n ha-335100-m02 "sudo cat /home/docker/cp-test.txt": (8.9167348s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-335100 cp ha-335100-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube5\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile3993587717\001\cp-test_ha-335100-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-335100 cp ha-335100-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube5\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile3993587717\001\cp-test_ha-335100-m02.txt: (8.8223366s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-335100 ssh -n ha-335100-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-335100 ssh -n ha-335100-m02 "sudo cat /home/docker/cp-test.txt": (8.7080508s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-335100 cp ha-335100-m02:/home/docker/cp-test.txt ha-335100:/home/docker/cp-test_ha-335100-m02_ha-335100.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-335100 cp ha-335100-m02:/home/docker/cp-test.txt ha-335100:/home/docker/cp-test_ha-335100-m02_ha-335100.txt: (15.2490468s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-335100 ssh -n ha-335100-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-335100 ssh -n ha-335100-m02 "sudo cat /home/docker/cp-test.txt": (8.6208871s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-335100 ssh -n ha-335100 "sudo cat /home/docker/cp-test_ha-335100-m02_ha-335100.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-335100 ssh -n ha-335100 "sudo cat /home/docker/cp-test_ha-335100-m02_ha-335100.txt": (8.6624302s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-335100 cp ha-335100-m02:/home/docker/cp-test.txt ha-335100-m03:/home/docker/cp-test_ha-335100-m02_ha-335100-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-335100 cp ha-335100-m02:/home/docker/cp-test.txt ha-335100-m03:/home/docker/cp-test_ha-335100-m02_ha-335100-m03.txt: (15.2307996s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-335100 ssh -n ha-335100-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-335100 ssh -n ha-335100-m02 "sudo cat /home/docker/cp-test.txt": (8.7261045s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-335100 ssh -n ha-335100-m03 "sudo cat /home/docker/cp-test_ha-335100-m02_ha-335100-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-335100 ssh -n ha-335100-m03 "sudo cat /home/docker/cp-test_ha-335100-m02_ha-335100-m03.txt": (8.6737356s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-335100 cp ha-335100-m02:/home/docker/cp-test.txt ha-335100-m04:/home/docker/cp-test_ha-335100-m02_ha-335100-m04.txt
E0210 11:18:55.643642   11764 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-550800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-335100 cp ha-335100-m02:/home/docker/cp-test.txt ha-335100-m04:/home/docker/cp-test_ha-335100-m02_ha-335100-m04.txt: (15.0480531s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-335100 ssh -n ha-335100-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-335100 ssh -n ha-335100-m02 "sudo cat /home/docker/cp-test.txt": (8.6631733s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-335100 ssh -n ha-335100-m04 "sudo cat /home/docker/cp-test_ha-335100-m02_ha-335100-m04.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-335100 ssh -n ha-335100-m04 "sudo cat /home/docker/cp-test_ha-335100-m02_ha-335100-m04.txt": (8.6722092s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-335100 cp testdata\cp-test.txt ha-335100-m03:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-335100 cp testdata\cp-test.txt ha-335100-m03:/home/docker/cp-test.txt: (8.8084161s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-335100 ssh -n ha-335100-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-335100 ssh -n ha-335100-m03 "sudo cat /home/docker/cp-test.txt": (8.705798s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-335100 cp ha-335100-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube5\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile3993587717\001\cp-test_ha-335100-m03.txt
E0210 11:19:42.405218   11764 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-970000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-335100 cp ha-335100-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube5\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile3993587717\001\cp-test_ha-335100-m03.txt: (8.61071s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-335100 ssh -n ha-335100-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-335100 ssh -n ha-335100-m03 "sudo cat /home/docker/cp-test.txt": (8.7554396s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-335100 cp ha-335100-m03:/home/docker/cp-test.txt ha-335100:/home/docker/cp-test_ha-335100-m03_ha-335100.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-335100 cp ha-335100-m03:/home/docker/cp-test.txt ha-335100:/home/docker/cp-test_ha-335100-m03_ha-335100.txt: (15.2390058s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-335100 ssh -n ha-335100-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-335100 ssh -n ha-335100-m03 "sudo cat /home/docker/cp-test.txt": (8.730301s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-335100 ssh -n ha-335100 "sudo cat /home/docker/cp-test_ha-335100-m03_ha-335100.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-335100 ssh -n ha-335100 "sudo cat /home/docker/cp-test_ha-335100-m03_ha-335100.txt": (8.690285s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-335100 cp ha-335100-m03:/home/docker/cp-test.txt ha-335100-m02:/home/docker/cp-test_ha-335100-m03_ha-335100-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-335100 cp ha-335100-m03:/home/docker/cp-test.txt ha-335100-m02:/home/docker/cp-test_ha-335100-m03_ha-335100-m02.txt: (15.3965465s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-335100 ssh -n ha-335100-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-335100 ssh -n ha-335100-m03 "sudo cat /home/docker/cp-test.txt": (8.8069496s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-335100 ssh -n ha-335100-m02 "sudo cat /home/docker/cp-test_ha-335100-m03_ha-335100-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-335100 ssh -n ha-335100-m02 "sudo cat /home/docker/cp-test_ha-335100-m03_ha-335100-m02.txt": (9.0353135s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-335100 cp ha-335100-m03:/home/docker/cp-test.txt ha-335100-m04:/home/docker/cp-test_ha-335100-m03_ha-335100-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-335100 cp ha-335100-m03:/home/docker/cp-test.txt ha-335100-m04:/home/docker/cp-test_ha-335100-m03_ha-335100-m04.txt: (15.6780538s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-335100 ssh -n ha-335100-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-335100 ssh -n ha-335100-m03 "sudo cat /home/docker/cp-test.txt": (8.9821855s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-335100 ssh -n ha-335100-m04 "sudo cat /home/docker/cp-test_ha-335100-m03_ha-335100-m04.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-335100 ssh -n ha-335100-m04 "sudo cat /home/docker/cp-test_ha-335100-m03_ha-335100-m04.txt": (9.5100704s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-335100 cp testdata\cp-test.txt ha-335100-m04:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-335100 cp testdata\cp-test.txt ha-335100-m04:/home/docker/cp-test.txt: (9.1364682s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-335100 ssh -n ha-335100-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-335100 ssh -n ha-335100-m04 "sudo cat /home/docker/cp-test.txt": (8.8515312s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-335100 cp ha-335100-m04:/home/docker/cp-test.txt C:\Users\jenkins.minikube5\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile3993587717\001\cp-test_ha-335100-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-335100 cp ha-335100-m04:/home/docker/cp-test.txt C:\Users\jenkins.minikube5\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile3993587717\001\cp-test_ha-335100-m04.txt: (8.8427789s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-335100 ssh -n ha-335100-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-335100 ssh -n ha-335100-m04 "sudo cat /home/docker/cp-test.txt": (8.7601209s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-335100 cp ha-335100-m04:/home/docker/cp-test.txt ha-335100:/home/docker/cp-test_ha-335100-m04_ha-335100.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-335100 cp ha-335100-m04:/home/docker/cp-test.txt ha-335100:/home/docker/cp-test_ha-335100-m04_ha-335100.txt: (15.4956859s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-335100 ssh -n ha-335100-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-335100 ssh -n ha-335100-m04 "sudo cat /home/docker/cp-test.txt": (8.6610334s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-335100 ssh -n ha-335100 "sudo cat /home/docker/cp-test_ha-335100-m04_ha-335100.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-335100 ssh -n ha-335100 "sudo cat /home/docker/cp-test_ha-335100-m04_ha-335100.txt": (8.8917269s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-335100 cp ha-335100-m04:/home/docker/cp-test.txt ha-335100-m02:/home/docker/cp-test_ha-335100-m04_ha-335100-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-335100 cp ha-335100-m04:/home/docker/cp-test.txt ha-335100-m02:/home/docker/cp-test_ha-335100-m04_ha-335100-m02.txt: (15.5318546s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-335100 ssh -n ha-335100-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-335100 ssh -n ha-335100-m04 "sudo cat /home/docker/cp-test.txt": (8.7561861s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-335100 ssh -n ha-335100-m02 "sudo cat /home/docker/cp-test_ha-335100-m04_ha-335100-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-335100 ssh -n ha-335100-m02 "sudo cat /home/docker/cp-test_ha-335100-m04_ha-335100-m02.txt": (8.8599724s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-335100 cp ha-335100-m04:/home/docker/cp-test.txt ha-335100-m03:/home/docker/cp-test_ha-335100-m04_ha-335100-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-335100 cp ha-335100-m04:/home/docker/cp-test.txt ha-335100-m03:/home/docker/cp-test_ha-335100-m04_ha-335100-m03.txt: (15.5045098s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-335100 ssh -n ha-335100-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-335100 ssh -n ha-335100-m04 "sudo cat /home/docker/cp-test.txt": (8.7748702s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-335100 ssh -n ha-335100-m03 "sudo cat /home/docker/cp-test_ha-335100-m04_ha-335100-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-335100 ssh -n ha-335100-m03 "sudo cat /home/docker/cp-test_ha-335100-m04_ha-335100-m03.txt": (9.0343269s)
--- PASS: TestMultiControlPlane/serial/CopyFile (583.00s)
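
The copy-and-verify loop above repeats the same three commands for every source/destination pair: minikube cp to place the file, then minikube ssh on both ends to cat it back. A minimal Go sketch of one such round trip, assuming minikube is on PATH and reusing the ha-335100 profile and paths from the log:

// cpcheck.go - sketch of the cp/ssh round trip exercised above.
package main

import (
	"bytes"
	"fmt"
	"log"
	"os"
	"os/exec"
)

// run invokes one minikube subcommand and returns its stdout.
func run(args ...string) []byte {
	out, err := exec.Command("minikube", args...).Output()
	if err != nil {
		log.Fatalf("minikube %v: %v", args, err)
	}
	return out
}

func main() {
	local, err := os.ReadFile("testdata/cp-test.txt")
	if err != nil {
		log.Fatal(err)
	}
	// Copy the file into the node, then read it back over ssh.
	run("-p", "ha-335100", "cp", "testdata/cp-test.txt", "ha-335100:/home/docker/cp-test.txt")
	remote := run("-p", "ha-335100", "ssh", "-n", "ha-335100", "sudo cat /home/docker/cp-test.txt")
	if !bytes.Equal(bytes.TrimSpace(local), bytes.TrimSpace(remote)) {
		log.Fatal("round-tripped contents differ")
	}
	fmt.Println("cp/ssh round trip ok")
}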

TestMultiControlPlane/serial/StopSecondaryNode (71.46s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-335100 node stop m02 -v=7 --alsologtostderr
E0210 11:23:55.648065   11764 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-550800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
ha_test.go:365: (dbg) Done: out/minikube-windows-amd64.exe -p ha-335100 node stop m02 -v=7 --alsologtostderr: (35.1427798s)
ha_test.go:371: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-335100 status -v=7 --alsologtostderr
E0210 11:24:42.407865   11764 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-970000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
ha_test.go:371: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-335100 status -v=7 --alsologtostderr: exit status 7 (36.3139455s)

-- stdout --
	ha-335100
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-335100-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-335100-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-335100-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0210 11:24:28.850312   11244 out.go:345] Setting OutFile to fd 1460 ...
	I0210 11:24:28.911767   11244 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 11:24:28.911767   11244 out.go:358] Setting ErrFile to fd 1544...
	I0210 11:24:28.911767   11244 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 11:24:28.926913   11244 out.go:352] Setting JSON to false
	I0210 11:24:28.926913   11244 mustload.go:65] Loading cluster: ha-335100
	I0210 11:24:28.926913   11244 notify.go:220] Checking for updates...
	I0210 11:24:28.928037   11244 config.go:182] Loaded profile config "ha-335100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0210 11:24:28.928037   11244 status.go:174] checking status of ha-335100 ...
	I0210 11:24:28.928886   11244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100 ).state
	I0210 11:24:31.072078   11244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:24:31.072190   11244 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:24:31.072190   11244 status.go:371] ha-335100 host status = "Running" (err=<nil>)
	I0210 11:24:31.072190   11244 host.go:66] Checking if "ha-335100" exists ...
	I0210 11:24:31.073078   11244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100 ).state
	I0210 11:24:33.210986   11244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:24:33.211127   11244 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:24:33.211127   11244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100 ).networkadapters[0]).ipaddresses[0]
	I0210 11:24:35.715412   11244 main.go:141] libmachine: [stdout =====>] : 172.29.136.99
	
	I0210 11:24:35.716350   11244 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:24:35.716350   11244 host.go:66] Checking if "ha-335100" exists ...
	I0210 11:24:35.725189   11244 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0210 11:24:35.725189   11244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100 ).state
	I0210 11:24:37.785597   11244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:24:37.785597   11244 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:24:37.786315   11244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100 ).networkadapters[0]).ipaddresses[0]
	I0210 11:24:40.322802   11244 main.go:141] libmachine: [stdout =====>] : 172.29.136.99
	
	I0210 11:24:40.322802   11244 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:24:40.322802   11244 sshutil.go:53] new ssh client: &{IP:172.29.136.99 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-335100\id_rsa Username:docker}
	I0210 11:24:40.432754   11244 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.7075115s)
	I0210 11:24:40.441734   11244 ssh_runner.go:195] Run: systemctl --version
	I0210 11:24:40.460703   11244 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0210 11:24:40.487086   11244 kubeconfig.go:125] found "ha-335100" server: "https://172.29.143.254:8443"
	I0210 11:24:40.487086   11244 api_server.go:166] Checking apiserver status ...
	I0210 11:24:40.497570   11244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:24:40.530556   11244 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2187/cgroup
	W0210 11:24:40.549268   11244 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2187/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0210 11:24:40.557259   11244 ssh_runner.go:195] Run: ls
	I0210 11:24:40.564229   11244 api_server.go:253] Checking apiserver healthz at https://172.29.143.254:8443/healthz ...
	I0210 11:24:40.571939   11244 api_server.go:279] https://172.29.143.254:8443/healthz returned 200:
	ok
	I0210 11:24:40.571939   11244 status.go:463] ha-335100 apiserver status = Running (err=<nil>)
	I0210 11:24:40.571939   11244 status.go:176] ha-335100 status: &{Name:ha-335100 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0210 11:24:40.571939   11244 status.go:174] checking status of ha-335100-m02 ...
	I0210 11:24:40.572559   11244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100-m02 ).state
	I0210 11:24:42.562782   11244 main.go:141] libmachine: [stdout =====>] : Off
	
	I0210 11:24:42.563591   11244 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:24:42.563653   11244 status.go:371] ha-335100-m02 host status = "Stopped" (err=<nil>)
	I0210 11:24:42.563653   11244 status.go:384] host is not running, skipping remaining checks
	I0210 11:24:42.563653   11244 status.go:176] ha-335100-m02 status: &{Name:ha-335100-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0210 11:24:42.563653   11244 status.go:174] checking status of ha-335100-m03 ...
	I0210 11:24:42.564404   11244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100-m03 ).state
	I0210 11:24:44.622608   11244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:24:44.622608   11244 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:24:44.622692   11244 status.go:371] ha-335100-m03 host status = "Running" (err=<nil>)
	I0210 11:24:44.622692   11244 host.go:66] Checking if "ha-335100-m03" exists ...
	I0210 11:24:44.622860   11244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100-m03 ).state
	I0210 11:24:46.646804   11244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:24:46.647685   11244 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:24:46.647685   11244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100-m03 ).networkadapters[0]).ipaddresses[0]
	I0210 11:24:49.130582   11244 main.go:141] libmachine: [stdout =====>] : 172.29.143.243
	
	I0210 11:24:49.130582   11244 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:24:49.130582   11244 host.go:66] Checking if "ha-335100-m03" exists ...
	I0210 11:24:49.138870   11244 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0210 11:24:49.138870   11244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100-m03 ).state
	I0210 11:24:51.193647   11244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:24:51.193647   11244 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:24:51.193647   11244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100-m03 ).networkadapters[0]).ipaddresses[0]
	I0210 11:24:53.642077   11244 main.go:141] libmachine: [stdout =====>] : 172.29.143.243
	
	I0210 11:24:53.642077   11244 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:24:53.643385   11244 sshutil.go:53] new ssh client: &{IP:172.29.143.243 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-335100-m03\id_rsa Username:docker}
	I0210 11:24:53.756200   11244 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.6172049s)
	I0210 11:24:53.764419   11244 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0210 11:24:53.792517   11244 kubeconfig.go:125] found "ha-335100" server: "https://172.29.143.254:8443"
	I0210 11:24:53.792605   11244 api_server.go:166] Checking apiserver status ...
	I0210 11:24:53.800685   11244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 11:24:53.833733   11244 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2191/cgroup
	W0210 11:24:53.854029   11244 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2191/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0210 11:24:53.862025   11244 ssh_runner.go:195] Run: ls
	I0210 11:24:53.868731   11244 api_server.go:253] Checking apiserver healthz at https://172.29.143.254:8443/healthz ...
	I0210 11:24:53.881067   11244 api_server.go:279] https://172.29.143.254:8443/healthz returned 200:
	ok
	I0210 11:24:53.881422   11244 status.go:463] ha-335100-m03 apiserver status = Running (err=<nil>)
	I0210 11:24:53.881422   11244 status.go:176] ha-335100-m03 status: &{Name:ha-335100-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0210 11:24:53.881422   11244 status.go:174] checking status of ha-335100-m04 ...
	I0210 11:24:53.882018   11244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100-m04 ).state
	I0210 11:24:55.931296   11244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:24:55.931494   11244 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:24:55.931587   11244 status.go:371] ha-335100-m04 host status = "Running" (err=<nil>)
	I0210 11:24:55.931587   11244 host.go:66] Checking if "ha-335100-m04" exists ...
	I0210 11:24:55.932266   11244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100-m04 ).state
	I0210 11:24:57.991096   11244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:24:57.991096   11244 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:24:57.992003   11244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100-m04 ).networkadapters[0]).ipaddresses[0]
	I0210 11:25:00.427157   11244 main.go:141] libmachine: [stdout =====>] : 172.29.135.124
	
	I0210 11:25:00.428103   11244 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:25:00.428290   11244 host.go:66] Checking if "ha-335100-m04" exists ...
	I0210 11:25:00.437036   11244 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0210 11:25:00.437036   11244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-335100-m04 ).state
	I0210 11:25:02.463027   11244 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 11:25:02.463027   11244 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:25:02.463102   11244 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-335100-m04 ).networkadapters[0]).ipaddresses[0]
	I0210 11:25:04.856665   11244 main.go:141] libmachine: [stdout =====>] : 172.29.135.124
	
	I0210 11:25:04.856665   11244 main.go:141] libmachine: [stderr =====>] : 
	I0210 11:25:04.857759   11244 sshutil.go:53] new ssh client: &{IP:172.29.135.124 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-335100-m04\id_rsa Username:docker}
	I0210 11:25:04.958609   11244 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.5215223s)
	I0210 11:25:04.967984   11244 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0210 11:25:04.991105   11244 status.go:176] ha-335100-m04 status: &{Name:ha-335100-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (71.46s)
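
The stderr trace above shows how the status check degrades: each VM's state is read through Hyper-V PowerShell, a stopped host skips the remaining checks, and every running control plane is probed at the shared endpoint https://172.29.143.254:8443/healthz, which must answer 200 "ok". A minimal Go sketch of that healthz probe, using the VIP from the log; certificate verification is skipped here purely for illustration (how minikube authenticates the endpoint is not shown in this trace):

// healthz.go - sketch of the apiserver health probe from the trace above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"log"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Illustration only: the apiserver presents a cluster-internal
			// certificate, so this bare probe skips verification.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://172.29.143.254:8443/healthz")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d: %s\n", resp.StatusCode, body) // expect: 200: ok
}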

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (35.03s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:392: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (35.0282649s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (35.03s)

TestImageBuild/serial/Setup (183.65s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-windows-amd64.exe start -p image-662600 --driver=hyperv
E0210 11:32:45.494085   11764 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-970000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
image_test.go:69: (dbg) Done: out/minikube-windows-amd64.exe start -p image-662600 --driver=hyperv: (3m3.6517328s)
--- PASS: TestImageBuild/serial/Setup (183.65s)

TestImageBuild/serial/NormalBuild (10.03s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-662600
E0210 11:33:55.654273   11764 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-550800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
image_test.go:78: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-662600: (10.0284907s)
--- PASS: TestImageBuild/serial/NormalBuild (10.03s)

TestImageBuild/serial/BuildWithBuildArg (8.46s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-662600
image_test.go:99: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-662600: (8.4595737s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (8.46s)

TestImageBuild/serial/BuildWithDockerIgnore (7.73s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-662600
image_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-662600: (7.7313256s)
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (7.73s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (7.82s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-662600
image_test.go:88: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-662600: (7.8163131s)
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (7.82s)

TestJSONOutput/start/Command (194.07s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-165200 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperv
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe start -p json-output-165200 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperv: (3m14.0726813s)
--- PASS: TestJSONOutput/start/Command (194.07s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (7.38s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe pause -p json-output-165200 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe pause -p json-output-165200 --output=json --user=testUser: (7.3815576s)
--- PASS: TestJSONOutput/pause/Command (7.38s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (7.44s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p json-output-165200 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe unpause -p json-output-165200 --output=json --user=testUser: (7.4386541s)
--- PASS: TestJSONOutput/unpause/Command (7.44s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (33.42s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe stop -p json-output-165200 --output=json --user=testUser
E0210 11:38:55.658406   11764 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-550800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe stop -p json-output-165200 --output=json --user=testUser: (33.4207121s)
--- PASS: TestJSONOutput/stop/Command (33.42s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.94s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-error-344400 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p json-output-error-344400 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (263.6352ms)

-- stdout --
	{"specversion":"1.0","id":"ec57c529-ced7-467e-9f30-56bc017680a5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-344400] minikube v1.35.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5440 Build 19045.5440","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"3cbf5e1d-4128-41b8-b126-5087dd2ad919","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube5\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"0103877f-6874-46c5-94a1-62dc3e7b42d6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"fb4fd495-bd69-4aee-b964-56799a98a7f0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"d29269e2-eebd-44a4-b919-a692ae849397","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20385"}}
	{"specversion":"1.0","id":"4aa65650-c792-46b6-85e5-e0f11ce668d9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"8299ad89-642b-42a0-8d34-9b1fedf4929b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on windows/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-344400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p json-output-error-344400
--- PASS: TestErrorJSONOutput (0.94s)
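
Every stdout line above is a self-contained CloudEvents-style JSON object carrying specversion, id, source, type, datacontenttype, and data fields, including the final io.k8s.sigs.minikube.error event with its exitcode and DRV_UNSUPPORTED_OS name. A short Go sketch that decodes such a stream (one object per line on stdin), using only the field names visible in the dump:

// events.go - sketch decoding minikube's --output=json event stream.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"log"
	"os"
)

type event struct {
	SpecVersion     string            `json:"specversion"`
	ID              string            `json:"id"`
	Source          string            `json:"source"`
	Type            string            `json:"type"`
	DataContentType string            `json:"datacontenttype"`
	Data            map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var e event
		if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
			log.Fatal(err)
		}
		// Error events (type io.k8s.sigs.minikube.error) carry exitcode,
		// name and message keys, as in the dump above.
		fmt.Printf("%-40s %s\n", e.Type, e.Data["message"])
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}
}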

TestMainNoArgs (0.2s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-windows-amd64.exe
--- PASS: TestMainNoArgs (0.20s)

TestMinikubeProfile (502.54s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p first-677800 --driver=hyperv
E0210 11:39:42.417561   11764 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-970000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p first-677800 --driver=hyperv: (3m6.9397638s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p second-677800 --driver=hyperv
E0210 11:43:55.661365   11764 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-550800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0210 11:44:42.421575   11764 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-970000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0210 11:45:18.742125   11764 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-550800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p second-677800 --driver=hyperv: (3m5.4497918s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile first-677800
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (21.9681274s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile second-677800
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (22.0707195s)
helpers_test.go:175: Cleaning up "second-677800" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p second-677800
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p second-677800: (44.2485177s)
helpers_test.go:175: Cleaning up "first-677800" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p first-677800
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p first-677800: (41.3040709s)
--- PASS: TestMinikubeProfile (502.54s)
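
Both listings above use profile list -ojson, so the result is machine-readable. The schema itself is not shown in this report, so the sketch below decodes the top level generically rather than assuming field names (it does assume the top level is a JSON object, which may need adjusting):

// profiles.go - sketch of consuming minikube profile list -ojson.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	out, err := exec.Command("minikube", "profile", "list", "-ojson").Output()
	if err != nil {
		log.Fatal(err)
	}
	var top map[string]json.RawMessage
	if err := json.Unmarshal(out, &top); err != nil {
		log.Fatal(err)
	}
	for key, raw := range top {
		fmt.Printf("%s: %d bytes\n", key, len(raw))
	}
}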

TestMountStart/serial/StartWithMountFirst (139.33s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-1-183500 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperv
E0210 11:48:55.663730   11764 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-550800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0210 11:49:25.507569   11764 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-970000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0210 11:49:42.424356   11764 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-970000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-1-183500 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperv: (2m18.3313854s)
--- PASS: TestMountStart/serial/StartWithMountFirst (139.33s)
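
The start flags pin the mount's ownership (uid/gid 0), message size (msize 6543) and server port (46464 here, 46465 for the second profile below), so two concurrent mounts do not collide. The VerifyMount steps that follow simply list /minikube-host over ssh; the same check as a minimal Go sketch, with the profile name from the log:

// mountcheck.go - sketch of the VerifyMount* check: list the host
// mount point inside the VM over ssh.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	out, err := exec.Command("minikube", "-p", "mount-start-1-183500",
		"ssh", "--", "ls", "/minikube-host").CombinedOutput()
	if err != nil {
		log.Fatalf("mount not visible: %v\n%s", err, out)
	}
	fmt.Printf("host mount contents:\n%s", out)
}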

TestMountStart/serial/VerifyMountFirst (8.52s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-1-183500 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-1-183500 ssh -- ls /minikube-host: (8.5166916s)
--- PASS: TestMountStart/serial/VerifyMountFirst (8.52s)

TestMountStart/serial/StartWithMountSecond (139.11s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-183500 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperv
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-183500 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperv: (2m18.1130334s)
--- PASS: TestMountStart/serial/StartWithMountSecond (139.11s)

TestMountStart/serial/VerifyMountSecond (8.37s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-183500 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-183500 ssh -- ls /minikube-host: (8.3731872s)
--- PASS: TestMountStart/serial/VerifyMountSecond (8.37s)

TestMountStart/serial/DeleteFirst (27.98s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p mount-start-1-183500 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p mount-start-1-183500 --alsologtostderr -v=5: (27.984414s)
--- PASS: TestMountStart/serial/DeleteFirst (27.98s)

TestMountStart/serial/VerifyMountPostDelete (8.61s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-183500 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-183500 ssh -- ls /minikube-host: (8.6052099s)
--- PASS: TestMountStart/serial/VerifyMountPostDelete (8.61s)

TestMountStart/serial/Stop (27.69s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe stop -p mount-start-2-183500
mount_start_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe stop -p mount-start-2-183500: (27.6925849s)
--- PASS: TestMountStart/serial/Stop (27.69s)

TestMountStart/serial/RestartStopped (106.36s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-183500
E0210 11:53:55.667148   11764 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-550800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0210 11:54:42.428005   11764 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-970000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
mount_start_test.go:166: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-183500: (1m45.3602885s)
--- PASS: TestMountStart/serial/RestartStopped (106.36s)

TestMountStart/serial/VerifyMountPostStop (8.49s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-183500 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-183500 ssh -- ls /minikube-host: (8.4945408s)
--- PASS: TestMountStart/serial/VerifyMountPostStop (8.49s)

TestMultiNode/serial/FreshStart2Nodes (440.16s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-032400 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperv
E0210 11:58:55.670775   11764 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-550800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0210 11:59:42.431482   11764 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-970000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0210 12:01:58.755628   11764 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-550800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-032400 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperv: (6m57.761713s)
multinode_test.go:102: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-032400 status --alsologtostderr
multinode_test.go:102: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-032400 status --alsologtostderr: (22.3981556s)
--- PASS: TestMultiNode/serial/FreshStart2Nodes (440.16s)
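For reference, the two-node bring-up this entry exercises reduces to the following commands (shown with a bare minikube binary name; this run invokes out/minikube-windows-amd64.exe, and the profile name is the one from this report):

    minikube start -p multinode-032400 --wait=true --memory=2200 --nodes=2 --driver=hyperv
    minikube -p multinode-032400 status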

TestMultiNode/serial/DeployApp2Nodes (8.63s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-032400 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-032400 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-032400 -- rollout status deployment/busybox: (3.2632841s)
multinode_test.go:505: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-032400 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-032400 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-032400 -- exec busybox-58667487b6-4g8jw -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-032400 -- exec busybox-58667487b6-4g8jw -- nslookup kubernetes.io: (1.7308986s)
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-032400 -- exec busybox-58667487b6-8shfg -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-032400 -- exec busybox-58667487b6-4g8jw -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-032400 -- exec busybox-58667487b6-8shfg -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-032400 -- exec busybox-58667487b6-4g8jw -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-032400 -- exec busybox-58667487b6-8shfg -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (8.63s)
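The DNS verification above follows a simple pattern: wait for the rollout, then resolve cluster names from inside each busybox pod. Pod name suffixes are generated per run, so the one below is taken from this report:

    kubectl --context multinode-032400 rollout status deployment/busybox
    kubectl --context multinode-032400 exec busybox-58667487b6-4g8jw -- nslookup kubernetes.default.svc.cluster.local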

TestMultiNode/serial/AddNode (222.15s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-032400 -v 3 --alsologtostderr
E0210 12:04:42.434753   11764 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-970000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0210 12:06:05.521352   11764 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-970000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-windows-amd64.exe node add -p multinode-032400 -v 3 --alsologtostderr: (3m9.3324069s)
multinode_test.go:127: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-032400 status --alsologtostderr
multinode_test.go:127: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-032400 status --alsologtostderr: (32.8182571s)
--- PASS: TestMultiNode/serial/AddNode (222.15s)

TestMultiNode/serial/MultiNodeLabels (0.17s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-032400 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.17s)

TestMultiNode/serial/ProfileList (32.91s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
multinode_test.go:143: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (32.9085785s)
--- PASS: TestMultiNode/serial/ProfileList (32.91s)

TestMultiNode/serial/CopyFile (331.17s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-032400 status --output json --alsologtostderr
E0210 12:08:55.677331   11764 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-550800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
multinode_test.go:184: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-032400 status --output json --alsologtostderr: (33.1257868s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-032400 cp testdata\cp-test.txt multinode-032400:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-032400 cp testdata\cp-test.txt multinode-032400:/home/docker/cp-test.txt: (8.6489208s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-032400 ssh -n multinode-032400 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-032400 ssh -n multinode-032400 "sudo cat /home/docker/cp-test.txt": (8.5950232s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-032400 cp multinode-032400:/home/docker/cp-test.txt C:\Users\jenkins.minikube5\AppData\Local\Temp\TestMultiNodeserialCopyFile2256314567\001\cp-test_multinode-032400.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-032400 cp multinode-032400:/home/docker/cp-test.txt C:\Users\jenkins.minikube5\AppData\Local\Temp\TestMultiNodeserialCopyFile2256314567\001\cp-test_multinode-032400.txt: (8.5751828s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-032400 ssh -n multinode-032400 "sudo cat /home/docker/cp-test.txt"
E0210 12:09:42.438435   11764 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-970000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-032400 ssh -n multinode-032400 "sudo cat /home/docker/cp-test.txt": (8.5054714s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-032400 cp multinode-032400:/home/docker/cp-test.txt multinode-032400-m02:/home/docker/cp-test_multinode-032400_multinode-032400-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-032400 cp multinode-032400:/home/docker/cp-test.txt multinode-032400-m02:/home/docker/cp-test_multinode-032400_multinode-032400-m02.txt: (15.104318s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-032400 ssh -n multinode-032400 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-032400 ssh -n multinode-032400 "sudo cat /home/docker/cp-test.txt": (8.6622064s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-032400 ssh -n multinode-032400-m02 "sudo cat /home/docker/cp-test_multinode-032400_multinode-032400-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-032400 ssh -n multinode-032400-m02 "sudo cat /home/docker/cp-test_multinode-032400_multinode-032400-m02.txt": (8.6140046s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-032400 cp multinode-032400:/home/docker/cp-test.txt multinode-032400-m03:/home/docker/cp-test_multinode-032400_multinode-032400-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-032400 cp multinode-032400:/home/docker/cp-test.txt multinode-032400-m03:/home/docker/cp-test_multinode-032400_multinode-032400-m03.txt: (15.090739s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-032400 ssh -n multinode-032400 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-032400 ssh -n multinode-032400 "sudo cat /home/docker/cp-test.txt": (8.7025877s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-032400 ssh -n multinode-032400-m03 "sudo cat /home/docker/cp-test_multinode-032400_multinode-032400-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-032400 ssh -n multinode-032400-m03 "sudo cat /home/docker/cp-test_multinode-032400_multinode-032400-m03.txt": (8.6296333s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-032400 cp testdata\cp-test.txt multinode-032400-m02:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-032400 cp testdata\cp-test.txt multinode-032400-m02:/home/docker/cp-test.txt: (8.6539356s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-032400 ssh -n multinode-032400-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-032400 ssh -n multinode-032400-m02 "sudo cat /home/docker/cp-test.txt": (8.6091602s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-032400 cp multinode-032400-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube5\AppData\Local\Temp\TestMultiNodeserialCopyFile2256314567\001\cp-test_multinode-032400-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-032400 cp multinode-032400-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube5\AppData\Local\Temp\TestMultiNodeserialCopyFile2256314567\001\cp-test_multinode-032400-m02.txt: (8.6401829s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-032400 ssh -n multinode-032400-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-032400 ssh -n multinode-032400-m02 "sudo cat /home/docker/cp-test.txt": (8.7524309s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-032400 cp multinode-032400-m02:/home/docker/cp-test.txt multinode-032400:/home/docker/cp-test_multinode-032400-m02_multinode-032400.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-032400 cp multinode-032400-m02:/home/docker/cp-test.txt multinode-032400:/home/docker/cp-test_multinode-032400-m02_multinode-032400.txt: (15.1701406s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-032400 ssh -n multinode-032400-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-032400 ssh -n multinode-032400-m02 "sudo cat /home/docker/cp-test.txt": (8.6590712s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-032400 ssh -n multinode-032400 "sudo cat /home/docker/cp-test_multinode-032400-m02_multinode-032400.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-032400 ssh -n multinode-032400 "sudo cat /home/docker/cp-test_multinode-032400-m02_multinode-032400.txt": (8.6080024s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-032400 cp multinode-032400-m02:/home/docker/cp-test.txt multinode-032400-m03:/home/docker/cp-test_multinode-032400-m02_multinode-032400-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-032400 cp multinode-032400-m02:/home/docker/cp-test.txt multinode-032400-m03:/home/docker/cp-test_multinode-032400-m02_multinode-032400-m03.txt: (15.1166017s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-032400 ssh -n multinode-032400-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-032400 ssh -n multinode-032400-m02 "sudo cat /home/docker/cp-test.txt": (8.5470853s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-032400 ssh -n multinode-032400-m03 "sudo cat /home/docker/cp-test_multinode-032400-m02_multinode-032400-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-032400 ssh -n multinode-032400-m03 "sudo cat /home/docker/cp-test_multinode-032400-m02_multinode-032400-m03.txt": (8.5608024s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-032400 cp testdata\cp-test.txt multinode-032400-m03:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-032400 cp testdata\cp-test.txt multinode-032400-m03:/home/docker/cp-test.txt: (8.654154s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-032400 ssh -n multinode-032400-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-032400 ssh -n multinode-032400-m03 "sudo cat /home/docker/cp-test.txt": (8.5794466s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-032400 cp multinode-032400-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube5\AppData\Local\Temp\TestMultiNodeserialCopyFile2256314567\001\cp-test_multinode-032400-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-032400 cp multinode-032400-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube5\AppData\Local\Temp\TestMultiNodeserialCopyFile2256314567\001\cp-test_multinode-032400-m03.txt: (8.7000278s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-032400 ssh -n multinode-032400-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-032400 ssh -n multinode-032400-m03 "sudo cat /home/docker/cp-test.txt": (8.6251558s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-032400 cp multinode-032400-m03:/home/docker/cp-test.txt multinode-032400:/home/docker/cp-test_multinode-032400-m03_multinode-032400.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-032400 cp multinode-032400-m03:/home/docker/cp-test.txt multinode-032400:/home/docker/cp-test_multinode-032400-m03_multinode-032400.txt: (15.1024462s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-032400 ssh -n multinode-032400-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-032400 ssh -n multinode-032400-m03 "sudo cat /home/docker/cp-test.txt": (8.6536987s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-032400 ssh -n multinode-032400 "sudo cat /home/docker/cp-test_multinode-032400-m03_multinode-032400.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-032400 ssh -n multinode-032400 "sudo cat /home/docker/cp-test_multinode-032400-m03_multinode-032400.txt": (8.6591346s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-032400 cp multinode-032400-m03:/home/docker/cp-test.txt multinode-032400-m02:/home/docker/cp-test_multinode-032400-m03_multinode-032400-m02.txt
E0210 12:13:55.680462   11764 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-550800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-032400 cp multinode-032400-m03:/home/docker/cp-test.txt multinode-032400-m02:/home/docker/cp-test_multinode-032400-m03_multinode-032400-m02.txt: (15.1722396s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-032400 ssh -n multinode-032400-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-032400 ssh -n multinode-032400-m03 "sudo cat /home/docker/cp-test.txt": (8.6978557s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-032400 ssh -n multinode-032400-m02 "sudo cat /home/docker/cp-test_multinode-032400-m03_multinode-032400-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-032400 ssh -n multinode-032400-m02 "sudo cat /home/docker/cp-test_multinode-032400-m03_multinode-032400-m02.txt": (8.729763s)
--- PASS: TestMultiNode/serial/CopyFile (331.17s)
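The copy matrix above repeats three forms of minikube cp, each verified with ssh + cat: host-to-node, node-to-host, and node-to-node. A condensed sketch using the node names from this run (the local destination path below is illustrative, not from this report):

    minikube -p multinode-032400 cp testdata\cp-test.txt multinode-032400:/home/docker/cp-test.txt
    minikube -p multinode-032400 cp multinode-032400:/home/docker/cp-test.txt C:\tmp\cp-test.txt
    minikube -p multinode-032400 cp multinode-032400-m02:/home/docker/cp-test.txt multinode-032400-m03:/home/docker/cp-test.txt
    minikube -p multinode-032400 ssh -n multinode-032400-m03 "sudo cat /home/docker/cp-test.txt"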

TestMultiNode/serial/StopNode (70.65s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-032400 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-032400 node stop m03: (23.3602597s)
multinode_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-032400 status
E0210 12:14:42.440749   11764 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-970000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-032400 status: exit status 7 (23.7168047s)

-- stdout --
	multinode-032400
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-032400-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-032400-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-032400 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-032400 status --alsologtostderr: exit status 7 (23.5676953s)

-- stdout --
	multinode-032400
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-032400-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-032400-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0210 12:15:00.970983    9348 out.go:345] Setting OutFile to fd 1292 ...
	I0210 12:15:01.020694    9348 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 12:15:01.020694    9348 out.go:358] Setting ErrFile to fd 1108...
	I0210 12:15:01.020694    9348 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 12:15:01.032725    9348 out.go:352] Setting JSON to false
	I0210 12:15:01.032725    9348 mustload.go:65] Loading cluster: multinode-032400
	I0210 12:15:01.032725    9348 notify.go:220] Checking for updates...
	I0210 12:15:01.033884    9348 config.go:182] Loaded profile config "multinode-032400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0210 12:15:01.033884    9348 status.go:174] checking status of multinode-032400 ...
	I0210 12:15:01.034465    9348 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400 ).state
	I0210 12:15:03.007793    9348 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:15:03.007868    9348 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:15:03.007868    9348 status.go:371] multinode-032400 host status = "Running" (err=<nil>)
	I0210 12:15:03.007936    9348 host.go:66] Checking if "multinode-032400" exists ...
	I0210 12:15:03.008111    9348 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400 ).state
	I0210 12:15:04.994190    9348 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:15:04.994190    9348 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:15:04.994190    9348 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400 ).networkadapters[0]).ipaddresses[0]
	I0210 12:15:07.343973    9348 main.go:141] libmachine: [stdout =====>] : 172.29.136.201
	
	I0210 12:15:07.344733    9348 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:15:07.344733    9348 host.go:66] Checking if "multinode-032400" exists ...
	I0210 12:15:07.353284    9348 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0210 12:15:07.353284    9348 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400 ).state
	I0210 12:15:09.281103    9348 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:15:09.281172    9348 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:15:09.281257    9348 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400 ).networkadapters[0]).ipaddresses[0]
	I0210 12:15:11.595595    9348 main.go:141] libmachine: [stdout =====>] : 172.29.136.201
	
	I0210 12:15:11.596262    9348 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:15:11.596826    9348 sshutil.go:53] new ssh client: &{IP:172.29.136.201 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-032400\id_rsa Username:docker}
	I0210 12:15:11.694176    9348 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.3407606s)
	I0210 12:15:11.704230    9348 ssh_runner.go:195] Run: systemctl --version
	I0210 12:15:11.720656    9348 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0210 12:15:11.746176    9348 kubeconfig.go:125] found "multinode-032400" server: "https://172.29.136.201:8443"
	I0210 12:15:11.746176    9348 api_server.go:166] Checking apiserver status ...
	I0210 12:15:11.754421    9348 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0210 12:15:11.786288    9348 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2209/cgroup
	W0210 12:15:11.803609    9348 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2209/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0210 12:15:11.811658    9348 ssh_runner.go:195] Run: ls
	I0210 12:15:11.819034    9348 api_server.go:253] Checking apiserver healthz at https://172.29.136.201:8443/healthz ...
	I0210 12:15:11.827142    9348 api_server.go:279] https://172.29.136.201:8443/healthz returned 200:
	ok
	I0210 12:15:11.827142    9348 status.go:463] multinode-032400 apiserver status = Running (err=<nil>)
	I0210 12:15:11.827142    9348 status.go:176] multinode-032400 status: &{Name:multinode-032400 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0210 12:15:11.827351    9348 status.go:174] checking status of multinode-032400-m02 ...
	I0210 12:15:11.828307    9348 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400-m02 ).state
	I0210 12:15:13.777422    9348 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:15:13.777422    9348 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:15:13.777422    9348 status.go:371] multinode-032400-m02 host status = "Running" (err=<nil>)
	I0210 12:15:13.777422    9348 host.go:66] Checking if "multinode-032400-m02" exists ...
	I0210 12:15:13.778244    9348 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400-m02 ).state
	I0210 12:15:15.747557    9348 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:15:15.747557    9348 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:15:15.747986    9348 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 12:15:18.061989    9348 main.go:141] libmachine: [stdout =====>] : 172.29.143.51
	
	I0210 12:15:18.062355    9348 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:15:18.062355    9348 host.go:66] Checking if "multinode-032400-m02" exists ...
	I0210 12:15:18.071975    9348 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0210 12:15:18.071975    9348 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400-m02 ).state
	I0210 12:15:19.993351    9348 main.go:141] libmachine: [stdout =====>] : Running
	
	I0210 12:15:19.993351    9348 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:15:19.993530    9348 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-032400-m02 ).networkadapters[0]).ipaddresses[0]
	I0210 12:15:22.323628    9348 main.go:141] libmachine: [stdout =====>] : 172.29.143.51
	
	I0210 12:15:22.323628    9348 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:15:22.324657    9348 sshutil.go:53] new ssh client: &{IP:172.29.143.51 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-032400-m02\id_rsa Username:docker}
	I0210 12:15:22.422865    9348 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.350842s)
	I0210 12:15:22.432036    9348 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0210 12:15:22.456406    9348 status.go:176] multinode-032400-m02 status: &{Name:multinode-032400-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0210 12:15:22.456406    9348 status.go:174] checking status of multinode-032400-m03 ...
	I0210 12:15:22.457140    9348 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-032400-m03 ).state
	I0210 12:15:24.385851    9348 main.go:141] libmachine: [stdout =====>] : Off
	
	I0210 12:15:24.385851    9348 main.go:141] libmachine: [stderr =====>] : 
	I0210 12:15:24.385924    9348 status.go:371] multinode-032400-m03 host status = "Stopped" (err=<nil>)
	I0210 12:15:24.385924    9348 status.go:384] host is not running, skipping remaining checks
	I0210 12:15:24.385924    9348 status.go:176] multinode-032400-m03 status: &{Name:multinode-032400-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (70.65s)
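A note on the exit codes above: with one worker down, minikube status deliberately exits with status 7 rather than 0, so the sequence the test runs looks like the sketch below, and any wrapper script must treat 7 as "degraded but expected":

    minikube -p multinode-032400 node stop m03
    minikube -p multinode-032400 status
    # exit status 7 while multinode-032400-m03 reports host: Stopped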

TestMultiNode/serial/StartAfterStop (176.47s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-032400 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-032400 node start m03 -v=7 --alsologtostderr: (2m23.8948709s)
multinode_test.go:290: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-032400 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-032400 status -v=7 --alsologtostderr: (32.4128774s)
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (176.47s)

TestPreload (497.56s)
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-400400 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.24.4
E0210 12:29:42.451192   11764 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-970000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-400400 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.24.4: (4m12.0275504s)
preload_test.go:52: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-400400 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-400400 image pull gcr.io/k8s-minikube/busybox: (8.171706s)
preload_test.go:58: (dbg) Run:  out/minikube-windows-amd64.exe stop -p test-preload-400400
E0210 12:33:55.694473   11764 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-550800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
preload_test.go:58: (dbg) Done: out/minikube-windows-amd64.exe stop -p test-preload-400400: (38.1163668s)
preload_test.go:66: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-400400 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperv
E0210 12:34:42.454185   11764 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-970000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0210 12:35:18.783211   11764 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-550800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
preload_test.go:66: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-400400 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperv: (2m31.5592549s)
preload_test.go:71: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-400400 image list
preload_test.go:71: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-400400 image list: (6.7448422s)
helpers_test.go:175: Cleaning up "test-preload-400400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p test-preload-400400
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p test-preload-400400: (40.9337905s)
--- PASS: TestPreload (497.56s)
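Condensing what the preload test verifies: create a cluster with preloads disabled on an older Kubernetes, pull an extra image, stop, restart with the current binary, and confirm the pulled image survived the restart:

    minikube start -p test-preload-400400 --memory=2200 --preload=false --kubernetes-version=v1.24.4 --driver=hyperv
    minikube -p test-preload-400400 image pull gcr.io/k8s-minikube/busybox
    minikube stop -p test-preload-400400
    minikube start -p test-preload-400400 --memory=2200 --driver=hyperv
    minikube -p test-preload-400400 image list
    # gcr.io/k8s-minikube/busybox should still appear in the list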

TestScheduledStopWindows (310.46s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe start -p scheduled-stop-865400 --memory=2048 --driver=hyperv
E0210 12:38:55.697539   11764 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-550800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0210 12:39:25.548804   11764 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-970000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0210 12:39:42.458722   11764 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-970000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-windows-amd64.exe start -p scheduled-stop-865400 --memory=2048 --driver=hyperv: (3m2.043476s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-865400 --schedule 5m
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-865400 --schedule 5m: (9.7439426s)
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-865400 -n scheduled-stop-865400
scheduled_stop_test.go:191: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-865400 -n scheduled-stop-865400: exit status 1 (10.0121064s)
scheduled_stop_test.go:191: status error: exit status 1 (may be ok)
scheduled_stop_test.go:54: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p scheduled-stop-865400 -- sudo systemctl show minikube-scheduled-stop --no-page
scheduled_stop_test.go:54: (dbg) Done: out/minikube-windows-amd64.exe ssh -p scheduled-stop-865400 -- sudo systemctl show minikube-scheduled-stop --no-page: (8.7532656s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-865400 --schedule 5s
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-865400 --schedule 5s: (9.7716714s)
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe status -p scheduled-stop-865400
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p scheduled-stop-865400: exit status 7 (2.2463154s)

-- stdout --
	scheduled-stop-865400
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-865400 -n scheduled-stop-865400
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-865400 -n scheduled-stop-865400: exit status 7 (2.2429894s)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-865400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p scheduled-stop-865400
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p scheduled-stop-865400: (25.6465234s)
--- PASS: TestScheduledStopWindows (310.46s)
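The scheduled-stop flow above, condensed to its essential commands; the systemd unit name and the TimeToStop go-template are taken from this run:

    minikube stop -p scheduled-stop-865400 --schedule 5m
    minikube status --format={{.TimeToStop}} -p scheduled-stop-865400
    minikube ssh -p scheduled-stop-865400 -- sudo systemctl show minikube-scheduled-stop --no-page
    minikube stop -p scheduled-stop-865400 --schedule 5s
    # shortly afterwards the profile reports host: Stopped, and status exits 7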

TestRunningBinaryUpgrade (900.41s)
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  C:\Users\jenkins.minikube5\AppData\Local\Temp\minikube-v1.26.0.2410075079.exe start -p running-upgrade-083700 --memory=2200 --vm-driver=hyperv
E0210 12:58:55.710680   11764 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-550800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
version_upgrade_test.go:120: (dbg) Done: C:\Users\jenkins.minikube5\AppData\Local\Temp\minikube-v1.26.0.2410075079.exe start -p running-upgrade-083700 --memory=2200 --vm-driver=hyperv: (6m33.1378437s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-windows-amd64.exe start -p running-upgrade-083700 --memory=2200 --alsologtostderr -v=1 --driver=hyperv
version_upgrade_test.go:130: (dbg) Done: out/minikube-windows-amd64.exe start -p running-upgrade-083700 --memory=2200 --alsologtostderr -v=1 --driver=hyperv: (7m20.7874627s)
helpers_test.go:175: Cleaning up "running-upgrade-083700" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p running-upgrade-083700
E0210 13:12:45.575823   11764 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-970000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p running-upgrade-083700: (1m5.881172s)
--- PASS: TestRunningBinaryUpgrade (900.41s)

TestKubernetesUpgrade (1219.55s)
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-202100 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=hyperv
version_upgrade_test.go:222: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-202100 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=hyperv: (5m20.2368161s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-202100
version_upgrade_test.go:227: (dbg) Done: out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-202100: (33.5700694s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-windows-amd64.exe -p kubernetes-upgrade-202100 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p kubernetes-upgrade-202100 status --format={{.Host}}: exit status 7 (2.2493152s)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-202100 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=hyperv
E0210 12:48:55.704522   11764 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-550800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0210 12:49:42.464655   11764 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-970000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
version_upgrade_test.go:243: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-202100 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=hyperv: (6m10.2144615s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-202100 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-202100 --memory=2200 --kubernetes-version=v1.20.0 --driver=hyperv
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-202100 --memory=2200 --kubernetes-version=v1.20.0 --driver=hyperv: exit status 106 (263.7372ms)

-- stdout --
	* [kubernetes-upgrade-202100] minikube v1.35.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5440 Build 19045.5440
	  - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=20385
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.32.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-202100
	    minikube start -p kubernetes-upgrade-202100 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-2021002 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.32.1, by running:
	    
	    minikube start -p kubernetes-upgrade-202100 --kubernetes-version=v1.32.1
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-202100 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=hyperv
E0210 12:56:05.562359   11764 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-970000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
version_upgrade_test.go:275: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-202100 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=hyperv: (7m32.8321514s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-202100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-202100
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-202100: (40.0297833s)
--- PASS: TestKubernetesUpgrade (1219.55s)
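The upgrade path the test walks, as plain commands; the final downgrade attempt is expected to fail fast (exit status 106, K8S_DOWNGRADE_UNSUPPORTED) without touching the cluster, with the recovery options shown in the stderr above:

    minikube start -p kubernetes-upgrade-202100 --memory=2200 --kubernetes-version=v1.20.0 --driver=hyperv
    minikube stop -p kubernetes-upgrade-202100
    minikube start -p kubernetes-upgrade-202100 --memory=2200 --kubernetes-version=v1.32.1 --driver=hyperv
    # a downgrade is refused with exit status 106:
    minikube start -p kubernetes-upgrade-202100 --memory=2200 --kubernetes-version=v1.20.0 --driver=hyperv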

TestPause/serial/Start (191.76s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-785500 --memory=2048 --install-addons=false --wait=all --driver=hyperv
pause_test.go:80: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-785500 --memory=2048 --install-addons=false --wait=all --driver=hyperv: (3m11.760482s)
--- PASS: TestPause/serial/Start (191.76s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.24s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-189000 --no-kubernetes --kubernetes-version=1.20 --driver=hyperv
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-189000 --no-kubernetes --kubernetes-version=1.20 --driver=hyperv: exit status 14 (243.4062ms)

-- stdout --
	* [NoKubernetes-189000] minikube v1.35.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5440 Build 19045.5440
	  - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=20385
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.24s)
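The MK_USAGE failure above is the guard for mutually exclusive flags; the recovery minikube itself suggests is to drop the version flag or clear any global default:

    minikube start -p NoKubernetes-189000 --no-kubernetes --kubernetes-version=1.20 --driver=hyperv
    # exits 14 (MK_USAGE); instead:
    minikube config unset kubernetes-version
    minikube start -p NoKubernetes-189000 --no-kubernetes --driver=hyperv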

TestPause/serial/SecondStartNoReconfiguration (312.59s)
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-785500 --alsologtostderr -v=1 --driver=hyperv
pause_test.go:92: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-785500 --alsologtostderr -v=1 --driver=hyperv: (5m12.5576732s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (312.59s)

TestPause/serial/Pause (7.25s)
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe pause -p pause-785500 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe pause -p pause-785500 --alsologtostderr -v=5: (7.2470732s)
--- PASS: TestPause/serial/Pause (7.25s)

TestPause/serial/VerifyStatus (11.01s)
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe status -p pause-785500 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p pause-785500 --output=json --layout=cluster: exit status 2 (11.0127683s)

-- stdout --
	{"Name":"pause-785500","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-785500","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (11.01s)
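Worth noting for automation: in the cluster-layout JSON above, a paused profile reports StatusCode 418 ("Paused") for the apiserver and 405 ("Stopped") for the kubelet, and the status command itself exits with status 2, so that exit code means "paused", not "broken":

    minikube status -p pause-785500 --output=json --layout=cluster
    # exit status 2 while the cluster is paused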

TestPause/serial/Unpause (7.21s)
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p pause-785500 --alsologtostderr -v=5
pause_test.go:121: (dbg) Done: out/minikube-windows-amd64.exe unpause -p pause-785500 --alsologtostderr -v=5: (7.2101884s)
--- PASS: TestPause/serial/Unpause (7.21s)

TestPause/serial/PauseAgain (7.22s)
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe pause -p pause-785500 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe pause -p pause-785500 --alsologtostderr -v=5: (7.2222734s)
--- PASS: TestPause/serial/PauseAgain (7.22s)

TestPause/serial/DeletePaused (45.30s)
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p pause-785500 --alsologtostderr -v=5
E0210 12:51:58.796901   11764 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\addons-550800\\client.crt: The system cannot find the path specified." logger="UnhandledError"
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p pause-785500 --alsologtostderr -v=5: (45.2969508s)
--- PASS: TestPause/serial/DeletePaused (45.30s)

TestPause/serial/VerifyDeletedResources (18.77s)
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (18.7628564s)
--- PASS: TestPause/serial/VerifyDeletedResources (18.77s)

TestStoppedBinaryUpgrade/Setup (0.88s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.88s)

TestStoppedBinaryUpgrade/Upgrade (843.02s)
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  C:\Users\jenkins.minikube5\AppData\Local\Temp\minikube-v1.26.0.900491658.exe start -p stopped-upgrade-063200 --memory=2200 --vm-driver=hyperv
E0210 12:54:42.469062   11764 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-970000\\client.crt: The system cannot find the path specified." logger="UnhandledError"
version_upgrade_test.go:183: (dbg) Done: C:\Users\jenkins.minikube5\AppData\Local\Temp\minikube-v1.26.0.900491658.exe start -p stopped-upgrade-063200 --memory=2200 --vm-driver=hyperv: (7m20.3223618s)
version_upgrade_test.go:192: (dbg) Run:  C:\Users\jenkins.minikube5\AppData\Local\Temp\minikube-v1.26.0.900491658.exe -p stopped-upgrade-063200 stop
version_upgrade_test.go:192: (dbg) Done: C:\Users\jenkins.minikube5\AppData\Local\Temp\minikube-v1.26.0.900491658.exe -p stopped-upgrade-063200 stop: (36.287112s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-windows-amd64.exe start -p stopped-upgrade-063200 --memory=2200 --alsologtostderr -v=1 --driver=hyperv
version_upgrade_test.go:198: (dbg) Done: out/minikube-windows-amd64.exe start -p stopped-upgrade-063200 --memory=2200 --alsologtostderr -v=1 --driver=hyperv: (6m6.4124684s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (843.02s)
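The stopped-binary upgrade pattern, condensed: provision and stop with the legacy binary (v1.26.0 here, which still uses --vm-driver), then start the same profile with the binary under test; the versioned installer path is the temp file the harness downloaded for this run:

    C:\Users\jenkins.minikube5\AppData\Local\Temp\minikube-v1.26.0.900491658.exe start -p stopped-upgrade-063200 --memory=2200 --vm-driver=hyperv
    C:\Users\jenkins.minikube5\AppData\Local\Temp\minikube-v1.26.0.900491658.exe -p stopped-upgrade-063200 stop
    out\minikube-windows-amd64.exe start -p stopped-upgrade-063200 --memory=2200 --driver=hyperv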

TestStoppedBinaryUpgrade/MinikubeLogs (8.76s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-windows-amd64.exe logs -p stopped-upgrade-063200
version_upgrade_test.go:206: (dbg) Done: out/minikube-windows-amd64.exe logs -p stopped-upgrade-063200: (8.7553306s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (8.76s)

Test skip (33/214)

TestDownloadOnly/v1.20.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.32.1/cached-images (0s)
=== RUN   TestDownloadOnly/v1.32.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.32.1/cached-images (0.00s)

TestDownloadOnly/v1.32.1/binaries (0s)
=== RUN   TestDownloadOnly/v1.32.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.32.1/binaries (0.00s)

TestDownloadOnlyKic (0s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:698: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:972: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false windows amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DashboardCmd (300.06s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:922: (dbg) daemon: [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-970000 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:927: (dbg) stopping [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-970000 --alsologtostderr -v=1] ...
helpers_test.go:502: unable to terminate pid 9056: Access is denied.
--- SKIP: TestFunctional/parallel/DashboardCmd (300.06s)

TestFunctional/parallel/DryRun (5.03s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:991: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-970000 --dry-run --memory 250MB --alsologtostderr --driver=hyperv
functional_test.go:991: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-970000 --dry-run --memory 250MB --alsologtostderr --driver=hyperv: exit status 1 (5.0263118s)

-- stdout --
	* [functional-970000] minikube v1.35.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5440 Build 19045.5440
	  - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=20385
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true

-- /stdout --
** stderr ** 
	I0210 10:50:47.651104    4852 out.go:345] Setting OutFile to fd 1756 ...
	I0210 10:50:47.730687    4852 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 10:50:47.730687    4852 out.go:358] Setting ErrFile to fd 1768...
	I0210 10:50:47.730687    4852 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 10:50:47.750694    4852 out.go:352] Setting JSON to false
	I0210 10:50:47.756065    4852 start.go:129] hostinfo: {"hostname":"minikube5","uptime":185987,"bootTime":1738998660,"procs":192,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.5440 Build 19045.5440","kernelVersion":"10.0.19045.5440 Build 19045.5440","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0210 10:50:47.756303    4852 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0210 10:50:47.760263    4852 out.go:177] * [functional-970000] minikube v1.35.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5440 Build 19045.5440
	I0210 10:50:47.767392    4852 notify.go:220] Checking for updates...
	I0210 10:50:47.769403    4852 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0210 10:50:47.771413    4852 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0210 10:50:47.774401    4852 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0210 10:50:47.777398    4852 out.go:177]   - MINIKUBE_LOCATION=20385
	I0210 10:50:47.779399    4852 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0210 10:50:47.782396    4852 config.go:182] Loaded profile config "functional-970000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0210 10:50:47.783399    4852 driver.go:394] Setting default libvirt URI to qemu:///system

** /stderr **
functional_test.go:997: skipping this error on HyperV till this issue is solved https://github.com/kubernetes/minikube/issues/9785
--- SKIP: TestFunctional/parallel/DryRun (5.03s)

TestFunctional/parallel/InternationalLanguage (5.04s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1037: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-970000 --dry-run --memory 250MB --alsologtostderr --driver=hyperv
functional_test.go:1037: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-970000 --dry-run --memory 250MB --alsologtostderr --driver=hyperv: exit status 1 (5.0348145s)

-- stdout --
	* [functional-970000] minikube v1.35.0 sur Microsoft Windows 10 Enterprise N 10.0.19045.5440 Build 19045.5440
	  - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=20385
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true

-- /stdout --
** stderr ** 
	I0210 10:50:42.612400    3948 out.go:345] Setting OutFile to fd 1612 ...
	I0210 10:50:42.686974    3948 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 10:50:42.686974    3948 out.go:358] Setting ErrFile to fd 1672...
	I0210 10:50:42.686974    3948 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0210 10:50:42.711966    3948 out.go:352] Setting JSON to false
	I0210 10:50:42.715960    3948 start.go:129] hostinfo: {"hostname":"minikube5","uptime":185982,"bootTime":1738998660,"procs":191,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.5440 Build 19045.5440","kernelVersion":"10.0.19045.5440 Build 19045.5440","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0210 10:50:42.715960    3948 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0210 10:50:42.719955    3948 out.go:177] * [functional-970000] minikube v1.35.0 sur Microsoft Windows 10 Enterprise N 10.0.19045.5440 Build 19045.5440
	I0210 10:50:42.722958    3948 notify.go:220] Checking for updates...
	I0210 10:50:42.725959    3948 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0210 10:50:42.728974    3948 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0210 10:50:42.731979    3948 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0210 10:50:42.733964    3948 out.go:177]   - MINIKUBE_LOCATION=20385
	I0210 10:50:42.735964    3948 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0210 10:50:42.738965    3948 config.go:182] Loaded profile config "functional-970000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0210 10:50:42.739967    3948 driver.go:394] Setting default libvirt URI to qemu:///system

** /stderr **
functional_test.go:1042: skipping this error on HyperV till this issue is solved https://github.com/kubernetes/minikube/issues/9785
--- SKIP: TestFunctional/parallel/InternationalLanguage (5.04s)

TestFunctional/parallel/MountCmd (0s)
=== RUN   TestFunctional/parallel/MountCmd
=== PAUSE TestFunctional/parallel/MountCmd

=== CONT  TestFunctional/parallel/MountCmd
functional_test_mount_test.go:57: skipping: mount broken on hyperv: https://github.com/kubernetes/minikube/issues/5029
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:567: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:230: The test WaitService/IngressIP is broken on hyperv https://github.com/kubernetes/minikube/issues/8381
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:258: skipping: access direct test is broken on windows: https://github.com/kubernetes/minikube/issues/8304
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes (0s)
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:84: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopUnix (0s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:76: test only runs on unix
--- SKIP: TestScheduledStopUnix (0.00s)

TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:39: skipping due to https://github.com/kubernetes/minikube/issues/14232
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)
